I also use tailscale and had a similar setup with nomad: https://blog.markpash.me/posts/tailscale-nomad-job/
But since then there have been some more developer-focused additions to Tailscale in the form of tsnet, which can be used to create a listening Tailscale socket in your Go app: https://pkg.go.dev/tailscale.com/tsnet
I wrote a basic TCP proxy with it to expose containerised applications to my tailnet. The drawback is that in order to make outbound connections from your app/service to your tailnet, you need to run their daemon to enable the SOCKS proxy; this is where the OP’s solution works better. If only Tailscale provided official container images, so we weren’t beholden to a third-party repo, which I’ve found to be unstable in the past.
My TCP proxy that I use as a sidecar: https://github.com/markpash/tailscale-sidecar
A simple example app that uses tsnet
to listen directly on your tailnet: https://github.com/tailscale/tailscale/blob/v1.14.3/tsnet/example/tshello/tshello.go
No problem :) Your blogs are a great example of how one might actually integrate their software/systems with their product. Perhaps these posts will result in more priority to developer-focused product features on their side. It definitely helps that the client source code is open and importable.
Just looked at your sidecar project. It’s basically the same as running the Tailscale daemon but without the possibility for outgoing connections. But that might be a feature, because I might want to publish a service to the Tailscale network, but don’t want it to be able to access nodes on the network itself.
GoBlog in and of itself would’ve been a more interesting submission, IMO. Looks like it supports ActivityPub, neat!
This was at 0 points, but without a hide or a flag? How is that possible on Lobsters, since there is no downvoting?
Stories receive an implicit upvote from the submitter; they may have clicked the arrow again to remove the upvote.
This post has a lot of arguments that basically boil down to:
If your personal website turns into an “app”, you’re doing it wrong.
It makes fun of 2 sites:
Your serverless, headless, Micropub-powered personal website is unreliable precisely because you chose to introduce unnecessary complexity.
and
Here’s a recent example of a website, made up of documents, that decided to re-invent the wheel: https://slc.is/.
The first site was submitted here a month ago, got 10 upvotes, and no massive pushback. It’s clearly a webdev demonstrating new technology: partly as a fun thing to do, partly as a tech demo, and partly as advertising.
The second has a long post about the tech behind it: https://slc.is/#Creating%20My%20Site. To me, this reads as someone who is as passionate about blogging as the author of the linked post. They should be celebrating that others want to publish on the web. But no, they’re doing it wrong. That kind of attitude grinds my gears.
Look, I’m sympathetic to the idea that people should host their own content. I mean, I do too! But this kind of gatekeeping isn’t helping that. If you create your own way of publishing, from scratch, you’ll get pushback from SSG purists. What a sad state of affairs.
We don’t need to call out everything we see wrong. We can accept a technology for what it is, and recommend only using it in certain conditions.
That’s why people didn’t push back. The discussion would get very stale, quickly.
They are working on making it possible to move the development to Gitea itself; it seems some features they need are still missing.
I never understood this, when they forked from Gogs it was already a useful and reliable platform, they could have self-hosted from the beginning.
The dual reasons the last time I talked with the devs were 1) not wanting to lose the ease of contributions from being on GitHub, and 2) concerns that Gitea’s code reviews lacked too many features (e.g., line comments). The former I’m very sympathetic to, but at this point, they really need to fix the latter.
for {
	// Check if context canceled or ShutdownFunc finished
	select {
	case <-c.Done():
		// Context canceled, return
		return
	default:
		if done {
			// ShutdownFunc finished, return
			return
		}
		// Otherwise continue
	}
}
looks like a spin wait and will max out a core until the function is done or the context is canceled. Or am I reading it wrong? I try to avoid these, even if they are unlikely to last very long. E.g. a web server being attacked slowloris-style might take a long time to shut down, and this spin wait would waste a lot of CPU resources on a server that might already be starved because of the attack. Maybe better not to have a default in the select, and to use a channel instead of a done bool.
Should I add a time.Sleep() with a few milliseconds maybe? 🤔
Feel free to send me a better solution 😊
Here is a diff:
diff --git a/shutdown.go b/shutdown.go
index 2d02836..35d87c9 100644
--- a/shutdown.go
+++ b/shutdown.go
@@ -29,25 +29,20 @@ type ShutdownFunc func()
 // Internal method
 func (f ShutdownFunc) execute(c context.Context) {
-	done := false
+	done := make(chan struct{})
 	// Execute ShutdownFunc in goroutine and set done = true
 	go func() {
 		f()
-		done = true
+		close(done)
 	}()
-	for {
-		// Check if context canceled or ShutdownFunc finished
-		select {
-		case <-c.Done():
-			// Context canceled, return
-			return
-		default:
-			if done {
-				// ShutdownFunc finished, return
-				return
-			}
-			// Otherwise continue
-		}
+	// Check if context canceled or ShutdownFunc finished
+	select {
+	case <-c.Done():
+		// Context canceled, return
+		return
+	case <-done:
+		// ShutdownFunc finished, return
+		return
 	}
 }
One of the things I like to add to these sort of things is the ability to force kill it by repeatedly sending signals; otherwise, a hanging shutdown process is a real pain.
For example:
^C 23:36:43 INFO: Waiting for background tasks to finish; send HUP, TERM, or INT twice to force kill (may lose data!)
1 tasks: test-task ^COne more to kill…
1 tasks: test-task ^CForce killing
It also shows which tasks are running, which is kinda nice.
I’ve been meaning to clean this up and extract it to a package for ages. Either way, just some ideas for future improvements to your package :-)
One of the nice things about triggering shutdown on a certain rate of signal delivery is that you also get a small amount of protection from accidental kills.
@arp242 Done!
Wow, the text-to-speech support in Firefox (Linux) is completely underwhelming; I tried it for the first time with your “read aloud” button. I can’t even understand it, it’s that bad. I’m sure I heard better text-to-speech software 20 years ago. What is going on here?
Edit: on ungoogled chromium the button doesn’t even work :-P
That’s probably espeak on Linux and it sounds horrible indeed. On my Android phone or on Windows, it sounds much much better.
Have you ever tried the speak feature in Firefox reader mode on Linux? Same problem there.
I switched to a logitech mx keys last year and I’m very happy with it. I did the mechanical keys thing for a bit, but it’s not for me.
I’m mariusor@metalhead.club. I mostly talk about my ActivityPub related projects.
From the people I recommend for a follow:
Daniel Stenberg of cURL fame.
Sergey Bugaev works on wayland stuff and some interesting esoteric platforms.
Drew DeVault - a lot of knowledge about various things, very little empathy for people who don’t share his opinions.
Other “open” but effectively closed systems:
SSH: Only OpenSSH matters. Even MS gave up and forked OpenSSH for Windows (and is now upstreaming their changes).
Version Control: Only one exists in any count that matters: Git, despite being mostly user-hostile for beginners/new people; even “seasoned” developers need cheat sheets.
Just to play devil’s advocate, there are several different implementations of Git, see: https://en.wikipedia.org/wiki/Git#Implementations, for example. So, in some sense, Git as a protocol for version control has been at least moderately successful.
Yes, there are tons of various implementations of the Git protocol in various languages, but… do any of them have any use?
Microsoft’s version they use in-house is probably the only one that gets any serious use, and they are upstreaming the stuff that applies.
Pretty much all the other third-party implementations I’ve ever seen are either libraries that get some use (but usually use Git’s C code under the hood and are mostly wrappers for a given language) or are implementations by random people trying to understand how in the world Git works, since the Git UI is basically the thinnest possible wrapper around the protocol.
FreeBSD just threw in the towel and moved to Git, because they couldn’t convince anyone to use SVN any longer.
I would venture to guess that finding any startup or tech company that doesn’t use Git would be nearly impossible. Outside of that universe, there is a little usage, but that’s just because they haven’t caught up yet. I imagine they will all move to Git in the near-ish future, creating even more of a VCS monoculture.
My personal favorite VCS is Fossil, but other than SQLite, it gets basically no use.
It makes me sad, probably most developers would be better off with SVN/HG/Fossil as the UI is sane and pretty easy to understand and harder to get wrong.
It makes me sad, probably most developers would be better off with SVN/HG/Fossil as the UI is sane and pretty easy to understand and harder to get wrong.
I’m fossil-curious, but no history rewrite/rebase is a dealbreaker for me. Of course, it works for SQLite’s use case, so they’re not obligated to do a thing to bend to my whims.
I get people wanting to change history to clean stuff up, but that’s not what actually happened, you commit messy, you get messy. Auditors, governments, lawyers, etc want actual history.
That is an anti-feature for their use-case. They have contracts in defense/aerospace (I forget which) that require the source be non-fungible. I’m too lazy to look up the link where they talked about it, but basically they can’t change history, to keep auditors, contracts, etc. happy. Sure, it could be done outside of a VCS, or it could be done in a different way, but that sounds like MORE work… I’m lazy :)
I learned SVN at my apprenticeship because we use it in the company and also learned it again at university, because we used it there as well. 😅
I started coding when I was 14, around 2007 or so. [..] Fast-forward 3 years I got my first few gigs as a web developer. By then I was pretty good at HTML and CSS already, had dabbled enough with PHP to know my way around of most sticky problems I would find myself in and while I didn’t really know much of vanilla JavaScript, it was okay, because everyone used almost exclusively jQuery anyway.
Is there any other industry where a 17-year-old who “dabbled enough” can land a job? I can’t shake the feeling that this is the real problem with a significant part of the industry: 18-year-olds who write much of the code, guided by 22-year-old “senior developers”, all led by a 24-year-old CTO.
I don’t think there’s anything wrong with React or NPM or most other frontend things, but you got to know when to apply it and when not to apply it. Youthful enthusiasm and hubris leads to the “JS framework of the week”-syndrome. Experience is not a substitute for talent, but talent is also not a substitute for experience.
I think that an industry without credentialism is a great thing. There is generally too much credentialism in the world. Not having credentials does not preclude learning and mastery.
I don’t think that “let’s not rely too much on credentials” and “let’s not have teenagers and people in their early twenties run the company” are incompatible views.
I don’t think age is the problem. It is more a lack of experience and a lack of education. Aren’t we always hearing about people who only attend a coding bootcamp and immediately find a job? How good is their code quality?
don’t think age is the problem. It is more, lack of experience and lack of education.
But those are strongly correlated, no?
I do suspect that age in itself does play a part; certainly if I look at myself I am now, at the age of 35, not the same person that I was when I was 17 or 25. I am more patient, more aware of risks, more humble, and less likely to be swept along in a wave of enthusiasm. Generally I think I’m more thoughtful about things (although I’m hardly perfect, and in 20 years when I’m 55 I’ll be able to list aspects where I’ve improved over 35-year old me – at least, I hope I will).
I’ve worked with a few older (30s) people who did a bootcamp and they were generally okay; their code wasn’t always perfect, but that’s okay as long as they keep learning and developing.
I think guidance is key here; there is nothing at all wrong with a 17-year old or bootcamper being employed as a programmer as such, provided they are guided by more experienced programmers. It’s this part that is often missing.
I agree, but you are conflating two things. I started programming when I was 7, so by the time I was 18 I had 11 years of experience. There are a lot of fields where that amount of experience, even if it’s as an amateur, will get you a job. 18-year-old me was painfully immature, however, and anyone who would have offered him a job really needs to re-examine their hiring process.
On the technical side, part of the problem with 18-year-old me was the problem with any autodidact. There were massive gaps in my knowledge where I didn’t realise the knowledge even existed. A few years at a university that focused on theory and hanging out with systems programmers on the side helped a lot there: the main value of a university education is that it gives you a guided tour of your ignorance. It doesn’t fix your ignorance but it does give you the tools to fix the bits that you discover you need to fix and it does show you what you could learn.
I think that’s a big part of the problem with the industry. We have a huge number of people who have absolutely no idea of the scope of their ignorance. The biggest difference between 18-year-old me and me is that I am no longer surprised to find there’s a big bit of computer science that I don’t know anything about. Until a few years ago, I didn’t know that effects systems or flow-sensitive type systems existed. Now that I do, I’ve learned that a bunch of properties that I’d been trying to enforce dynamically can be statically verified.
…the main value of a university education is that it gives you a guided tour of your ignorance.
I’m going to steal the shit out of this quote. You nailed it. I learned to program when I was 10. Much later (after college, actually), when I decided I wanted to do this for a living, I actually went back to school because I realized (through conversations with friends) that there were a ton of things I didn’t even know I didn’t know. Fast-forward a couple years and I was a much, much more effective software developer.
I think the crux of the matter is that “coding” and “developing a product” are not the same things (I use “product” in the broadest sense; it can also be something like Vim or Lobsters). Coding requires pure technical chops and talent, whereas developing a product requires much more, and many of those skills can’t easily be taught in a course.
A teenager can find a clever exploit in some application: this requires just talent. But actually writing an application and maintaining it for 20 years takes much more than just an understanding of the technical parts.
Going back to this story, this seems roughly the problem in JS (or at least one of them): there are many very smart and talented people involved, many of whom are undoubtedly smarter than I am, but a sizeable number of them also don’t seem to have the “product developing skills” that are required to build an ecosystem that’s not the pain that it is today.
I know a few designers who started very early, but it’s not too common. I have a friend who started beekeeping at the age of 16. Most jobs where you can start early are usually not very knowledge-heavy and are more based around the ability to do it. Now the question is: is programming knowledge-heavy, and should it be?
I only use Cloudflare for their DNS hosting and the markup-free registrar. For real CDN uses, there is BunnyCDN.
This has already been discussed publicly by Satya/some other exec at BUILD. Microsoft would fully open-source it if the IP allowed, but the IP doesn’t allow for it.
I think one of two scenarios might pan out:
That you will see large portions of Windows open sourced, and the IP-encumbered parts replaced with open source-friendly components.
Microsoft puts Windows into maintenance mode and jumps to Linux as the basis for future OSes, with open, multi-platform APIs.
Frankly, at this point, I don’t know which is more likely.
I’ve wondered about number (2) for a while now. If they created a really nice desktop environment and GUI toolkit, I bet people would adopt it like crazy, particularly given the messes that have been made of GTK and Qt.
Linux desktop use is maybe 1-2%. Why would they care about crazy adoption among that user base? Microsoft’s real competition are Android and iOS.
Sorry, my assumption would be that MS would bring a big chunk of its existing user base to (its own version of) Linux. In other words, they’d ship “Windows 11” and it would be a Linux distro under the hood the same way Apple moved to a Unix with OS X. Presumably they’d ship a compatibility layer like Apple did with Rosetta. I don’t think it will happen, there’s just too much legacy “stuff” there, but it’s an interesting thing to ponder.
I agree that it’s nice to fantasize about the idea. Though it would be a very large effort, first I assume that the Windows libraries/APIs are probably very much reliant on the Windows kernel. Secondly, Linux does not really have the driver model that hardware manufacturers are used to.
On the other hand, they could just start submitting code and tests to Wine. If they made it bug-for-bug compatible with Windows, they wouldn’t even need to port Windows to the Linux kernel. I am really impressed by how well Proton works for games; I can only imagine what Wine could be as a base for non-game applications if Microsoft invested in it.
I think in practice they will just skim profit from Windows 10 as long as they can, while extending their cross-platform strategy for applications (e.g. see the recent announcement of the Electron-based Outlook). Also, it seems that they want to make another attempt at a ChromeOS-like Windows with Windows 10X.
See Betteridge’s law of headlines:
Any headline that ends in a question mark can be answered by the word no.
The cover story of the January issue of the CACM was Does Facebook Use Sensitive Data for Advertising Purposes?.
The linked post doesn’t disagree with you.
can we see here that Microsoft is releasing more and more parts of Windows as open source?
Windows will probably remain a proprietary product for some time, but I can imagine that the trend of releasing more and more code will continue
This take seems quite reasonable.
By ‘no’, do you mean:
:^)
I always wonder how much all the privacy changes going into Firefox affect measured market share. Also adblock usage, which I’d (blindly) assume to be higher on Firefox than Chrome.
Mozilla has been placing ads in the German subway. (I’ve seen it first in Hamburg, but I’ve also seen it in Cologne, Berlin, and Munich.) It says in German: “This ad has no clue about who you are and where you’re coming from. Online trackers do. Block them! And protect your privacy. With Firefox.” (Not my tweet, but searching for “firefox werbung u-bahn” yielded this tweet)
I feel that Mozilla is going all in on privacy. (Context: Germany is culturally a very privacy-conscious society, partly due to its past. It’s also one of the countries with the highest usage of Firefox.)
Firefox isn’t a particularly aggressive browser on privacy, though; Safari and Brave are much further ahead on this and have been for a long time. I think at this point Mozilla’s claims to the contrary are false advertising - possibly literally, given that they apparently have a physical marketing campaign running in Germany. Even the big feature Mozilla is trumpeting in this release has already been implemented by Chrome!
While I think privacy is a big motivator for lots of people and could be a big selling point of Firefox, I think consumers correctly see that Mozilla is not especially strong on privacy. Anyway, I don’t see this realistically arresting the collapse in Firefox’s market share, which has fallen by something like 10% in the last six months alone (i.e. from 4.26% to 3.77%). On Mozilla’s current course they will probably fall to sub-1% market share in the next couple of years.
You can dismiss this comment as biased, but I want to share my perspective as someone with a keen interest in strict privacy protections who also talks to the relevant developers first-hand. (I work on Security at Mozilla, not Privacy).
Firefox has had privacy protections like Tracking Protection, Enhanced Tracking Protection, and First Party Isolation for a very, very long time. If you want aggressive privacy, you will always have to seek it out for yourself. It’s seldom in the defaults. And regardless of how effective that is, Mozilla wants to serve all users, not just techies.
To serve all users, there’s a balance to strike with site breakage. Studies have shown that the more websites break, the less likely it is that users are going to accept the protection as a useful mechanism. In the worst case, the user will switch to a different browser that “just works”, but we’ve essentially done them a disservice. By being super strict, a vast amount of users might actually get less privacy.
So, the hard part is not being super strict on privacy (which Brave can easily do, with their techie user base), but making sure it works for your userbase. Mozilla has been able to learn from Safari’s “Intelligent Tracking Protection”, but it’s not been a pure silver bullet ready for reuse either. Safari also doesn’t have to cave in when there’s a risk of market share loss, given that they control the browser market share on iOS so tightly (aside: every browser on iOS has to use a WebKit webview. Bringing your own rendering engine is disallowed. Chrome for iOS and Firefox for iOS are using Webkit webviews)
The road to a successful implementation required many iterations, easy “report failure” buttons and lots of baking time with technical users in Firefox Beta to support major site breakage and produce meaningful bug reports.
collapse in Firefox’s market share which is reduced by something like 10% in the last six months alone (ie: from 4.26% to 3.77%)
On desktop it’s actually increased: from 7.7% last year to 8.4% this year. A lot of the decrease in total web users is probably attributable to the increase in mobile users.
Does this matter? I don’t know; maybe not. But things do seem a bit more complex than just a single 2-dimensional chart. Also, this is still millions of people: more than many (maybe even most) popular GitHub projects.
That’s reassuring in a sense but also baffling for me as Firefox on mobile is really good and can block ads via extensions so I really feel like if life was fair it would have a huge market share.
And a lot of Android phones name Chrome just “Browser”; you really need to know that there’s such a thing as “Firefox” (or indeed, any other browser) in the first place. Can’t install something you don’t know exists. This is essentially the same as the whole Windows/IE thing back in the day, eventually leading to the browserchoice.eu thing.
On iOS you couldn’t even change the default browser until quite recently, and you’re still stuck with the Safari render engine of course. As far as I can tell the only reason to run Firefox on macOS is the sync with your desktop if you use Firefox.
Also, especially when looking at world-wide stats, you need to keep in mind that not everyone is from western countries. In many developing countries people are connected to the internet (usually on mobile only) and are, on average, less tech-savvy, and concepts such as privacy as we have them are also a lot less well known, partly for cultural reasons, partly for educational reasons (depending a bit on the country). If you talk to a Chinese person about the Great Firewall and the like, they usually don’t really see a problem with it. It’s hard to overstate how big the cultural divide can be.
Or, a slightly amusing anecdote to illustrate this: I went on a Tinder date last year (in Indonesia), and at some point she asked me what my religion was. I said that I have no religion. She just started laughing like I said something incredibly funny. Then she then asked which God I believe in. “Well, ehh, I don’t really believe in any God”. I thought she was going to choke on laughter. Just the very idea that someone doesn’t believe in God was completely alien to her; she asked me all sorts of questions about how I could possibly not have a religion 🤷 Needless to say, I don’t talk much about my religious views here (also, because blasphemy is illegal and people have been fined and even jailed over very minor remarks). Of course, this doesn’t describe all Indonesians; I also know many who hate all this religious bullshit here (those tend to be the fun ones), but it’s not the standard attitude.
So talking about privacy on the internet and “software freedom as in free speech” is probably not too effective in places where you don’t have privacy and free speech in the first place, and where these values don’t really exist in the public consciousness, which is the majority of the world (in varying degrees).
And a lot of Android phones name Chrome just “Browser”; you really need to know that there’s such a thing as “Firefox” (or indeed, any other browser) in the first place. Can’t install something you don’t know exists. This is essentially the same as the whole Windows/IE thing back in the day, eventually leading to the browserchoice.eu thing.
Yes. And the good thing is: the EU commission is at it again. Google has been fined in 2018. Actually, new Android devices should now ask the user about the browser.
The self-destructing cookies plugin is the thing that keeps me on Firefox on Android. It’s the first sane cookie policy I’ve ever seen: when you leave a page, cookies are moved aside. Next time you visit, all of the cookies are gone. If you lost some state that you care about (e.g. a persistent login), there’s an undo button to bring the cookies back, and at the same time you can add the site to a list that’s allowed to leave persistent cookies. I wish all browsers would make this the default policy out of the box.
e-mail has a lot of legacy cruft. Regardless of the technical merits of e-mail or Telegram or Delta Chat, Signal, matrix.org or whatever, what people need to be hearing today is “WhatsApp and Facebook Messenger are unnecessarily invasive. Everyone is moving to X.” If there isn’t a clear message on what X is, then people will just keep on using WhatsApp and Facebook Messenger.
It seems clear to me that e-mail is not the frontrunner for X, so by presenting it as a candidate for replacing WhatsApp and Facebook Messenger, I think the author is actually decreasing the likelihood that most people will migrate to a better messaging platform.
My vote is for Signal. It has good clients for Android and iOS and it’s secure. It’s also simple enough that non-technical people can use it comfortably.
Signal is a silo and I dislike silos. That’s why I post on my blog instead of Twitter. What happens when someone buys Signal, the US government forces Signal to implement backdoors or Signal runs out of donation money?
Signal isn’t perfect. My point is that Signal is better than WhatsApp and that presenting many alternatives to WhatsApp is harmful to Signal adoption. If Signal can’t reach critical mass like WhatsApp has it will fizzle out and we will be using WhatsApp again.
If Signal can’t reach critical mass like WhatsApp has it will fizzle out
Great! We don’t need more silos.
and we will be using WhatsApp again.
What about XMPP or Matrix? They can (and should!) be improved so that they are viable alternatives.
(Majority of) People don’t care about technology (how), they care about goal (why).
They don’t care if it’s Facebook, Whatsapp, Signal, Email, XMPP, they want to communicate.
Yeah, I think the point of the previous poster was that these systems should be improved to a point where they’re just really good alternatives, which includes branding and the like. Element (formerly riot.im) has the right idea on this IMHO, instead of talking about all sorts of tech details and presenting 500 clients like xmpp.org, it just says “here are the features element has, here’s how you can use it”.
Of course, die-hard decentralisation advocates don’t like this. But this is pretty much the only way you will get any serious mainstream adoption as far as I can see. Certainly none of the other approaches that have been tried over the last ~15 years worked.
…instead of talking about all sorts of tech details and presenting 500 clients like xmpp.org, it just says “here are the features element has, here’s how you can use it”.
Same problem with all the decentralized social networks and microblogging services. I was on Mastodon for a bit. I didn’t log in very often because I only followed a handful of privacy advocate types since none of my friends or other random people I followed on Twitter were on it. It was fine, though. But then they shut down the server I was on and apparently I missed whatever notification was sent out.
People always say crap like “What will you do if Twitter shuts down?”. Well, so far 100% of the federated / distributed social networks I’ve tried (I also tried that Facebook clone from way back when and then Identi.ca at some point) have shut down in one way or another and none of the conventional ones I’ve used have done so. I realize it’s a potential problem, but in my experience it just doesn’t matter.
The main feature that cannot be listed in good faith, and which is the one everybody cares about, is: “It has all my friends and family on it”.
I know it’s just a matter of critical mass and if nobody switches this will never happen.
Sure, but we’re not the majority of people.. and we shouldn’t be choosing yet another silo to promote.
XMPP and (to a lesser extent) Matrix do need to be improved before they are viable alternatives, though. Signal is already there. You may feel that ideological advantages make up for the UI shortcomings, but very few nontechnical users feel the same way.
Have you tried joining a busy Matrix channel from a federated homeserver? It can take an hour. I think it needs some improvement too.
Oh, definitely. At least in the case of Matrix it’s clear that (1) the developers regard usability as an actual goal, (2) they know their usability could be improved, and (3) they’re working on improving it. I admit I don’t follow the XMPP ecosystem as closely, so the same could be true there, but… XMPP has been around for 20 years, so what’s going to change now to make it more approachable?
[…] it will fizzle out
Great! We don’t need more silos.
Do you realize you’re cheering for keeping the WhatsApp silo?
Chat platforms have a strong network effect. We’re going to be stuck with Facebook’s network for as long as other networks are fragmented due to people disagreeing which one is the perfect one to end all other ones, and keep waiting for a pie in the sky, while all of them keep failing to reach the critical mass.
Do you realize you’re cheering for keeping the WhatsApp silo?
Uh, not sure how you pulled that out of what I said, but I’m actually cheering for the downfall of all silos.
I mean that by opposing the shift to the less-bad silo you’re not actually advancing the no-silo case, but keeping the status quo of the worst-silo.
There is currently no decentralized option that is secure, practical, and popular enough to be adopted by mainstream consumers in numbers that could beat WhatsApp.
If the choice is between WhatsApp and “just wait until we make one that is”, it means keeping WhatsApp.
They can be improved so that they are viable alternatives.
Great! We don’t need more silos.
Domain-name federation is a half-assed solution to data portability. Domain names basically need to be backed by always-on servers, not everybody can have one, and not everybody should. Either make it really P2P (Scuttlebutt?) or don’t bother.
I sadly agree, which is why logically I always end up recommending Signal as ‘the best of a bad bunch’.
I like XMPP, but for true silo-avoidance you need to run your own server (or at least have someone run it under your domain, so you can move away). This sucks. It’s sort of the same with Matrix.
The only way around this is real p2p as you say. So far I haven’t seen anything that I could recommend to former whatsapp users on this front however. I love scuttlebutt but I can’t see it as a good mobile solution.
Signal really needs a “web.signal.com”; typing on phones sucks, and the desktop app is ugh. I can’t write my own app either, so I’m stuck with two bad options.
This is actually a big reason I like Telegram: the web client is pretty good.
I can’t write my own app either so I’m stuck with two bad options.
FWIW I’m involved with Whisperfish, the Signal client for Sailfish OS. There has been a constant worry about 3rd party clients, but it does seem like OWS has loosened its policy.
The current Whisperfish is written in Rust, with separate libraries for the protocol and service. OWS is also putting work into their own Rust library, which we may switch to.
Technically you can, and the risk should be quite minimal. At the end of the day, OWS doesn’t support these efforts, but as long as you don’t make a fool of them, availability and use increase their brand value.
Don’t want to know what happens if someone writes a horrible client and steps on their brand, so let’s be careful out there.
Oh right; that’s good to know. I just searched for “Signal API” a while ago and nothing really obvious turned up, so I assumed it’s either impossible or hard/hackish. To be honest I didn’t look very deeply into it, since I don’t really care all that much about Signal 😅 It’s just a single not-very-active chat group.
Fair enough, sure. An API might sound too much like some raw web thing - it is based on HTTPS after all - but I don’t think all of it would be that simple ;)
The work gone into the libraries has not been trivial, so if you do ever find yourself caring, I hope it’ll be a happy surprise!
Is there a specific reason why? The desktop version of Telegram is butter smooth and has the same capabilities as the phone version (I’m pretty sure they’re built from the same source as well).
Security is the biggest reason for me. Every other week, you hear about a fiasco where a desktop client for some communication service had some sort of remote code execution vulnerability. But there can be other reasons as well, like them being sloppy with their .deb packages and messing up with my update manager etc. As a potential user, I see no benefit in installing a desktop client over a web client.
Security is the reason that you can’t easily have a web-based Signal client. Signal is end-to-end encrypted. In a web app, it’s impossible to isolate the keying material from whoever provides the service so it would be trivial for Signal to intercept all of your messages (even if they did the decryption client-side, they could push an update that uploads the plaintext after decryption).
It also makes targeted attacks trivial: with the mobile and desktop apps, it’s possible to publish the hash that you get for the download and compare it against the versions other people run, so that you can see if you’re running a malicious version (I hope a future version of Signal will integrate that and use it to validate updates before it installs them by checking that other users in your network see the same series of updates). With a web app, you have no way of verifying that you’re running the same code that you were one page refresh ago, let alone the same code as someone else.
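The comparison step described above can be sketched in a few lines of Go. This is only an illustration: the file name is hypothetical, and the published digest would come from the project’s site or from other users you compare notes with; SHA-256 is assumed as the digest algorithm.

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"os"
)

// verifyDownload reports whether data hashes (SHA-256) to the hex
// digest published out-of-band.
func verifyDownload(data []byte, published string) bool {
	sum := sha256.Sum256(data)
	return hex.EncodeToString(sum[:]) == published
}

func main() {
	// Hypothetical file name for a downloaded installer.
	data, err := os.ReadFile("signal-desktop.deb")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	sum := sha256.Sum256(data)
	// Print the digest so it can be compared with what others see.
	fmt.Println(hex.EncodeToString(sum[:]))
}
```

The point of the exercise is that the digest is checked against values obtained over a channel the distributor doesn’t control, which is exactly what a page refresh in a web app can’t give you.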
A web based client has no advantages with regards to security. They are discrete topics. As a web developer, I would argue that a web based client has a significantly larger surface area for attacks.
When I say security, I don’t mean the security of my communications over that particular application. That’s important too, but it’s nothing compared to my personal computer getting hacked, which means my entire digital life getting compromised. Now you could say a web site could also hijack my entire computer by exploiting weaknesses in the browser, which is definitely a possibility, but that’s not what we hear every other week. We hear about a stupid Zoom or Slack desktop client containing a critical remote code execution vulnerability that allows a completely unrelated third party complete access to your computer.
I just don’t like opening a new window/application. Almost all of my work is done with one terminal window (in tmux, on workspace 1) and a browser (workspace 2). This works very well for me as I hate dealing with window management. Obviously I do open other applications for specific purposes (GIMP, Geeqie, etc) but I find having an extra window just to chat occasionally is annoying. Much easier to open a tab in my browser, send my message, and close it again.
A fraction of users is moving, the technically literate ones. Everyone else stays where their contacts are, or which is often the case, installs another messenger and then uses n+1.
A fraction of users is moving, the technically literate ones
I don’t think that’s what’s happening now. There have been a lot of mainstream press articles about WhatsApp. The technical users moved to Signal when Facebook bought WhatsApp, I’m now hearing non-technical folks ask what they should migrate to from WhatsApp. For example, one of our administrators recently asked about Signal because some of her family want to move their family chat there from WhatsApp.
Yeah these last two days I have been asked a few times about chat apps. I have also noticed my signal contacts list expand by quite a few contacts, and there are lots of friends/family who I would not have expected to make the switch in there. I asked one family member, a doctor, what brought her in and she said that her group of doctors on whatsapp became concerned after the recent announcements.
I wish I could recommend xmpp/OMEMO, but it’s just not as easy to set up. You can use conversations.im, and it’s a great service, but if you are worried about silos you are back to square one if you use their domain. They make using a custom domain as friction-free as possible but it still involves DNS settings.
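For what it’s worth, the DNS settings in question usually amount to two SRV records (per RFC 6120) that point your own domain at the hosting provider. The target host names below are placeholders; a provider like conversations.im documents the exact values to use:

```
_xmpp-client._tcp.example.com. 3600 IN SRV 0 5 5222 xmpp.provider.example.
_xmpp-server._tcp.example.com. 3600 IN SRV 0 5 5269 xmpp.provider.example.
```

That’s all the friction there is, but it’s still more than most people will ever do for a chat app.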
I feel the same way about matrix etc. Most people won’t run their own instance, so you end up in a silo again.
For the closest thing to whatsapp, I have to recommend Signal. It’s not perfect, but it’s good. I wish you didn’t have to use a phone number…
What happens when someone buys Signal, the US government forces Signal to implement backdoors or Signal runs out of donation money?
Not supporting signal in any way, but how would your preferred solution actually mitigate those risks?
Many different email providers all over the world and multiple clients based on the same standards.
Anyone who has written email software used at scale by the general public can tell you that you will spend a lot of time working around servers and clients which do all sorts of weird things. Sometimes with good reasons, oftentimes with … not so good reasons. This sucks but there’s nothing I can change about that, so I’ll need to deal with it.
Getting something basic working is pretty easy. Getting all emails handled correctly is much harder. Actually displaying all emails well even harder still. There’s tons of edge cases.
The entire system is incredibly messy, and we’re actually a few steps up from 20 years ago when it was even worse.
And we still haven’t solved the damn line wrapping problem 30 years after we identified it…
Email both proves Postel’s law correct and wrong: it’s correct in the sense that it does work, it’s wrong because it takes far more time and effort than it really needs to.
I hear you (spent a few years at an ESP). It’s still better than some siloed walled garden proprietary thing that looks pretty but could disappear for any reason in a moment. The worst of all worlds except all others.
could disappear for any reason in a moment
I’m not so worried about this; all of these services have been around for ages and I’m not seeing them disappear from one day to the next in the foreseeable future. And even if it does happen: okay, just move somewhere else. It’s not even that big of a deal.
Especially with chat services. There’s not that much to lose. Your contacts are almost always backed up elsewhere. I guess people value their chat history more than I do, however.
My vote is for Signal. It has good clients for Android and iOS and it’s secure. It’s also simple enough that non-technical people can use it comfortably.
I’ve recently started using it, and while it’s fine, I’m no fan. As @jlelse said, it is another closed-off platform that you have to use, making me depend on someone else.
They seem to (as of writing) prioritize “security” over “user freedom”, which I don’t agree with. There’s the famous thread, where they reject the notion of distributing Signal over F-Droid (instead having their own special updater, in their Google-less APK). What also annoys me is that their desktop client is based on Electron, which would have been very hard for me to use before upgrading my desktop last year.
My vote is for Signal. It has good clients for Android and iOS and it’s secure. It’s also simple enough that non-technical people can use it comfortably.
What I hate about signal is that it requires a mobile phone and an associated phone number. That makes it essentially useless - I loathe mobile phones - and very suspect to me. Why can’t the desktop client actually work?
I completely agree. At the beginning of 2020 I gave up my smartphone and haven’t looked back. I’ve got a great dumb phone for voice and SMS, and the occasional photo. But now I can’t use Signal as I don’t have a mobile device to sign in to. In a world where Windows, Mac OS, Linux, Android, and iOS all exist as widely used operating systems, Signal is untenable as it only has full-featured clients for two of these operating systems.
Signal isn’t perfect.
This isn’t about being perfect, this is about being accessible to everyone. It doesn’t matter how popular it becomes, I can’t use it.
They’ve been planning on fixing that for a while, I don’t know what the status is. The advantage of using mobile phone numbers is bootstrapping. My address book is already full of phone numbers for my contacts. When I installed Signal, it told me which of them are already using it. When other folks joined, I got a notification. While I agree that it’s not a great long-term strategy, it worked very well for both WhatsApp and Signal to quickly bootstrap a large connected userbase.
In contrast, most folks’ XMPP addresses were not the same as their email addresses and I don’t have a lot of email addresses in my address book anyway because my mail clients are all good at autocompleting them from people who have sent me mail before, so I don’t bother adding them. As a result, my Signal contact list was instantly as big as my Jabber Roster became after about six months of trying to get folks to use Jabber. The only reason Jabber was usable at all for me initially was that it was easy to run an ICQ bridge so I could bring my ICQ contacts across.
Support for using it without a phone number remains a work in progress. The introduction of PINs was a stepping stone towards that.
What I hate about signal is that it requires a mobile phone and an associated phone number.
On the bright side, Signal’s started to use UUIDs as well, so this may change. Some people may think it’s gonna be too late whenever it happens, if it does, but at least the protocols aren’t stagnant!
Theoretically interesting, but I need to know more about GoBlog—what’s the data store? SQLite? Flat files? PG/MySQL? are content pages dynamic or static? apparently it has a CMS, what’s up with that? what’s the story around asset management?—and neither the blog nor the repo list this information anywhere. I don’t want to have to read the code to figure out how all this is supposed to work.
You’re right, I need to write more documentation. But here are my simple answers:
After all it’s just the software behind my blogs / homepage / diary and I want to publish it, so others can use it too. I’ll create more documentation to make that easier.