1. 5

    Reminds me too of how on Macs the hotkey for “close this window” (Cmd-W) is right next to the hotkey for “close this application and lose all tabs/unsaved work/context/everything” (Cmd-Q).

    At least Chrome prompts you to hold the Cmd-Q hotkey down to confirm you really meant to suddenly lose everything.

    1. 1

      Separately, now I’m imagining a CLI help text linter that looks for confusable flags (based on common keyboard layout adjacency, homoglyphs, or common confusables like ‘I’ and ‘l’) and recommends changing them.
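
      Such a linter could be small. A sketch in Go — the homoglyph table and keyboard-adjacency map below are illustrative stubs I made up, not data from any real tool:

      ```go
      package main

      import "fmt"

      // homoglyphs maps characters to common look-alikes (illustrative, not exhaustive).
      var homoglyphs = map[rune]rune{'I': 'l', 'l': 'I', 'O': '0', '0': 'O'}

      // qwertyNeighbors lists physically adjacent keys on a QWERTY layout
      // (only a fragment; a real linter would cover the whole layout).
      var qwertyNeighbors = map[rune]string{
          'q': "wa", 'w': "qes", 'e': "wrd",
      }

      // confusable reports whether two single-letter flags are easily
      // mistaken for each other: homoglyph pairs or adjacent keys.
      func confusable(a, b rune) bool {
          if homoglyphs[a] == b {
              return true
          }
          for _, n := range qwertyNeighbors[a] {
              if n == b {
                  return true
              }
          }
          return false
      }

      func main() {
          flags := []rune{'q', 'w', 'I', 'l'}
          for i, a := range flags {
              for _, b := range flags[i+1:] {
                  if confusable(a, b) {
                      fmt.Printf("warning: flags -%c and -%c are easily confused\n", a, b)
                  }
              }
          }
      }
      ```

      Running it over a flag set like the one above would warn about -q/-w (adjacent keys) and -I/-l (homoglyphs).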

      1. 1

        This made a lot of sense in 1984, when if you hit Cmd-Q by accident, you’d just be prompted to save unsaved changes in the document you were editing, and even if you had enough RAM to hold two or three unmodified documents open at the same time and hit Cmd-Q, it wasn’t a big deal to find and re-open them among the dozen or so documents that fit on your floppy disk.

        This all changed in the mid-to-late 90s with web-browsers, where suddenly web browser applications had “documents” that couldn’t be modified (so the “unsaved changes” dialog was no longer a protection against losing state), and you might have a dozen documents open, chosen from among uncountable billions.

        1. 1

          I don’t know about Chrome, but Safari will reopen all tabs. It’s still annoying because I might be logged out.

        1. 2

          Next up: “Endless try/catch”?

          You really should handle errors in every language. In Go it’s just more explicit and without special constructs. I wish more languages had that, to be honest. It would probably lead to a lot less bad code, because people catch the wrong thing with multiple statements in a try block, or essentially ignore errors because they never think about them.
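
          For contrast with try/catch, this is roughly what Go’s explicit convention looks like — a minimal sketch, with a made-up doStep standing in for any fallible call:

          ```go
          package main

          import (
              "errors"
              "fmt"
          )

          // doStep is a stand-in for any operation that can fail.
          func doStep(n int) error {
              if n == 3 {
                  return errors.New("step 3 failed")
              }
              return nil
          }

          func main() {
              for i := 1; i <= 4; i++ {
                  // The convention: every fallible call returns an error value,
                  // and the caller decides what to do with it at the call site.
                  if err := doStep(i); err != nil {
                      fmt.Println("error:", err)
                      return
                  }
              }
              fmt.Println("all steps ok")
          }
          ```

          The error is just a value; nothing is thrown, and ignoring it is a visible choice in the code rather than an invisible default.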

          But also, if you don’t like it, just use a different language? There are huge numbers of languages doing error handling differently; choosing the rare exception sounds like a really odd thing to do. Out of the thousands of languages there are, what’s the point of choosing the one you disagree with?

          If it’s “a constant struggle” as the article mentions it seems like a strange decision to stick with it.

          1. 2

            If you can handle all errors the same way, try/catch makes that easier:

            try {
                actionA();
                actionB();
                actionC();
            } catch {
                // handle error
            }
            
            1. 5

              Sure, but I would not exactly call splitting it up hard, and in Go, if you really hit that case a lot, you could handle errors just like that. It’s easy to write such a wrapper if it bothers you that much.
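
              A sketch of what such a wrapper could look like — hypothetical, but in the spirit of bufio.Scanner’s sticky-error pattern — so the call site reads almost like the try/catch version:

              ```go
              package main

              import "fmt"

              // Runner remembers the first error from a sequence of steps and
              // skips everything after it (a hypothetical helper, not stdlib).
              type Runner struct{ err error }

              // Do runs fn only if no earlier step has failed.
              func (r *Runner) Do(fn func() error) {
                  if r.err == nil {
                      r.err = fn()
                  }
              }

              // Err reports the first error encountered, if any.
              func (r *Runner) Err() error { return r.err }

              func actionA() error { return nil }
              func actionB() error { return fmt.Errorf("actionB failed") }
              func actionC() error { return nil } // skipped once actionB fails

              func main() {
                  var r Runner
                  r.Do(actionA)
                  r.Do(actionB)
                  r.Do(actionC)
                  if err := r.Err(); err != nil {
                      fmt.Println("handle error:", err) // single handling site, as with catch
                  }
              }
              ```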

              In addition, looking at real projects, I have seen more than once that this pattern was used a lot, and the assumption that you can/want to handle all errors the same way either was wrong or became wrong.

              Depending on the language you might easily catch too much (JavaScript being a great example here), and especially if logging or something similar is involved, you usually end up wanting more context with your error anyway.

              Of course it depends on what exactly you are doing, but at least in my experience, splitting try/catch up is something I do more frequently than combining error handling.

              That’s also a bit of what I mean about choosing another language. But it’s also really about project size. In a bigger project you might add helpers to do whatever makes sense for that project anyway, maybe doing something with the errors/exceptions, so you end up extracting the error value from the catch block and, after that helper function, end up essentially the same as in Go.

              Of course, for tiny, let’s say “scripts”, that might be a lot, but if it’s really a tiny thing, I think in many cases people completely ignore error handling anyway.

              Don’t get me wrong though. Of course there’s a reason try/catch exists, but what gets me is when people choose a language that does a few things differently and then complain about that language having a faulty, quirky design because it is not pretty much exactly like hundreds of other languages. It’s a valid design decision to keep the language simple by treating errors as just another value/type, interacted with via the same means as all the other variables you have.

              If someone programs in Go, or any other language, and is unhappy about it not being Java (or any other language), why not use Java (or any other language)?

              It’s not like you have to use Go just because it’s made by some people currently employed by Google or something.

              And I know there are situations where for one reason or another you have to use a certain language, but honestly that’s just part of the job, and often you can get around it. Just ranting about language specifics and saying “they are doing it wrong”, mostly because they are doing it differently from your language of taste, feels like something going nowhere. It’s also not like nobody ever thought of doing try/catch, and I’m pretty sure that some of the Go core team developers not only know about try/catch but even have the knowledge to implement it if they wanted.

              So what’s the point in being the millionth or so person to say they prefer try/catch over Go’s way?

              There are other languages, some of them sharing other things with Go. LLVM alone spawned huge numbers of languages, on top of the huge number of others. Wouldn’t time be better spent writing whatever you are missing there and being happy and productive, instead of just repeating dislike and calling other designs “quirks”?

              Unless that’s your hobby and what you want to be doing, of course. It just feels a bit redundant on lobste.rs.

              1. 2

                In addition, looking at real projects, I have seen more than once that this pattern was used a lot, and the assumption that you can/want to handle all errors the same way either was wrong or became wrong.

                I completely agree. I am personally just fine with Go’s error handling. I just wanted to point out that the contrived Go example would not translate into “endless try/catch”.

          1. 4

            I feel like what we really need is a distro that is small but still uses glibc instead of musl.

            1. 3

              No, what we need is to end the glibc monoculture and make software less brittle across the board.

              1. 2

                Void is fairly small.

              1. 3

                I agree with the sentiment that you should pin all dependencies.

                But I never had the idea that SemVer would „save“ me - I only ever saw it as a means of communicating expected impact.

                1. 3

                  That’s very correct, but look at the other comments and you’ll see that it isn’t universal consensus and even suggesting it seems rather triggering to some. 🤷‍♂️

                1. 35

                  E-mail has a lot of legacy cruft. Regardless of the technical merits of e-mail or Telegram or Delta Chat, Signal, matrix.org or whatever, what people need to be hearing today is “WhatsApp and Facebook Messenger are unnecessarily invasive. Everyone is moving to X.” If there isn’t a clear message on what X is, then people will just keep on using WhatsApp and Facebook Messenger.

                  It seems clear to me that e-mail is not the frontrunner for X, so by presenting it as a candidate for replacing WhatsApp and Facebook Messenger, I think the author is actually decreasing the likelihood that most people will migrate to a better messaging platform.

                  My vote is for Signal. It has good clients for Android and iOS and it’s secure. It’s also simple enough that non-technical people can use it comfortably.

                  1. 26

                    Signal is a silo and I dislike silos. That’s why I post on my blog instead of Twitter. What happens when someone buys Signal, the US government forces Signal to implement backdoors or Signal runs out of donation money?

                    1. 10

                      Signal isn’t perfect. My point is that Signal is better than WhatsApp and that presenting many alternatives to WhatsApp is harmful to Signal adoption. If Signal can’t reach critical mass like WhatsApp has it will fizzle out and we will be using WhatsApp again.

                      1. 12

                        If Signal can’t reach critical mass like WhatsApp has it will fizzle out

                        Great! We don’t need more silos.

                        and we will be using WhatsApp again.

                        What about XMPP or Matrix? They can (and should!) be improved so that they are viable alternatives.

                        1. 13

                          (Majority of) People don’t care about technology (how), they care about goal (why).

                          They don’t care if it’s Facebook, Whatsapp, Signal, Email, XMPP, they want to communicate.

                          1. 14

                            Yeah, I think the point of the previous poster was that these systems should be improved to a point where they’re just really good alternatives, which includes branding and the like. Element (formerly riot.im) has the right idea on this IMHO, instead of talking about all sorts of tech details and presenting 500 clients like xmpp.org, it just says “here are the features element has, here’s how you can use it”.

                            Of course, die-hard decentralisation advocates don’t like this. But this is pretty much the only way you will get any serious mainstream adoption as far as I can see. Certainly none of the other approaches that have been tried over the last ~15 years worked.

                            1. 7

                              …instead of talking about all sorts of tech details and presenting 500 clients like xmpp.org, it just says “here are the features element has, here’s how you can use it”.

                              Same problem with all the decentralized social networks and microblogging services. I was on Mastodon for a bit. I didn’t log in very often because I only followed a handful of privacy advocate types since none of my friends or other random people I followed on Twitter were on it. It was fine, though. But then they shut down the server I was on and apparently I missed whatever notification was sent out.

                              People always say crap like “What will you do if Twitter shuts down?”. Well, so far 100% of the federated / distributed social networks I’ve tried (I also tried that Facebook clone from way back when and then Identi.ca at some point) have shut down in one way or another and none of the conventional ones I’ve used have done so. I realize it’s a potential problem, but in my experience it just doesn’t matter.

                              1. 4

                                The main feature that cannot be listed in good faith, and which is the one that everybody cares about, is: “It has all my friends and family on it”.

                                I know it’s just a matter of critical mass and if nobody switches this will never happen.

                              2. 1

                                Sure, but we’re not the majority of people, and we shouldn’t be choosing yet another silo to promote.

                              3. 5

                                XMPP and (to a lesser extent) Matrix do need to be improved before they are viable alternatives, though. Signal is already there. You may feel that ideological advantages make up for the UI shortcomings, but very few nontechnical users feel the same way.

                                1. 1

                                  Have you tried joining a busy Matrix channel from a federated homeserver? It can take an hour. I think it needs some improvement too.

                                  1. 2

                                    Oh, definitely. At least in the case of Matrix it’s clear that (1) the developers regard usability as an actual goal, (2) they know their usability could be improved, and (3) they’re working on improving it. I admit I don’t follow the XMPP ecosystem as closely, so the same could be true there, but… XMPP has been around for 20 years, so what’s going to change now to make it more approachable?

                                2. 4

                                  […] it will fizzle out

                                  Great! We don’t need more silos.

                                  Do you realize you’re cheering for keeping the WhatsApp silo?

                                  Chat platforms have a strong network effect. We’re going to be stuck with Facebook’s network for as long as other networks are fragmented due to people disagreeing which one is the perfect one to end all other ones, and keep waiting for a pie in the sky, while all of them keep failing to reach the critical mass.

                                  1. 1

                                    Do you realize you’re cheering for keeping the WhatsApp silo?

                                    Uh, not sure how you pulled that out of what I said, but I’m actually cheering for the downfall of all silos.

                                    1. 2

                                      I mean that by opposing the shift to the less-bad silo you’re not actually advancing the no-silo case, but keeping the status quo of the worst-silo.

                                      There is currently no decentralized option that is secure, practical, and popular enough to be adopted by mainstream consumers in numbers that could beat WhatsApp.

                                      If the choice is between WhatsApp and “just wait until we make one that is”, it means keeping WhatsApp.

                                  2. 3

                                    They can be improved so that they are viable alternatives.

                                    Debatable.

                                    Great! We don’t need more silos.

                                    Domain-name federation is a half-assed solution to data portability. Domain names basically need to be backed by always-on servers, not everybody can have one, and not everybody should. Either make it really P2P (Scuttlebutt?) or don’t bother.

                                    1. 2

                                      I sadly agree, which is why I always logically end up recommending Signal as ‘the best of a bad bunch’.

                                      I like XMPP, but for true silo-avoidance you need to run your own server (or at least have someone run it under your domain, so you can move away). This sucks. It’s sort of the same with Matrix.

                                      The only way around this is real P2P, as you say. So far I haven’t seen anything on this front that I could recommend to former WhatsApp users, however. I love Scuttlebutt, but I can’t see it as a good mobile solution.

                                  3. 8

                                    Signal really needs a “web.signal.com”; typing on phones sucks, and the desktop app is ugh. I can’t write my own app either, so I’m stuck with two bad options.

                                    This is actually a big reason I like Telegram: the web client is pretty good.

                                    1. 3

                                      I can’t write my own app either so I’m stuck with two bad options.

                                      FWIW I’m involved with Whisperfish, the Signal client for Sailfish OS. There has been a constant worry about 3rd party clients, but it does seem like OWS has loosened its policy.

                                      The current Whisperfish is written in Rust, with separate libraries for the protocol and service. OWS is also putting work into their own Rust library, which we may switch to.

                                      Technically you can, and the risk should be quite minimal. At the end of the day, even though OWS doesn’t support these efforts, as long as you don’t make a fool of them, availability and use increase their brand value.

                                      Don’t want to know what happens if someone writes a horrible client and steps on their brand, so let’s be careful out there.

                                      1. 2

                                        Oh right; that’s good to know. I just searched for “Signal API” a while ago and nothing really obvious turned up, so I assumed it’s either impossible or hard/hackish. To be honest I didn’t look very deeply at it, since I don’t really care all that much about Signal 😅 It’s just a single not-very-active chat group.

                                        1. 1

                                          Fair enough, sure. An API might sound too much like some raw web thing - it is based on HTTPS after all - but I don’t think all of it would be that simple ;)

                                          The work gone into the libraries has not been trivial, so if you do ever find yourself caring, I hope it’ll be a happy surprise!

                                      2. 2

                                        The Telegram desktop client is even better than the web client.

                                        1. 3

                                          I don’t like desktop clients.

                                          1. 4

                                            Is there a specific reason why? The desktop version of Telegram is butter smooth and has the same capabilities as the phone version (I’m pretty sure they’re built from the same source as well).

                                            1. 3

                                              Security is the biggest reason for me. Every other week, you hear about a fiasco where a desktop client for some communication service had some sort of remote code execution vulnerability. But there can be other reasons as well, like them being sloppy with their .deb packages and messing up my update manager, etc. As a potential user, I see no benefit in installing a desktop client over a web client.

                                              1. 4

                                                Security is the reason that you can’t easily have a web-based Signal client. Signal is end-to-end encrypted. In a web app, it’s impossible to isolate the keying material from whoever provides the service so it would be trivial for Signal to intercept all of your messages (even if they did the decryption client-side, they could push an update that uploads the plaintext after decryption).

                                                It also makes targeted attacks trivial: with the mobile and desktop apps, it’s possible to publish the hash that you get for the download and compare it against the versions other people run, so that you can see if you’re running a malicious version (I hope a future version of Signal will integrate that and use it to validate updates before it installs them by checking that other users in your network see the same series of updates). With a web app, you have no way of verifying that you’re running the same code that you were one page refresh ago, let alone the same code as someone else.
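
                                                The comparison step described here amounts to hashing the downloaded artifact and checking the digest out-of-band; a minimal sketch in Go (the filename is hypothetical):

                                                ```go
                                                package main

                                                import (
                                                    "crypto/sha256"
                                                    "fmt"
                                                    "os"
                                                )

                                                // fileDigest returns the hex-encoded SHA-256 of a file — the kind of
                                                // value users could publish and compare with each other out-of-band.
                                                func fileDigest(path string) (string, error) {
                                                    data, err := os.ReadFile(path)
                                                    if err != nil {
                                                        return "", err
                                                    }
                                                    return fmt.Sprintf("%x", sha256.Sum256(data)), nil
                                                }

                                                func main() {
                                                    digest, err := fileDigest("signal-desktop.deb") // hypothetical download
                                                    if err != nil {
                                                        fmt.Fprintln(os.Stderr, err)
                                                        os.Exit(1)
                                                    }
                                                    fmt.Println(digest)
                                                }
                                                ```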

                                                1. 1

                                                  A web based client has no advantages with regards to security. They are discrete topics. As a web developer, I would argue that a web based client has a significantly larger surface area for attacks.

                                                  1. 1

                                                    When I say security, I don’t mean the security of my communications over that particular application. That’s important too, but it’s nothing compared to my personal computer getting hacked, which means my entire digital life getting compromised. Now you could say a web site could also hijack my entire computer by exploiting weaknesses in the browser, which is definitely a possibility, but that’s not what we hear every other week. We hear about a stupid Zoom or Slack desktop client containing a critical remote code execution vulnerability that allows a completely unrelated third party complete access to your computer.

                                                2. 1

                                                  I just don’t like opening a new window/application. Almost all of my work is done with one terminal window (in tmux, on workspace 1) and a browser (workspace 2). This works very well for me as I hate dealing with window management. Obviously I do open other applications for specific purposes (GIMP, Geeqie, etc) but I find having an extra window just to chat occasionally is annoying. Much easier to open a tab in my browser, send my message, and close it again.

                                        2. 3

                                          The same thing that’s happening now with whatsapp - users move.

                                          1. 2

                                            A fraction of users is moving, the technically literate ones. Everyone else stays where their contacts are, or, as is often the case, installs another messenger and then uses n+1.

                                            1. 2

                                              A fraction of users is moving, the technically literate ones

                                              I don’t think that’s what’s happening now. There have been a lot of mainstream press articles about WhatsApp. The technical users moved to Signal when Facebook bought WhatsApp, I’m now hearing non-technical folks ask what they should migrate to from WhatsApp. For example, one of our administrators recently asked about Signal because some of her family want to move their family chat there from WhatsApp.

                                              1. 1

                                                Yeah, these last two days I have been asked a few times about chat apps. I have also noticed my Signal contacts list expand by quite a few contacts, and there are lots of friends/family in there who I would not have expected to make the switch. I asked one family member, a doctor, what brought her in, and she said that her group of doctors on WhatsApp became concerned after the recent announcements.

                                                I wish I could recommend xmpp/OMEMO, but it’s just not as easy to set up. You can use conversations.im, and it’s a great service, but if you are worried about silos you are back to square one if you use their domain. They make using a custom domain as friction-free as possible but it still involves DNS settings.

                                                I feel the same way about matrix etc. Most people won’t run their own instance, so you end up in a silo again.

                                                For the closest thing to whatsapp, I have to recommend Signal. It’s not perfect, but it’s good. I wish you didn’t have to use a phone number…

                                          2. 2

                                            What happens when someone buys Signal, the US government forces Signal to implement backdoors or Signal runs out of donation money?

                                            Not supporting signal in any way, but how would your preferred solution actually mitigate those risks?

                                            1. 1

                                              Many different email providers all over the world and multiple clients based on the same standards.

                                              1. 6

                                                Anyone who has written email software used at scale by the general public can tell you that you will spend a lot of time working around servers and clients which do all sorts of weird things. Sometimes with good reasons, often times with … not so good reasons. This sucks but there’s nothing I can change about that, so I’ll need to deal with it.

                                                Getting something basic working is pretty easy. Getting all emails handled correctly is much harder. Actually displaying all emails well even harder still. There’s tons of edge cases.

                                                The entire system is incredibly messy, and we’re actually a few steps up from 20 years ago when it was even worse.

                                                And we still haven’t solved the damn line wrapping problem 30 years after we identified it…

                                                Email both proves Postel’s law correct and wrong: it’s correct in the sense that it does work, it’s wrong because it takes far more time and effort than it really needs to.

                                                1. 2

                                                  I hear you (I spent a few years at an ESP). It’s still better than some siloed, walled-garden, proprietary thing that looks pretty but could disappear for any reason in a moment. The worst of all worlds except all the others.

                                                  1. 2

                                                    could disappear for any reason in a moment

                                                    I’m not so worried about this; all of these services have been around for ages and I’m not seeing them disappear from one day to the next in the foreseeable future. And even if it does happen: okay, just move somewhere else. It’s not even that big of a deal.

                                                    1. 1

                                                      Especially with chat services. There’s not that much to lose. Your contacts are almost always backed up elsewhere. I guess people value their chat history more than I do, however.

                                          3. 11

                                            My vote is for Signal. It has good clients for Android and iOS and it’s secure. It’s also simple enough that non-technical people can use it comfortably.

                                            I’ve recently started using it, and while it’s fine, I’m no fan. As @jlelse said, it is another closed-off platform that you have to use, making me depend on someone else.

                                            They seem to (as of writing) prioritize “security” over “user freedom”, which I don’t agree with. There’s the famous thread where they reject the notion of distributing Signal over F-Droid (instead having their own special updater in their Google-less APK). What also annoys me is that their desktop client is based on Electron, which would have been very hard for me to use before I upgraded my desktop last year.

                                            1. 6

                                              My vote is for Signal. It has good clients for Android and iOS and it’s secure. It’s also simple enough that non-technical people can use it comfortably.

                                              What I hate about signal is that it requires a mobile phone and an associated phone number. That makes it essentially useless - I loathe mobile phones - and very suspect to me. Why can’t the desktop client actually work?

                                              1. 2

                                                I completely agree. At the beginning of 2020 I gave up my smartphone and haven’t looked back. I’ve got a great dumb phone for voice and SMS, and the occasional photo. But now I can’t use Signal, as I don’t have a mobile device to sign in with. In a world where Windows, macOS, Linux, Android, and iOS all exist as widely used operating systems, Signal is untenable, as it only has full-featured clients for two of them.

                                                Signal isn’t perfect.

                                                This isn’t about being perfect, this is about being accessible to everyone. It doesn’t matter how popular it becomes, I can’t use it.

                                                1. 1

                                                  They’ve been planning on fixing that for a while, I don’t know what the status is. The advantage of using mobile phone numbers is bootstrapping. My address book is already full of phone numbers for my contacts. When I installed Signal, it told me which of them are already using it. When other folks joined, I got a notification. While I agree that it’s not a great long-term strategy, it worked very well for both WhatsApp and Signal to quickly bootstrap a large connected userbase.

                                                  In contrast, most folks’ XMPP addresses were not the same as their email addresses, and I don’t have a lot of email addresses in my address book anyway because my mail clients are all good at autocompleting them from people who have sent me mail before, so I don’t bother adding them. As a result, my Signal contact list was instantly as big as my Jabber roster became after about six months of trying to get folks to use Jabber. The only reason Jabber was usable at all for me initially was that it was easy to run an ICQ bridge so I could bring my ICQ contacts across.

                                                  1. 1

                                                    Support for using it without a phone number remains a work in progress. The introduction of PINs was a stepping stone towards that.

                                                  2. 1

                                                    What I hate about signal is that it requires a mobile phone and an associated phone number.

                                                    On the bright side, Signal’s started to use UUIDs as well, so this may change. Some people may think it’s gonna be too late whenever it happens, if it does, but at least the protocols aren’t stagnant!

                                                1. 5

                                                  I see all of these articles about what messenger app to use. None of those address the real problem: they don’t work for the people I talk to.

                                                  A lot of my friends are furries, and we use Telegram exclusively because it supports stickers. It’s impossible to overstate how important stickers are for us.

                                                  The Furry Writers’ Guild moved our Slack server to Discord. It’s been hugely popular. We couldn’t get anyone to use Slack.

                                                  My church uses Facebook. I’m not on it and I’ve effectively not been a member since March as a result.

                                                  A few of my friends use Twitter direct messages.

                                                  My family uses SMS. (And my uncle keeps trying to push everyone to LinkedIn.)

                                                  I know exactly one person on Earth who uses Signal. He’s on Telegram because he can’t get his friends to use Signal.

                                                  1. 5

                                                    I remember a time when it was MSN Messenger, YIM, AIM, ICQ or IRC and you could have one open source client that was able to provide a single interface for the lot.

                                                    1. 1

                                                      You can do the same thing with Matrix, but it’s nowhere near as simple or accessible as Pidgin/Trillian/Adium/etc. Signal and Telegram have APIs, but Facebook and Google make it incredibly difficult to interface with their messengers.

                                                    2. 2

                                                      The Signal folks, at least, seem to understand this. It supports sending emoji, has sticker packs, and makes it very easy to share photos (as in, the mobile app has UI for taking a photo and sending it, just like WhatsApp). The big missing feature is the ability to have video / audio calls with more than two people. That said, the Signal desktop clients (unlike WhatsApp) do support video calling. This is the killer feature for some of the non-technical folks I talk to. Everyone has had to have video-conferencing hardware on a laptop to join work calls while working from home during the pandemic, and going from a Teams or Zoom call on a laptop (with or without an external monitor) to a tiny phone screen for WhatsApp calls is frustrating. With Signal, you can call directly from the desktop app and answer calls in it. We have used that feature with my mother quite a lot over the last year.

                                                      1. 1

                                                        Signal supports video/audio calls with up to 8 people. The most recent release raised it from 5 to 8.

                                                        1. 1

                                                          Huh, looks like it was added in the middle of December. I’m a month out of date. Here’s the announcement.

                                                      2. 2

                                                        Signal has stickers. Though I guess that’s not super relevant to this discussion, since Delta Chat apparently doesn’t.

                                                      1. 12

                                                        I am slightly less suspicious of it since I saw that Loren Brichter is the author of that page. He’s a former Apple engineer who also made the Tweetie Twitter app, so not a nobody. I am still very skeptical.

                                                        1. 7

                                                          And in Tweetie, he invented the Pull To Refresh mechanic we now use a hundred times a day. He was a role model for all of us working on phone apps in the early days of the App Store.

                                                        1. 6

                                                          I think the nature of Stream is acceptable. Since it’s a midway point between RHEL minor releases, general quality should be fine. My issue is with the reduced lifetime. Stream is only maintained for the duration of RHEL Full Support: 5 years instead of 10. 5 years is the same lifetime as Debian or Ubuntu, but those have mature support for in-place upgrades.

                                                          1. 1

                                                            those have mature support for in-place upgrades.

                                                            Will CentOS Stream not? (Genuine question; I think I might just be misunderstanding you.)

                                                            1. 2

                                                              I was under the impression that RHEL still had no official in-place upgrades. That’s not true anymore.

                                                          1. 1

                                                            Instead of CentOS being based on RHEL, it will be the other way around? RHEL 9, for example, could be based on CentOS Stream 2021-12. So then you’d freeze there and only take the updates and patches RHEL takes for maximal compatibility? Not trying to defend IBM/Red Hat here, just trying to understand how to get back to the current state of things after the switchover, if possible.

                                                            1. 2

                                                              Stream is the branch from which the RHEL minor releases are cut, not major releases. Stream will be moving from 8.x to 8.x+1 and so on.

                                                              Fedora ELN is a testbed for the next major release of RHEL.

                                                            1. 1

                                                              An interesting read, but at some points I asked myself whether I and the author live in different worlds:

                                                              • AFAICT, he suggests putting all pods of a cluster together in a single broadcast domain and using IPv6 autodiscovery. (big broadcast domains are not a good idea)
                                                              • There actually are some very simple options for Kubernetes networking that function like this
                                                              • I think the Kubernetes networking model of having a network namespace with an IP for a pod is conceptually quite simple, certainly easier than running in the host network namespace and needing each service to bind to ports passed in as environment variables or whatever. The alternative suggestion of using Docker-style port forwarding from the host network namespace I find even more confusing.

                                                              I do want to point out that I’m a happy user of the author’s MetalLB in our Kubernetes clusters though.

                                                              1. 16

                                                                Figuring out what you can leave out of an explanation without lying (well, misstating) is…hard. You can avoid listing every exception to something but acknowledge they exist with “normally”, “tends to”, “can be”, etc. You can say things happen without explaining every how or why (“GC finds memory no longer in use” is true, just incomplete). You can give an example rather than the general principle. You can always say less than you’re thinking, and if you’re still pointing the reader in the right direction they might fill it in.

                                                                And you can always take shortcuts at first, then go back and add reasons, exceptions, or whatever.

                                                                Out of programming stuff I’ve read, The Rust Programming Language is on its own level: it does not assume an experienced programmer and it’s trying to get you started with traits, ownership, etc. in relatively little space. Tons of examples, generally with the goal described above the code, a piece-by-piece explanation of its semantics below, sometimes with incorrect code you might write on the way there (often to show why we want the new feature in this section). It’s worth looking at just for how it’s written.

                                                                In their own totally different way, Go’s spec, package docs, etc. are also alright at saying only what they need to. They’re reference material not intros and assume more programming vocab etc. than TRPL, but compared to a lot of docs I’ve read (or, uh, sometimes written) they aren’t too verbose and the amount of cross-reference chasing you have to do to understand stuff is tolerable.

                                                                1. 3

                                                                  I find it fascinating that the Go spec can actually work as a reference. Usually specs are only useful to implementors.

                                                                  1. 1

                                                                    I’ve heard some folks say it isn’t quite as detailed as a spec really meant to allow multiple independent implementations tends to be. Probably even more true of things like the memory model. There were/are some outside projects compiling Go (gccgo, llgo, tinygo); I wonder if the spec was mostly sufficient for the authors, or if they often had to look up what the Go project’s compiler does or puzzle out how code in the wild was depending on unspecified behavior.

                                                                1. 7

                                                                  I wish more people knew about JSON5 — it’s super convenient for when you have to hand-write JSON, because it lets you omit quotes around object keys, use single quotes, add trailing commas, and add comments. I use it all the time when writing C++ unit tests for APIs that have to consume or produce JSON.
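                                                                  For anyone who hasn’t used it, a small illustrative JSON5 document (the keys here are made up) shows all of those relaxations at once:

```json5
{
  // comments are allowed
  unquotedKey: 'single-quoted strings work',
  trailing: [1, 2, 3,],  // trailing commas are fine
}
```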

                                                                  Of all the binary alternatives I lean toward Flatbuffers. The zero-copy parsing is a huge performance win, and the closely-related Flexbuffers let you store JSON data the same way.

                                                                  1. 5

                                                                    I’ve recently come across another JSON-with-benefits format, Rome JSON, but it doesn’t seem to be formally specified yet.

                                                                    1. 2

                                                                      Another convenient JSON variant is the superset defined by CUE

                                                                      • C-style comments,
                                                                      • quotes may be omitted from field names without special characters,
                                                                      • commas at the end of fields are optional,
                                                                      • comma after last element in list is allowed,
                                                                      • outer curly braces are optional.
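                                                                      A made-up snippet showing what those CUE relaxations look like in practice:

```cue
// C-style comment; the outer curly braces are omitted
name: "example"
ports: [8080, 8443,]  // trailing comma after the last element is allowed
```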
                                                                      1. 1

                                                                        JSON5 seems kind of dead — several of the repos on its GitHub are archived?

                                                                        1. 1

                                                                          I implemented my own JSON5-to-JSON translator in C++. It was simpler than writing a full parser and safer than hacking on the existing JSON parser I use. It adds overhead to parsing, but I don’t use JSON5 in any performance-critical context so I don’t mind.

                                                                      1. 4

                                                                        For the non-Pythonistas here you might be interested in looking at WSGI and ASGI, two protocols similar in spirit to FastCGI, but with a tighter coupling to the host language. I find it interesting that WSGI managed to keep up and stay relevant, paving the road for ASGI, which supports WebSocket as well as HTTP/2.

                                                                        1. 8

                                                                          WSGI and ASGI aren’t alternatives to FastCGI.

                                                                          • WSGI is a Python protocol, i.e. an “API” that gives you a Python dictionary representing the request. You write the response back to a Python file-like object in the dictionary.
                                                                          • CGI and FastCGI are Unix protocols.
                                                                            • CGI starts a process with a given env, and you write the response to stdout. You can write a CGI script in any language (Perl was once the favored language). You can use WSGI or not. Perl now has something analogous called PSGI I think.
                                                                            • FastCGI uses a persistent process that sends the env dictionary over a socket (in some weird binary format).
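                                                                          To make the WSGI side concrete, here’s a minimal sketch (the names are my own, not from any framework): the app receives an environ dict and a start_response callable, and that’s essentially the whole “API”. The little driver below stands in for what a real WSGI server (or a FastCGI-to-WSGI bridge like flup) would do.

```python
import io

# A minimal WSGI application: environ plays the role of the CGI
# environment, and start_response replaces writing headers to stdout.
def app(environ, start_response):
    body = ("Hello from " + environ.get("PATH_INFO", "/")).encode("utf-8")
    start_response("200 OK", [("Content-Type", "text/plain"),
                              ("Content-Length", str(len(body)))])
    return [body]

# Simulate what a WSGI server does: build environ, call the app,
# and collect the status plus the response chunks.
def call_app(app, path):
    environ = {"REQUEST_METHOD": "GET", "PATH_INFO": path,
               "wsgi.input": io.BytesIO(b"")}
    captured = {}
    def start_response(status, headers):
        captured["status"] = status
        captured["headers"] = headers
    chunks = app(environ, start_response)
    return captured["status"], b"".join(chunks)

status, body = call_app(app, "/demo")
print(status, body)  # prints: 200 OK b'Hello from /demo'
```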

                                                                          The way I use FastCGI is to write a WSGI app (which can be done using any Python framework; I use my own framework).

                                                                          And then I use the “flup” wrapper to create a FastCGI binary.

                                                                          The project is kind of “hidden” now, but it works: https://pypi.org/project/flup/ and https://pypi.org/project/flup-py3/

                                                                          https://www.saddi.com/software/flup/

                                                                          https://www.geoffreybrown.com/blog/python-flup-and-fastcgi/

                                                                          1. 3

                                                                            Does anyone here have experience with PHP deployment? I’m curious if FastCGI (FPM) is the preferred “gateway solution” for PHP? vs. mod_php which is a shared library dynamically linked with Apache.

                                                                            Some of these links seem to suggest that this is true? You get better performance with FastCGI? That is a little surprising.

                                                                            Either way it seems like FastCGI is relatively popular with PHP, but sorta unknown in other languages? I never heard of anyone running Django or Rails with FastCGI? I think those frameworks are designed to run their own servers, and don’t play well with FastCGI, even if they can technically make a WSGI app in Django’s case.

                                                                            https://serverfault.com/questions/645755/differences-and-dis-advanages-between-fast-cgi-cgi-mod-php-suphp-php-fpm

                                                                            https://blog.layershift.com/which-php-mode-apache-vs-cgi-vs-fastcgi/

                                                                            https://stackoverflow.com/questions/3953793/mod-php-vs-cgi-vs-fast-cgi

                                                                            1. 6

                                                                              Yes! I’m using that already for many years on CentOS/Fedora. See https://developers.redhat.com/blog/2017/10/25/php-configuration-tips/ for more information from Red Hat.

                                                                              I also wrote blog posts for CentOS and Debian 10 on how I use php-fpm in production.

                                                                              1. 1

                                                                                Cool.. Is it correct to say that PHP-FPM is a C program that embeds the PHP interpreter and makes .php scripts into FastCGI apps? I’m just curious how it works.

                                                                                I think Python never developed an analogous thing, which is a shame because then there would be more shared Python hosts like there are shared PHP hosts. The closest thing is “flup”, which is not well documented (or maintained, at least at some points)

                                                                              2. 6

                                                                                mod_php still has some usage, and is still maintained, but IMO yes PHP-FPM (essentially a long lived process manager for PHP) accessed via FastCGI from a regular http server (normally apache or nginx, recently HAProxy also added support for fastcgi) is the “best” solution for now.

                                                                                mod_php will probably have a slight latency benefit, but it means Apache will use more memory and is limited to the pre-fork worker, plus you lose a lot of flexibility (e.g. going the fastcgi/fpm route you can have multiple versions of PHP installed side by side, you can have multiple completely different FPM instances, etc).

                                                                                1. 1

                                                                                  I don’t know if this is fixed, but mod_php used to run the PHP scripts in the same process as the web server. In a shared hosting environment, this meant that any file readable by one user was readable by scripts run by the others (for example, if you put your database password in your PHP file, someone else could write a PHP file that would read that file and show it to the user, then compromise your database). It also meant that a vulnerability in the PHP interpreter could be exploited by one user to completely take control of the web server. The big advantage of FastCGI for multi-tenant systems was the ability to run a copy of the PHP interpreter for each user, as that user.

                                                                                  1. 1

                                                                                    I don’t think “fixed” is the right term there, but regardless that is the inherent nature of mod_php, yes.

                                                                                    There was (/is, via a fork) a variant called mod_suphp that uses a setuid helper, so the process runs as the owner of the php file it’s executing.

                                                                                  2. 1

                                                                                    Cool thanks… I asked the same question in this sibling.

                                                                                    https://lobste.rs/s/xl63ah/fastcgi_forgotten_treasure#c_6u4wq3

                                                                                    Basically I want to make an “Oil-FPM” :) I think I can do that with

                                                                                    https://kristaps.bsd.lv/kcgi/

                                                                                    that wraps the Oil interpreter? And I probably need some more process management too?

                                                                                    There is no Python-FPM as far as I know, and that is a shame.

                                                                                    I want to preserve the deployment model of PHP – rsync a bunch of .php files. Likewise you should be able to rsync a bunch of Oil files and make a simple and fast script :)

                                                                                    Similar to what I have here if it was dynamic rather than static: http://travis-ci.oilshell.org/jobs/ That could easily be written in Oil.


                                                                                    Found the source. Woah is it true this hasn’t had a release since 2009 ???

                                                                                    https://launchpad.net/php-fpm

                                                                                    https://code.launchpad.net/php-fpm

                                                                                    https://github.com/dreamcat4/php-fpm

                                                                                    Or maybe it’s built into PHP now?

                                                                                    Ah yes looks like it is in there as of 2011, interesting: https://www.php.net/archive/2011.php#id2011-11-29-1

                                                                                    But the old source is useful. It’s about 8K lines of C and handles processes and signals! Doesn’t look too bad. If anyone wants to help integrate it into Oil let me know :)

                                                                                  3. 2

                                                                                    First of all, I’ve been out of the loop for a few years, but from ~2010-2017 Apache was falling out of favor anyway, so mod_php was out of the question if you used nginx or lighttpd. I think 2.4 brought some renewed interest in Apache, but I have no facts to back that up.

                                                                                    1. 1

                                                                                      Right, that makes sense. I think Nginx seems to encourage their own uwsgi and it doesn’t have FastCGI support?

                                                                                      The downside is that I’ve never seen a shared host that lets you “drop in the uwsgi file” like you just “drop in a .php file” or in Python’s case “drop in WSGI app wrapped by flup” ?

                                                                                      Basically Nginx doesn’t seem to support shared hosting as well as Apache? I’d be interested to hear otherwise. Dreamhost still uses Apache and the setup is pretty nice.


                                                                                      EDIT: Someone e-mailed me to clarify that uwsgi is a program that supports the FastCGI protocol in addition to the uwsgi protocol :)

                                                                                      1. 1

                                                                                        No idea, I haven’t used a shared host in many years.

                                                                                        But most of the web servers would indeed support an arbitrary FastCGI interface, and if you’re allowed to run a binary you could have everything behind that webserver. It’s just that I’ve never seen non-dynamic languages do that; Rust and Go mostly offer a webserver of their own and you just reverse-proxy through.

                                                                                    2. 2

                                                                                      From what I’ve seen mod_php is not preferred anymore. Apache is a pretty amazing swiss army knife, but it kind of has to do too much in one binary. The trend has been to use a pretty thin L7 proxy like nginx and/or haproxy to route to services.

                                                                                      I don’t know why PHP itself uses fastcgi rather than a native http implementation. Maybe it’s faster to parse? Maybe there’s better side-channels for things like remote IP when proxying?

                                                                                      A side note: I think apache was unfairly maligned and a victim of bad defaults. IIRC debian shipped it in multi-process mode with 10 workers, but apache has pluggable mpms (multi-processing modules) so you can configure it to be epoll/thread based like nginx and be a decent file server or proxy. Unfortunately not all modules are compatible with every mpm.

                                                                                      1. 3

                                                                                        The main difference between a FastCGI backend and an HTTP backend to me is sort of accidental – in the FastCGI world, the process is known to be ephemeral like CGI, but the server keeps it alive between requests as an optimization.

                                                                                        If you lose your state, well no big deal – it was supposed to be like a CGI script.

                                                                                        But that is not true of all HTTP servers.

                                                                                        I think this matters in practice: on Dreamhost I get new FastCGI processes started every minute or 10 minutes. That is not customary for HTTP servers! (They also start 2 at a time.)

                                                                                        Also I think FastCGI processes are safely killed with signals.


                                                                                        So it is true that FastCGI has a weird and somewhat unnecessary binary protocol. But it also includes the “process” part, which is useful.

                                                                                        1. 2

                                                                                          Debian shipped Apache with Apache’s default, which is the worker MPM for 2.2 and the event MPM for 2.4.

                                                                                          But mod_php switched you to the prefork MPM because some PHP extensions are not threadsafe.

                                                                                        2. 1

                                                                                          I never heard of anyone running Django or Rails with FastCGI?

                                                                                          Way back in the earliest days, Django’s first packaged release (0.90) shipped handlers for running under mod_python, or as a WSGI application under any WSGI-compatible server, but recommended mod_python. Django 0.95 added a document explaining how to run Django behind FastCGI, and a helper module using flup as the FastCGI-to-WSGI bridge.

                                                                                          The mod_python handler was removed after Django 1.4 (so 1.5 was the first version without it). The flup/FastCGI support was removed after Django 1.8. Since then, Django has only supported running as a WSGI application.

                                                                                          I can’t speak to anyone else, but I for one have run Django in production under each of those options: mod_python, FastCGI, and pure WSGI.

                                                                                        3. 1

                                                                                          I guess the similarity is that WSGI passes data in a similar way: the dict acts as the store for environment variables and the stdin/stdout are passed as explicit values. The takeaway for me is that a simple Unix-style design can survive many years of battle testing and stay relevant!

                                                                                        4. 3

                                                                                          For context and accuracy, WSGI was not a new idea, nor the only ubiquitous one in its problem space. In the WSGI proposal, Guido van Rossum clearly states that the idea was to build something modeled after Java’s Servlet API. Java was the most used programming language at the time. The servlet API is still widely used.

                                                                                          1. 2

                                                                                            The uWSGI documentation also has support for similar integrations with other languages (perl, ruby): https://uwsgi-docs.readthedocs.io/en/latest/PSGIquickstart.html

                                                                                          1. 7

                                                                                            TBH this whole article rests on the assumption that units have to be extremely small to count as unit tests. That will obviously lead to brittle and tautological tests, so don’t do that.

                                                                                            1. 4

                                                                                              There is no formal definition of what a unit is or how small it should be, but it’s mostly accepted that it corresponds to an individual function of a module (or method of an object).

                                                                                              Testing individual methods makes little sense to me. Usually there is some protocol for using multiple methods of a class. I would not test the File class methods open, read, write, close individually for example.

                                                                                              1. 2

                                                                                                I would not test the File class methods open, read, write, close individually for example.

                                                                                                To clarify, do you mean you wouldn’t test those methods in code that uses a file class library or that you wouldn’t test each one in the library itself that defines the file class?

                                                                                                I wouldn’t (directly) test, say, Python’s pathlib library in calling code, but I would definitely test those methods individually if I were writing pathlib. (I should add that I’m an amateur, so if you have reasons not to test individually, I’d love to hear them. My question is not rhetorical.)

                                                                                                1. 2

                                                                                                  It isn’t even possible to test close without doing open, so you must do a combined test.

                                                                                                  Methods often come with expectations in what order you use them. You can only read/write/close after open. A getter returns whatever a previous setter got.
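                                                                                                  As a sketch of what such a combined test looks like in practice (using Python’s built-in open rather than any specific File class), the point is to exercise the whole open/write/close/open/read/close sequence in order, since the individual methods only make sense as part of that protocol:

```python
import os
import tempfile

def test_file_lifecycle():
    # Create a temp file path to work with, then close the raw descriptor.
    fd, path = tempfile.mkstemp()
    os.close(fd)
    try:
        f = open(path, "w")   # open must come first
        f.write("hello")      # write is only valid while open
        f.close()             # close flushes and ends the write session

        f = open(path, "r")
        content = f.read()    # read sees what the earlier write stored
        f.close()
        return content
    finally:
        os.remove(path)

assert test_file_lifecycle() == "hello"
```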

                                                                                                  1. 1

                                                                                                    Thanks, that makes sense. I didn’t focus enough on individually.

                                                                                            1. 0

                                                                                              Go is garbage collected, rust is not. That means rust is faster than go, right? No! Not always.

                                                                                              Why does it mean that, generally?

                                                                                              1. 2

                                                                                                The assumption is that if you have a garbage collector stopping the world a lot, your application is going to run slower. In small benchmarks like this (although this one was broken), it impacts the perceived performance quite a bit. In practice, while it’s debated, languages with GCs do achieve high performance and can achieve similar speeds to manually managed languages. The JVM, Julia, & Go are all great examples of this.

                                                                                                1. 0

                                                                                                  Yes, GC has costs, but so does reference counting. Making such a blanket statement is just misinformation.

                                                                                                  1. 4

                                                                                                    My (non-expert) understanding is that “GC is slow for managing heap memory” is a misconception, and that in fact the opposite is true – for allocation-heavy workloads, modern GC schemes are faster than traditional malloc/free implementations.

                                                                                                    However, languages without tracing/reference-counting GC tend to use the heap more frugally in the first place, and that creates the performance difference in real-world programs.

                                                                                                    1. 1

                                                                                                      The Go compiler does escape analysis and can (to some extent) avoid putting things on the heap if they do not have to be.

                                                                                                    2. 3

                                                                                                      But rust doesn’t do reference counting most of the time.

                                                                                                      1. 2

                                                                                                        Rust only does reference counting when you ask for it by typing Rc, generally yourself.

                                                                                                  1. 8

                                                                                                    Strongly endorsed.

                                                                                                    We go to great lengths to uphold the Go 1 Compatibility Promise, ensuring Go programs keep compiling and running correctly with the latest release. Backwards incompatible changes are simply not contemplated.

                                                                                                    (We are in the early days of a “Go 2” effort which might introduce backwards incompatible changes with module-level opt-in, which should allow forward progress without breaking existing code or fracturing the ecosystem.)

                                                                                                    This has a lot of costs, mostly borne by the team (as opposed to community developers), but a critical dividend is the fact that we can mostly assume people are using recent versions.

                                                                                                    I recommend assuming the Go packaged with your (non-rolling) distribution is part of the machinery that builds other packages, not something for you to consume, and to install an up to date Go if you directly use the go tool.

                                                                                                    Finally, with my security coordinator hat on: we currently only accept vulnerability reports and publish security patches for the latest two releases, so while the security teams of distributions might be doing backports, it’s unclear if anyone is looking for or acknowledging vulnerabilities in unsupported releases.

                                                                                                    P.S. it also occurs to me now that if distributions modify the Go compiler or standard library (for example to backport fixes) the builds they generate won’t be reproducible by others.

                                                                                                    1. 3

                                                                                                      Thank you for your work! There are way too many components that make things easier for themselves and create lots of work for their users. I find that a terrible trade-off and I really appreciate how Go does it better.

                                                                                                      A big reason that I can easily use the latest Go is that I only have to update the SDK. I don’t have to also roll out a new runtime to all my deployment targets.

                                                                                                    1. 5

                                                                                                      The first general reason to use a current version of Go is that new releases genuinely improve performance and standard library features.

                                                                                                      You’d think that should apply to all programming language implementations? 🤔

                                                                                                      1. 4

                                                                                                        Heh. Yes, pretty much. Typically there are strong reasons not to upgrade (specifically, a lack of care taken with backwards compatibility) - which IME does not particularly apply to Go.

                                                                                                        1. 1

                                                                                                          Yeah, but the way Go does it should really be how everyone does it.

                                                                                                        2. 1
                                                                                                          • Bug fixes and performance improvements
                                                                                                        1. 6

                                                                                                          This has the same problem as referencing a Docker image by hash:

                                                                                                          RUN apt-get install -y nginx=1.14.2-2+deb10u1
                                                                                                          

                                                                                                          There will eventually be another nginx release, and eventually the version you pinned will disappear from the repository. To make this work you’ll need to maintain an apt repository that retains every version you’ve ever pinned.

                                                                                                          The “starting point” advice at the bottom is good, however.

                                                                                                          1. 2

                                                                                                            The “starting point” advice at the bottom is good, however.

                                                                                                            I’m not sure if it is, tbh. Yes, pinning versions of packages in pip’s requirements.txt is good advice if you want to rebuild something and know it will work. But it’s bad advice if you care about security fixes, bug fixes that newer versions can bring (along with the risk of breaking changes/new bugs).

                                                                                                            There’s a balance somewhere in there between “never update anything, ever” and “edge or GTFO”, how do you find it?

                                                                                                            1. 3

                                                                                                              Finding that balance is definitely tough.

                                                                                                              I think pinning is good for reproducibility, but it’s important to bump your dependencies to whatever is getting security fixes. Unfortunately not every package has a long-term-support channel.

                                                                                                              1. 2

                                                                                                                Yes, you have to periodically re-pin at the least.

                                                                                                                Also, just having a full list of your dependencies is useful. For example, GitHub can generate security notices if you’ve pinned a known-vulnerable version. Dependabot knows how to automatically issue update PRs when new releases occur.

                                                                                                                1. 2

                                                                                                                  That’s why you need tests, and lots of them: then you can safely bump dependencies and have your CI pipelines catch issues that arise (if you are using GitHub, Dependabot’s PRs will get automatically tested).

                                                                                                                  Of course, some dependencies (those interfacing with outside systems) will be mostly mocked out in tests, so you will still need to do some manual testing if those get updated.

                                                                                                                2. 2

                                                                                                                  The longer I do this, the more I think I need a way to get notifications from dependencies when they patch a security issue, so I can make sure to apply it.

                                                                                                                  That’s my path to get off the treadmill of edge while keeping security patches applied.

                                                                                                                  I’m a fan of the counterintuitive approach proposed by one of the Go package managers - installing the oldest version that matches your dependency list instead of the newest.

                                                                                                                  This reduces the need for a lock file (older versions do not suddenly appear) and tends to give you combinations of software that are compatible with each other.

                                                                                                                  The downside is you now need a way to figure out what versions you can/should bump, and tooling to apply those changes to your dependency list. However, those have been implemented.
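Go modules themselves work this way via minimal version selection: given a hypothetical go.mod like the one below, the build uses exactly v1.2.0 even after newer releases appear, and versions only move when you explicitly run `go get -u` or edit the requirement.

```
module example.com/app

go 1.21

// MVS picks the minimum version satisfying all requirements,
// so a new upstream release never changes this build by itself.
require github.com/some/dep v1.2.0
```

(The module paths here are made up; the mechanism is Go’s standard one.)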

                                                                                                                  1. 2

                                                                                                                    GitHub will do this for you if your dependency-spec file (for any of various languages) is checked in.

                                                                                                                    For Python specifically, there are tools like pyup and requires.io that will notify you of new releases of dependencies (and which ones are security issues). I’m sure there are similar services for other languages.

                                                                                                                  2. 2

                                                                                                                    I recommend pinning everything and having automation to update those pins. A tool like https://github.com/renovatebot/renovate can automatically create pull/merge requests and merge them if the tests pass.

                                                                                                                    1. 1

                                                                                                                      There are other articles on the site about doing security updates and the like, but it’s true it’s not mentioned there. I’ll try to update it with some links tomorrow. Or, perhaps, that may be a whole new article to write…

                                                                                                                    2. 1

                                                                                                                      I like to use https://hub.docker.com/r/debian/snapshot which is based on a specific timestamp of snapshot.debian.org

                                                                                                                    1. 17

                                                                                                                      In the docs for http.Transport, . . . you can see that a zero value means that the timeout is infinite, so the connections are never closed. Over time the sockets accumulate and you end up running out of file descriptors.

                                                                                                                      This is definitely not true. You can only bump against this condition if you don’t drain and close the http.Response.Body you get from an http.Client, but even then, you’ll hit the default MaxIdleConnsPerHost (2) and connections will cycle.

                                                                                                                      Similarly,

                                                                                                                      The solution to [nil maps] is not elegant. It’s defensive programming.

                                                                                                                      No, it’s providing a constructor for the type. The author acknowledges this, and then states

                                                                                                                      nothing prevents the user from initializing their struct with utils.Collections{} and causing a heap of issues down the line

                                                                                                                      but in Go it’s normal and expected that the zero value of a type might not be usable.

                                                                                                                      I don’t know. They’re not bright spots, but spend more time with the language and these things become clear.

                                                                                                                      1. 4

                                                                                                                        If you really want to prevent users of your library from using the {} syntax to create new objects instead of using your constructor, you can choose not to export the struct type & instead export an interface that is used as the return value type of the constructor’s function signature.

                                                                                                                        1. 10

                                                                                                                          You should basically never return interface values, or export interfaces that will only have one implementation. There are many reasons, but my favourite one is that it needlessly breaks go-to-definition.

                                                                                                                          Instead, try to make the zero value meaningful, and if that’s not possible provide a New constructor and document it. That’s common in the standard library so all Go developers are exposed to the pattern early enough.

                                                                                                                          1. 2

                                                                                                                            Breaking go-to-definition like that is the most annoying thing about Kubernetes libraries.

                                                                                                                          2. 4

                                                                                                                            That would be pretty nonidiomatic.

                                                                                                                            1. 1

                                                                                                                              Yeah, this is a good approach sometimes, but the indirection can be confusing.

                                                                                                                          1. 4

                                                                                                                            If the goal is to reduce bloat and install only what you need, then why not use alpine as a base instead of debian/ubuntu or centos?

                                                                                                                            1. 3

                                                                                                                              For Python specifically:

                                                                                                                              1. musl is subtly incompatible with glibc in a bunch of ways. I’ve encountered this in the real world, and others have as well. These bugs do get fixed, but using musl risks obscure bugs.

                                                                                                                              2. Python wheels (pre-compiled binary packages) don’t work with musl. So whereas on glibc-based distros many packages can simply be downloaded and installed, on Alpine they need to be compiled.

                                                                                                                              Long version: https://pythonspeed.com/articles/base-image-python-docker-images/

                                                                                                                              For other languages these concerns may be less applicable, e.g. Go tends not to use libc much, opting to do syscalls directly.

                                                                                                                              1. 1

                                                                                                                                The base layer that is bigger with Debian compared to Alpine is shared across images anyway, and you avoid any compatibility issues between glibc and musl, for example.