1.  

    I wonder if this is more or less the end of the road for the librem 5?

    This offers similar openness, same software capability and slightly better specs (I think…). And hardware privacy switches too. For half the price, assuming Chinese assembly meets your requirements. (If you need a US-manufactured device, Librem offers that for $2000-ish, and Pine does not.)

    And given the respective companies’ track records, it seems likely the ’Pro will ship in quantity well before the listed 52-week lead time on the non-US-manufactured Librem 5.

    I was on the fence between replacing my iPhone’s battery or just replacing the whole phone. I think this announcement has pushed me toward replacing the battery and revisiting in 8 - 12 months to see if this has developed into something that could be a daily driver for me.

    1. 6

      Purism is targeting a different market; they’re trying to make an ecosystem, a readymade device, that someone can use out of the box and be satisfied with. I don’t think they’re doing all too well with it, but it’s the intent that counts. What Pine does is make tinker toys for engineers. They save money on the device by punting software engineering to the users. (The battery management part made me queasy.)

      1.  

        I agree with your characterizations of the two intents. What I meant to do in my comment is question whether, given that the software works as well on the pinephone as it does on the Librem, Pine has backdoored their way into (soon) hitting that “someone can use out of the box and be satisfied with” goal for technical users better than Purism has, even though they were aiming for something else entirely.

      2.  

        A big difference for me is that the L5 has privacy switches that are usable. That is, I want the camera and microphone off until I’m receiving a call, then I can flip the switch and answer. With the pinephone (and it looks like the pinephone pro) the switches are buried inside the back, which makes them interesting but not very usable in day-to-day life.

        Another point as mentioned in other comments is that Purism is funding the software development to make the ecosystem. Pinephone gets the benefit of that without much of the cost. I hope both succeed so that there is a gradient of capability from the low end to the high end, and a real move off of the duopoly we have now.

        1.  

          Interesting point about the switches.

          I think Pine has done better than Purism working to get drivers for their components supported by the upstream kernel. I think they’ve also done better getting help out to the various distributions when it comes to supporting the platform. By not having their own distro but getting hardware into developers’ hands, there is a whole ecosystem now. I think if it had been left to purism, you’d have one distro (PureOS) whose development is mostly done behind closed doors plus a couple nice contributions to mobile GNOME.

          In particular, they seemed to have zero interest in upstreaming the PureOS kernel patchset before Pine came along.

          I also hope both succeed, but I’m glad to see a wide-open development model making more of a play.

          1.  

            The development of PureOS appears to be done in the open; the contribution of libhandy for GNOME was essential to making most of the apps work well in that form factor, and Purism have been supportive of KDE and UbuntuTouch as well. Not sure where the impression of “zero interest in upstreaming the PureOS kernel patchset” comes from or that the pinephone had an influence on that… my impression was the opposite. It’s never fun to maintain forks of the kernel when it’s not necessary, and resources are already tight and heavily invested in the rest of the software stack.

            Purism has made a lot of missteps around communication especially with respect to shipping devices to backers. I haven’t observed any missteps around their commitment to using completely free software with no binary blobs required and getting RYF certification.

        2.  

          I can relate to the hesitance about making one of these devices your daily driver. What in particular is stopping you? Personally, I’d really want to be sure I can get ample battery life and that all my favorite apps can run, like Discord and Gmail. Obviously, it also shouldn’t drop calls, fail to receive texts, or do anything like that, either.

          1.  

            Can it run Employer mandated apps? Whether you’re delivering food or working as an engineer, they’re a thing now. Plus whatever app your local government mandates you put on your phone to check COVID-related restrictions.

            To be honest, I think that for most people, the possibility of not owning a phone running one of the two major platforms is long gone.

            1. 8

              A couple of points. One is that many employers are supportive of variations of GNU/Linux. If yours isn’t, then really consider finding one that better aligns with your values.

              When governments mandate apps there must really be a push to say loudly and clearly that proprietary applications are not acceptable. Quietly accepting and using a Google or Apple device means that the line will keep shifting in the wrong direction. For many (most? really all?) there is still the possibility of not owning a phone from Google or Apple and participating fully in society. It won’t stay that way unless people demand it.

              1.  

                This comment should be boosted, especially for the fact that we are getting closer to the world where only Google or Apple is accepted. This is why I want to support Pine, even if their stuff is not ready.

                1.  

                  Of course employers are supportive of GNU/Linux - when it powers their servers. When it starts to interfere with their employees’ ability to log in to the network, review their schedule or attend meetings, you will see their support dry up quickly.

                  Not owning a Googapple phone is equivalent to not owning a phone as far as policy makers are concerned. Yes, your accessibility is considered, along with that of the elderly, homeless and poor. The notion of an employable person not owning one is increasingly alien to them.

                2.  

                  Can it run Employer mandated apps?

                  I would strongly recommend refusing to allow any company stuff on your private property. Not only is it likely to be spyware, but like, it is also not your problem to do their IT provisioning for them.

                  1.  

                    It’s not your problem to provision motor vehicles for your employer either, but for many people, using their private car for work isn’t just normal, it’s the cornerstone of their employability.

                  2.  

                    I’ve never had an employer or government mandate any mobile app. They can’t even usually mandate that you have a mobile device, unless they are providing one.

                    I know lots of people who run various apps that make their employer or government interactions more convenient, but never were they mandatory.

                    1.  

                      I’ve had an employer mandate that I either, at my option, accept their app on my device or carry their device and use it. I chose to carry two devices, but I understand why my colleagues chose to install the “mandated” apps instead.

                      1.  

                        Yeah, if they offer me a device I’m always going to take it. No work shit on personal devices ever, but also why would I not take an extra device to have around?

                    2.  

                      I don’t really have any mandated apps other than OTP authenticators, but there’s a lot I’d miss (e.g. quickly sending a message on Slack or whatever services I use for pleasure, plus stuff like decent clients for whatever service). I could go without, but it certainly wouldn’t be a daily driver.

                      What I might miss more is the stuff other than third-party apps/ecosystem: the quality of the phone and the OS itself, and whether they meet my needs. I doubt Pine will make something sized like my 12 mini, or that Plasma Active/phosh will hit the same quality of mouthfeel as iOS (which, as a Windows Phone 7/8 refugee, I think has good mouthfeel since they copied live tiles).

                      1.  

                        I’m not sure. I remember hearing one of the Linux phones supported Android apps now

                        1.  

                          I strongly suspect this “Pro” will have enough oomph to run anbox rather nicely. It runs on the current pinephones, but I don’t think it runs particularly well.

                          I don’t know how much of the sensors and other bits (that, say, a ridesharing driver’s app might need) are exposed via Anbox on a pinephone. I also don’t know how much of the google play services stack works in that environment.

                      2.  

                        Last time I checked in, the call, text and MMS functionality was just not ready for prime time. I know that’s been improving quickly, but I haven’t squinted too hard to see where it is. For me to make it a daily driver, I’d need:

                        1. Rock solid phone calls
                        2. Extremely reliable SMS/MMS receipt
                        3. Good headset support
                        4. Mostly reliable SMS/MMS sending
                        5. Very good 4G data support
                        6. Ability for another device to reliably tether/use the phone as a hotspot
                        7. A battery that goes an entire workday without needing a charge when being used for voice calls, SMS/MMS and some light data usage

                        I’ve heard 1,2,3 were not quite there. 4 is supposedly there for SMS but not MMS, which last time I looked would keep me from using it on some group threads. I believe 5 is there and suspect 6 is just fine. 7 is probably good enough given the swappable, easy-to-find battery.

                        When it comes to apps on the phone itself, GPS would be nice to have, but as long as there’s a browser that is somewhat usable, I could stand to tether an iPad or Android tablet until app coverage was robust. I prefer to work from a laptop or tablet anyway. I’d also like to have a decent camera on my phone, but that’s not a hard requirement for me to daily drive one.

                        1.  

                          As someone who has not used the sms, MMS, or voice features of any of my devices in a decade, it’s good to be reminded that some people still use these features.

                    1. 2

                      Why not try to bring these features into the main git project? Then it would not only be an order of magnitude faster, but also reach orders of magnitude more developers. If it is the time for Rust in Linux, it is also time for Rust in Git.

                      1. 8

                        If it is the time for Rust in Linux, it is also time for Rust in Git.

                        I thought Rust in Linux is only for drivers on a few platforms. Rust in git means you will cut off everyone that is on an arch not supported by Rust.

                        1.  

                          True, but we can’t wait forever for those platforms to “Rust or die”. If Rust and Zig are to any extent the new infrastructure languages, and enough good new software that nobody wants to rewrite in C is being written in them, then it’s a bit inevitable.

                          1.  

                            Well, it’s not as though I can run Git on my Amiga anyway. There are just too many unixisms in Git.

                            1.  

                              I do know someone’s working on an AmigaOS port of libgit2. Never say never!

                              1.  

                                I know there’s an AmigaOS 4 (PowerPC) port, but not one for OS 3 (m68k), so it’s obviously nontrivial. It’s not as if Amiga developers are unfamiliar with source control; the OS itself has survived migrations all the way from RCS.

                                And, I might add, it’s just in time for m68k support being added to LLVM.

                          2. 5

                            If it is the time for Rust in Linux, it is also time for Rust in Git.

                            That seems like a rather complicated way to say that it’s not time for Rust in Git.

                            1. 2

                              I imagine they might have some justifiable hesitance to add a dependency to their builds.

                              Aside from that, I agree that bringing it in (as well as rewriting “rebase” as a command that invokes “move”, so that it transparently gets faster without users having to change anything) is an obviously good idea.

                            1. 12

                              This is really cool and I love the site’s tagline, which speaks to me on so many levels:

                              Solving yesterday’s problems today

                              I find most of my open source software work and my hobbyist microelectronics work to congregate around a similar approach of improving or extending the core tech that was available in the 80s, starting with eschewing GUIs and embracing the hacker ethos. It’s unfortunately not a very lucrative proposition: that ship sailed long, long ago and there’s not much to be gained (in a literal sense, on a macro level) by embracing legacy tech over the modern life that revolves around web and mobile. Alas..

                              1. 7

                                GUIs were very much around in the 80s, and if you were embracing the hacker ethos, you would be implementing one.

                                The Oberon system is a product of the hacker ethos, or read the interview with Bill Joy linked here this week.

                                1. 3

                                  But don’t you know? The hacker ethos is “better things aren’t possible”.

                                  1. 1

                                    I should have said “modern” GUIs. I very much enjoy wiring up an OLED or eInk to a microcontroller and have written my own GUI (a thin X11-compatible thing) for PXE/USB bootable minimal systems, but there’s no denying that modern GUIs are far too disconnected from the underlying machines and hacker ethos.

                                    1. 7

                                      Why does the hacker ethos require things to be connected to the underlying machine? Can a person have the hacker ethos by delighting in the complexity and weirdness of CSS?

                                      1.  

                                        The underlying machine is just an infinite tower of turtles, anyway. “Machine code” in most modern CPUs is a facade, an emulator on a very complex microarchitecture. The design of that CPU is a Verilog file abstracted from the physical layout of gates and wires.

                                        My dad was a hardware engineer (at Zilog, AMD, Xilinx) who never learned to program; to him, all software was a vague airy-fairy abstraction.

                                        I have a friend who’s a fab engineer, who worked a long time at Intel; to him, computers are layers of silicon compounds created by huge million-dollar machines by electrochemical processes I don’t understand.

                                      2. 3

                                        There are as many hacker ethos as there are hackers. It’s a Potter Stewart sort of thing.

                                        1.  

                                          The Hacker Ethos is to decide who is not a True ~~Scotsman~~ Hacker

                                    2. 6

                                      Hey, I was working in tech in the 80s, and GUIs were the most awesome thing around. I’d been hacking on Apple IIs and PETs and timeshared PDP-11s, and when I read the tech reports from Xerox PARC, and Ted Nelson’s “Dream Machines”, my head exploded.

                                      If you don’t think PARC, who were inventing GUIs in the 70s, were connected to the hacker ethos, go read about their work. They literally had to build their own minicomputers out of small-scale chips because Xerox suits wouldn’t let them buy from Data General. They rolled their own programmable microcode. They wrote four or five operating systems. They hacked the first laser printer out of a big Xerox copier with a freakin’ laser wired into it.

                                      1.  

                                        Holy shit that’s awesome!

                                        1.  

                                          Believe me, it’s not possible to convey in a paragraph how awesome PARC was. Steven Levy’s classic book “Hackers” has a good account, and “Fumbling The Future” is a book-length account of PARC and how Xerox failed to commercialize much of their work.

                                    1. 1

                                      I know, I know, it’s an HN link. 😅

                                      Lobsters was founded in reaction to moderation policy at HN; many of us define our culture partially in contrast to HN.

                                      That’s why I thought this response from the lead of the moderation team there would be especially interesting to us.

                                      1. 4

                                        I will say the quality of moderation on HN went up with dang effectively replacing pg (who loved to abuse his power there, IME; e.g. shadowbanning jcs, but he was known to suppress anything that made him or YC look bad), but he’s one guy and he’s spread really thin with the front page being a high-value spam target and the site being so big. As a result, the conversations on HN can get spicy in the bad way because no one’s there to prune the poisonous branches of a comment tree. (It could be worse - /r/programming is effectively unmoderated and spam rules the day there.)

                                        1. 2

                                          I’ve (intentionally) never used HN, so I just want to offer the counter-opinion that I don’t really perceive lobsters relative to the orange site. To me, lobsters is a place to share interesting links about tech topics with people who (minus a few exceptions) are trying really hard not to be jerks, many of whom have actual experience or knowledge about the topics being posted. The orange site isn’t the only place full of jerks, or people whose loudness-to-knowledge ratio is extremely high, so maybe it’s more like: Lobsters is different relative to almost every other message board I’m aware of.

                                          That said, I read the linked comment anyway, and it was both interesting and convinced me I’m right to stay away. :)

                                        1. 25

                                          Fascinating read. Audio was the thing that made me switch from Linux to FreeBSD around 2003. A bit before then, audio was provided by OSS, which was upstream in the kernel and maintained by a company that sold drivers that plugged into the framework. This didn’t make me super happy because those drivers were really expensive. My sound card cost about £20 and the driver cost £15. My machine had an on-board thing as well, so I ended up using that when I was running Linux.

                                          A bit later, a new version of OSS came out, OSS 4, which was not released as open source. The Linux developers had a tantrum and decided to deprecate OSS and replace it with something completely new: ALSA. If your apps were rewritten to use ALSA they got new features, but if they used OSS (as everything did back then) they didn’t. There was only one feature that really mattered from a user perspective: audio mixing. I wanted two applications to both be able to open the sound device and go ‘beep’. I think ALSA on Linux exposed hardware channels for mixing if your card supported it (my on-board one didn’t), while OSS didn’t support it at all. I might be misremembering and ALSA supported software mixing, OSS only hardware mixing. Either way, only one OSS application could use the sound device at a time and very few things had been updated to use ALSA.

                                          GNOME and KDE both worked around this by providing userspace sound mixing. These weren’t great for latency (sound was written to a pipe, then at some point later the userspace sound daemon was scheduled and then did the mixing and wrote the output) but they were fine for going ‘bing’. There was just one problem: I wanted to use Evolution (GNOME) for mail and Psi (KDE) for chat. Only one out of the KDE and GNOME sound daemons could play sound at a time and they were incompatible. Oh, and XMMS didn’t support ALSA, so if I played music then neither of them could do audio notifications.

                                          Meanwhile, the FreeBSD team just forked the last BSD licensed OSS release and added support for OSS 4 and in-kernel low-latency sound mixing. On FreeBSD 4.x, device nodes were static so you had to configure the number of channels that it exposed but then you got /dev/dsp.0, /dev/dsp.1, and so on. I could configure XMMS and each of the GNOME and KDE sound daemons to use one of these, leaving the default /dev/dsp (a symlink to /dev/dsp.0, as I recall) for whatever ran in the foreground and wanted audio (typically BZFlag). When FreeBSD 5.0 rolled out, this manual configuration went away and you just opened /dev/dsp and got a new vchan. Nothing needed porting to use ALSA, GNOME’s sound daemon, KDE’s sound daemon, PulseAudio, or anything else: the OSS APIs just worked.

                                          It was several years before audio became reliable on Linux again and it was really only after everything was, once again, rewritten for PulseAudio. Now it’s being rewritten for PipeWire. PipeWire does have some advantages, but there’s no reason that it can’t be used as a back end for the virtual_oss thing mentioned in this article, so software written with OSS could automatically support it, rather than requiring the constant churn of the Linux ecosystem. Software written against OSS 3 20 years ago will still work unmodified on FreeBSD and will have worked every year since it was written.
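
                                          To make “the OSS APIs just worked” concrete, a playback application needed roughly this and nothing more; a minimal sketch, with error handling omitted and the format, channel count and rate picked arbitrarily:

                                              /* Open the device, set format, channels and rate, then write PCM frames.
                                               * On FreeBSD each open() of /dev/dsp gets its own vchan, so several
                                               * applications can do this at once and the kernel mixes them. */
                                              #include <fcntl.h>
                                              #include <unistd.h>
                                              #include <sys/ioctl.h>
                                              #include <sys/soundcard.h>

                                              int main(void) {
                                                  int fd = open("/dev/dsp", O_WRONLY);
                                                  int fmt = AFMT_S16_LE, channels = 2, rate = 44100;

                                                  ioctl(fd, SNDCTL_DSP_SETFMT, &fmt);        /* the driver may adjust these and write back */
                                                  ioctl(fd, SNDCTL_DSP_CHANNELS, &channels);
                                                  ioctl(fd, SNDCTL_DSP_SPEED, &rate);

                                                  short frames[8192] = {0};                  /* a short burst of stereo silence */
                                                  write(fd, frames, sizeof frames);          /* write() blocks while the kernel buffers it */

                                                  close(fd);
                                                  return 0;
                                              }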

                                          1. 8

                                            everything was, once again, rewritten for PulseAudio. Now it’s being rewritten for PipeWire

                                            Luckily there’s no need for such a rewrite because pipewire has a PulseAudio API.

                                            1. 1

                                              There was technically no need for a rewrite from ALSA to PulseAudio, either, because PulseAudio had an ALSA compat module.

                                              But most applications got a PulseAudio plug-in anyway because the best that could be said about the compat module is that it made your computer continue to go beep – otherwise, it made everything worse.

                                              I am slightly more hopeful for PipeWire, partly because (hopefully) some lessons have been drawn from PA’s disastrous roll-out, partly for reasons that I don’t quite know how to formulate without sounding like an ad-hominem attack (tl;dr some of the folks behind PipeWire really do know a thing or two about multimedia and let’s leave it at that). But bridging sound stacks is rarely a simple affair, and depending on how the two stacks are designed, some problems are simply not tractable.

                                              1. 2

                                                One could also say that a lot of groundwork was done by PulseAudio, revealing bugs etc., so the landscape that PipeWire enters in 2021 is not the same one that PulseAudio entered in 2008. For starters there’s no aRts, ESD, etc. anymore; these are long dead and gone, and the only thing that matters these days is the PulseAudio API and the JACK API.

                                                1. 3

                                                  I may be misremembering the timeline but as far as I remember it, aRts, ESD & friends were long dead, gone and buried by 2008, as alsa had been supporting proper (eh…) software mixing for several years by then. aRts itself stopped being developed around 2004 or so. It was definitely no longer present in KDE 4, which was launched in 2008, and while it still shipped with KDE 3, it didn’t really see much use outside KDE applications anyway. I don’t recall how things were in Gnome land, I think ESD was dropped around 2009, but pretty much everything had been ported to canberra long before then.

                                                  I, for one, don’t recall seeing either of them or using either of them after 2003, 2004 or so, but I did have some generic Intel on-board sound card, which was probably one of the first ones to get proper software mixing support on alsa, so perhaps my experience wasn’t representative.

                                                  I don’t know how many bugs PulseAudio revealed but the words “PulseAudio” and “bugs” are enough to make me stop considering going back to Linux for at least six months :-D. The way bug reports, and contributors in general, technical and non-technical alike, were treated is one of the reasons why PulseAudio’s reception was not very warm, to say the least, and IMHO it’s one of the projects that kickstarted a very hostile and irresponsible attitude that prevails in many Linux-related open-source projects to this day.

                                            2. 4

                                              I might be misremembering and ALSA supported software mixing, OSS only hardware mixing.

                                              That’s more like it on Linux. ALSA did software mixing, enabled by default, in a 2005 release. So it was a pain before then (you could enable it at least as early as 2004, but it didn’t start being easy until 1.0.9 in 2005)… but long before godawful PulseAudio was even minimally usable.

                                              BSD did the right thing though, no doubt about that. Linux never learns its lesson. Now Wayland lololol.

                                              1. 4

                                                GNOME and KDE both worked around this by providing userspace sound mixing. These weren’t great for latency (sound was written to a pipe, then at some point later the userspace sound daemon was scheduled and then did the mixing and wrote the output) but they were fine for going ‘bing’.

                                                Things got pretty hilarious when you inevitably mixed an OSS app (or maybe an ALSA app, by that time? It’s been a while for me, too…) and one that used, say, aRTs (KDE’s sound daemon).

                                                What would happen is that the non-aRTs app would grab the sound device and cling to it very, very tightly. The sound daemon couldn’t play anything for a while, but it kept queuing sounds. Like, say, Gaim alerts (anyone remember Gaim? I think it was still gAIM at that point, this was long before it was renamed to Pidgin).

                                                Then you’d close the non-aRTs app, and the sound daemon would get access to the sound card again, and BAM! it would dump like five minutes of gAIM alerts and application error sounds onto it, and your computer would go bing, bing, bing, bang, bing until the queue was finally empty.

                                                1. 2

                                                  I’d forgotten about that. I remember this happening when people logged out of computers: they’d quit BZFlag (yes, that’s basically what people used computers for in 2002) and log out, aRTs would get access to the sound device and write as many of the notification beeps as it could to the DSP device before it responded to the signal to quit.

                                                  ICQ-inspired systems back then really liked notification beeps. Psi would make a noise both when you sent and when you received a message (we referred to IM as bing-bong because it would go ‘bing’ when you sent a message and ‘bong’ when you received one). If nothing was draining the queue, it could really fill up!

                                                  1. 1

                                                    Then you’d close the non-aRTs app, and the sound daemon would get access to the sound card again, and BAM! it would dump like five minutes of gAIM alerts and application error sounds onto it, and your computer would go bing, bing, bing, bang, bing until the queue was finally empty.

                                                    This is exactly what happens with PulseAudio to me today, provided the applications trying to play the sounds come from different users.

                                                    Back in 2006ish though, alsa apps would mix sound, but OSS ones would queue, waiting to grab the device. I actually liked this a lot because I’d use an oss play command line program and just type up the names of files I want to play. It was an ad-hoc playlist in the shell!

                                                  2. 4

                                                    This is just an example of what the BSDs get right in general. For example, there is no world in which FreeBSD would remove ifconfig and replace it with an all-new command just because the existing code doesn’t have support for a couple of cool features - it gets patched or rewritten instead.

                                                    1. 1

                                                      I’m not sure I’d say “get right” in a global sense, but definitely it’s a matter of differing priorities. Having a stable user experience really isn’t a goal for most Linux distros, so if avoiding user facing churn is a priority, BSDs are a good place to be.

                                                      1. 1

                                                        I don’t know; the older I get the more heavily I value minimizing churn and creating a system that can be intuitively “modeled” by the brain just from exposure, i.e. no surprises. If there are architectural reasons why something doesn’t work (e.g. the git command line), I can get behind fixing it. But stuff that just works?

                                                    2. 4

                                                      I guess we can’t blame Lennart for breaking audio on Linux if it was already broken….

                                                      1. 7

                                                        You must be new around here - we never let reality get in the way of blaming Lennart :-/

                                                        1. 2

                                                          Same as with systemd, there were dozens of us for whom everything worked before. I mean, I mostly liked pulseaudio because it brought a few cool features, but I don’t remember sound simply stopping working before. Sure, it was complicated to set up, but if you didn’t change anything, it simply worked.

                                                          I don’t see this as blaming. Just stating the fact that if it works for some people, it’s not broken.

                                                        2. 3

                                                          Well, can’t blame him personally, but the distros who pushed that PulseAudio trash? Absolutely yes they can be blamed. ALSA was fixed long before PA was, and like the parent post says, they could have just fixed OSS too and been done with that before ALSA!

                                                          But nah better to force everyone to constantly churn toward the next shiny thing.

                                                          1. 4

                                                            ALSA was fixed long before PA was, and like the parent post says, they could have just fixed OSS too and been done with that before ALSA!

                                                            Huh? I just set up ALSA recently and you very much had to specifically configure dmix, if that’s what you’re referring to. Here are the official docs on software mixing. It doesn’t do anything as sophisticated as PulseAudio does by default. Not to mention that on a given restart ALSA devices frequently change their device IDs. I have a little script on a Void Linux box that I used to run as a media PC which creates the asoundrc file based on outputs from lspci. I don’t have any such issue with PulseAudio at all.
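
                                                            For reference, the kind of thing that script ends up generating is the classic dmix recipe; a sketch, assuming card 0 and 44.1 kHz (the actual card/device numbers are whatever your machine reports):

                                                                # ~/.asoundrc: route the default PCM through dmix so multiple apps can play at once
                                                                pcm.!default {
                                                                    type plug
                                                                    slave.pcm "dmixed"
                                                                }

                                                                pcm.dmixed {
                                                                    type dmix
                                                                    ipc_key 1024          # any unique integer, shared by the clients doing the mixing
                                                                    slave {
                                                                        pcm "hw:0,0"      # first card, first device (the IDs that like to move around)
                                                                        rate 44100
                                                                    }
                                                                }

                                                                ctl.!default {
                                                                    type hw
                                                                    card 0
                                                                }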

                                                            1. 3

                                                              dmix has been enabled by default since 2005 in alsa upstream. If it wasn’t on your system, perhaps your distro changed things or something. The only alsa config I’ve ever had to do is change the default device from the hdmi to analog speakers.

                                                              And yeah, it isn’t sophisticated. But I don’t care, it actually works, which is more than I can say about PulseAudio, which even to this day has random lag, and whose updates break the multi-user setup (which very much did not just work). I didn’t want PA but Firefox kinda forced my hand and I hate it. I should have just ditched Firefox.

                                                              Everyone tells me pipewire is better though, but I wish I could just go back to the default alsa setup again.

                                                              1. 6

                                                                Shrug, I guess in my experience PulseAudio has “just worked” for me since 2006 or so. I admit that the initial rollout was chaotic, but ever since it’s been fine. I’ve never had random lag and my multi-user setup has never had any problems. It’s been roughly 15 years, so almost half my life, since PulseAudio has given me issues, so at this point I largely consider it stable, boring software. I still find ALSA frustrating to configure to this day, and I’ve used ALSA for even longer. Going forward I don’t think I’ll ever try to use raw ALSA ever again.

                                                            2. 1

                                                              I’m pretty sure calvin is tongue in cheek referencing that Lennart created PulseAudio as well as systemd.

                                                          2. 3

                                                            I cannot upvote this comment more. The migration to ALSA was a mess, and the introductions of Gstreamer*, Pulse*, or *sound_daemon fractured the system more. Things in BSD land stayed much simpler.

                                                            1. 3

                                                              I was also ‘forced’ out of the Linux ecosystem because of the mess in the sound subsystem.

                                                              After spending some years in FreeBSD land I got hardware that was not FreeBSD supported at that moment, so I tried Ubuntu … what a tragedy it was. When I was using FreeBSD my system ran for months and I rebooted only to install security updates or to upgrade. Everything just worked. Including sound. In Ubuntu land I needed to do a HARD RESET every 2-3 days because the sound would go dead and I could not find a way to reload/restart whatever caused that ‘glitch’.

                                                              Details here:

                                                              https://vermaden.wordpress.com/2018/09/07/my-freebsd-story/

                                                              1. 1

                                                                From time to time I try to run my DAW (Bitwig Studio) in Linux. A nice thing about using DAWs from Mac OS X is that, they just find the audio and midi sources and you don’t have to do a lot of setup. There’s a MIDI router application you can use if you want to do something complex.

                                                                Using the DAW from Linux, if it connects via ALSA or PulseAudio, mostly just works, although it won’t find my audio interface from PulseAudio. But the recommended configuration is with JACK, and despite reading the manual a couple times and trying various recommended distributions, I just can’t seem to wrap my head around it.

                                                                I should try running Bitwig on FreeBSD via the Linux compatibility layer. It’s just a Java application after all.

                                                                1. 7

                                                                  Try updating to Pipewire if your distribution supports it already. Then you get systemwide Jack compatibility with no extra configuration/effort and it doesn’t matter much which interface the app uses. Then you can route anything the way you like (audio and MIDI) with even fewer restrictions than MacOS.

                                                                  1. 1

                                                                    I’ll give that a try, thanks!

                                                              1. 8

                                                                I wish people spent even half as much effort writing about other OSes as they do on Plan 9. I want to have fun on computers by getting away from the mistakes of Unix, not moving even closer to them.

                                                                1. 5

                                                                  We can only hope that the nascent wave of open hardware combined with the now widely pervasive availability of common protocols and formats will lead to new OS experimentation.

                                                                  Personally I would love a Lisp or Smalltalk OS that doesn’t have to waste mental space skirting around Unix stuff.

                                                                  1. 4

                                                                    What do you see as the mistakes of Unix?

                                                                    1. 12

                                                                      For starters, overemphasis on string parsing (from a language not really ideal for it, at that) and serialization as strings is one mistake that Plan 9 embraced further; it’s the “use more violence” conclusion to the problem.

                                                                      1. 6

                                                                        Piles of strings, while aesthetically unpleasing, are always there for you and your users. /shrug

                                                                        1. 4

                                                                          Sure, but if you’re designing the whole system from scratch, you can make sure that e.g. structured objects are always there for you and your users too.

                                                                    2. 1

                                                                      One of the reasons I find Plan 9 so interesting is that it is the only distributed operating system that I’ve ever seen really work well in practice. Do you know of any others? Every other operating system that I’ve seen in use has still followed the old UNIX / Windows / Mac OS model of each machine being its own world, where a network is just a set of separate islands with the ability to send messages to each other.

                                                                      I can illustrate this with a concrete example. I frequently work on remote machines:

                                                                      • Sometimes I’m using a lightweight laptop as a terminal but doing the actual work on a more powerful desktop.
                                                                      • Sometimes I’m working from home, but need to do the actual work on a machine in the office.
                                                                      • Sometimes I’m working in an office, but need to run commands or edit files on a server in a data center somewhere.
                                                                      • Sometimes I’m running a program that needs a powerful CPU or plenty of RAM (such as a compiler or an IDE) but need to edit files then execute the result on a low-powered embedded system.

                                                                      Today, I find that I almost always do this over SSH. This sucks because it means that I’m restricted to using programs that work with a pseudo-teletype. I’ve used Vim for a couple of decades now (and Emacs, and pico, etc.) so I’m pretty efficient at doing this… but, come on, it’s 2021 and I’m using an interface that was obsolete in the 1980s. This is ridiculous.

                                                                      There are GUI alternatives: VNC, RDP, X11 forwarding, Citrix, NX, etc. But I find them unusable over the Internet: the latency is just too painful. More importantly - and the point of this example - they still follow the same paradigm as SSH. They provide a remote desktop, rather than a remote teletype, but it’s still an attempt to provide the illusion that I’m sitting in front of the remote machine: it’s still a single machine, albeit one that’s physically distant.

                                                                      This is very different from the distributed system that Plan 9 provides. It’s the use of namespaces in Plan 9, the way I can build up a system that consists of resources provided by disparate, distributed machines, and work with them all in a uniform manner, that really distinguishes Plan 9 from anything else I’ve used.
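
                                                                      To make that concrete, the canonical examples from the Plan 9 papers pull remote resources into the local namespace with import and bind; a sketch from memory, with helix standing in for whatever remote machine you have:

                                                                          % import helix /net              # use helix's network interfaces as if they were local
                                                                          % import helix /proc /n/helix    # see helix's processes under /n/helix
                                                                          % bind -a /n/helix /proc         # ...or union them into the local /proc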

                                                                      Occasionally, a single application provides a feature that has a similar feeling. For example, I love the way that I can use Tramp in Emacs to open a file on a remote machine, and then when I use Magit it operates on the remote git repository completely automatically and completely consistently with the way it works on local git repositories when I open a local file. It’s that kind of uniformity that Plan 9 provides, but at the level of the whole operating system, not just one feature within one application.

                                                                      I don’t know of any other operating system that works in such a distributed way. Do you? What kind of OSes would you like to see people writing about?

                                                                      1. 2

                                                                        Plan 9 shamelessly ripped off Domain in terms of its UI and system concepts. It’s hilarious how much they copied. No one really writes much about that though.

                                                                        For distributed, there were a lot of research systems (Plan 9 is one of them; it just overshadowed the rest), like Sprite. That all tapered off for reasons though (see utah2k). What ended up shipping is RPC; Microsoft gets a lot of mileage out of their RPC stack since all sorts of things on the system can take advantage of it. Nowadays, devs just want REST instead of RPC… Of course, keep in mind the fallacies of distributed computing. (edit: Microsoft got a lot of the people working on RPC from Apollo, and Apollo did much of the work for distributed computing in practice on workstations in the 80s.)

                                                                        I’ve found RDP very usable over the internet, but X11 is a mess. VNC somewhere in between. Perhaps not transparent, but it is practical.

                                                                        1. 1

                                                                          By Domain do you mean the Apollo OS? I’ve never used that - I’ve heard of it as I think it had an RPC mechanism that inspired CORBA, although I might be misremembering that. I hadn’t heard that it was similar to Plan 9 in any way though: what did the Bell Labs people take from Domain?

                                                                          I’d like to read more about Sprite: I vaguely remember it being mentioned here on lobste.rs a while ago but I can’t remember any details. I’d love to see more distributed OS write-ups. It’s the thing that frustrates me most about modern computing: I have more devices than ever but still, in the third decade of the 21st century, they don’t work together properly!

                                                                          1. 2

                                                                            Yes, Domain/OS was Apollo’s.

                                                                            There’s a lot of stuff in Domain that also pops up in Plan 9 beyond distributed computing:

                                                                            • Pastel-coloured window manager (superficial)
                                                                            • How a program can take over the terminal to draw in
                                                                            • pads -> acme and the plan 9 terminal (how they de-emphasize line editing in favour of a text editor you can evaluate commands in)
                                                                            • Regexp oriented editor
                                                                            • Environment variables in symlinks

                                                                            (I’ve never seen Pike et al acknowledge the similarities.)

                                                                            But there’s also a lot they didn’t take. You might be interested in the design principles amongst other docs.

                                                                            1. 1

                                                                              Thank you for the links to those documents! That’ll be my bedtime reading for a few nights :)

                                                                              Somewhat tangentially, those similarities between Domain and Plan 9 seem a little tenuous - at least, there are well documented provenances for all of them that don’t involve Domain, so I expect they are more like convergent evolution than rip offs, but of course I don’t know for sure.

                                                                      2. 1

                                                                        Out of curiosity, are there any specific operating systems you have in mind?

                                                                      1. 7

                                                                        Great article. Similar things pop up whenever youngsters think they can replace old proven tech with $FLAVOR_OF_MONTH. NoSQL is to SQL as..

                                                                        • JSON is to XML
                                                                        • Matrix is to XMPP
                                                                        • DAB is to FM
                                                                        • Hyperloop is to rail
                                                                        1. 10

                                                                          You mean NoSQL is significantly better for its intended use? Or are you just picking really bad examples.

                                                                          JSON is to XML

                                                                          JSON is an easy-to-parse serialisation format with a well-defined object model. It has a few weaknesses (no way to serialise 64-bit integers is the big one). Most of the ‘parsing minefield’ problems are related to handling invalid JSON which is far more of a problem with XML because there are so many ways to get things wrong in the standard.
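
                                                                          To be precise about the integer point: it’s an interoperability limit rather than a grammar limit. The JSON text can carry any integer, but a decoder that stores numbers as IEEE 754 doubles (as JavaScript and plenty of libraries do) silently loses exactness above 2^53, which is why APIs that need full 64-bit IDs tend to ship them as strings. A rough illustration:

                                                                              #include <stdio.h>
                                                                              #include <stdint.h>

                                                                              int main(void) {
                                                                                  int64_t n = 9007199254740993LL;   /* 2^53 + 1: a perfectly valid JSON number */
                                                                                  double  d = (double)n;            /* what a double-based JSON decoder actually stores */

                                                                                  printf("as int64:  %lld\n", (long long)n);   /* 9007199254740993 */
                                                                                  printf("as double: %.0f\n", d);              /* 9007199254740992 -- the +1 is gone */
                                                                                  return 0;
                                                                              }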

                                                                          In a language that has unicode string support, you can write a JSON parser in about 200 LoC (I have). The XML standard is very complicated and so your choices are either something that supports a subset of XML or using libxml2 because no one else can manage to write a compliant parser (and libxml2 doesn’t have a great security record). Even just correctly handling XML entities is a really hard problem and so a lot of things punt and use a subset that doesn’t permit them. Only now they’re not using XML, they’re using a new poorly specified language that happens to look like XML.

                                                                          XML does a lot more than JSON. It can be both a markup language and an object serialisation language and it allows you to build arbitrary shapes on top, but that’s also its failing. It tries to solve so many problems that it ends up being a terrible solution for any of them.

                                                                          Matrix is to XMPP

                                                                          I was involved in the XMPP process around 2002ish and for a few years. It was a complete mess. The core protocol was more or less fine (though it had some issues, including some core design choices that made it difficult to use a lot of existing XML interfaces to parse) but everything else, including account registration, avatars, encryption, audio / video calling, and file transfer were defined by multiple non-standards-track proposals, each implemented by one or two clients, many depending on features that weren’t implemented on all servers. There wasn’t a reference server implementation (well, there was. It was jabberd. No, the jabberd2 rewrite. No, ejabberd… The JSF changed their mind repeatedly) and no reference client library, so everything was interoperable at the core protocol level and nothing was interoperable at the level users cared about.

                                                                          In contrast, Matrix has a much more fully specified set of core functionality and a permissively licensed reference implementation of the client interfaces.

                                                                          DAB is to FM

                                                                          DAB uses less bandwidth and requires less transmitter power for the same audio quality than FM. DAB+ (which is now over 15 years old) moved to AAC audio. Most of the early deployment problems were caused by either turning the compression up far too high or by turning the power down to a fraction of what the FM transmitter was using. For the same power budget and aiming for the same audio quality, you can have more stations and greater range with DAB than FM.

                                                                          Hyperloop is to rail

                                                                          Okay, you can have that one.

                                                                          1. 2

                                                                            JSON is an easy-to-parse serialisation format with a well-defined object model. It has a few weaknesses (no way to serialise 64-bit integers is the big one). Most of the ‘parsing minefield’ problems are related to handling invalid JSON which is far more of a problem with XML because there are so many ways to get things wrong in the standard.

                                                                            Funny, because json.org tells me 64-bit integers are perfectly valid. In fact any size integer is valid.

                                                                            In a language that has unicode string support, you can write a JSON parser in about 200 LoC (I have).

                                                                            Don’t write your own parsers. You will get it wrong and make things an even worse mess.

                                                                            In contrast, Matrix has a much more fully specified set of core functionality

                                                                            I see there is an actual Matrix spec now. Not bad. No RFC though. But you are right that the core spec of XMPP is very barebones. You need to add lots of XEPs on top to make it useful. Modern servers and clients do this. What the Matrix people have done is take developer effort away from XMPP, fracturing the federated chat ecosystem. Yes I’m upset about this.

                                                                            One problem with Matrix is the godawful URI syntax. Instead of being able to say user@example.com like every other protocol, the Matrix devs in their junior wisdom decided to go with @user:example.com instead. How do I link to my Matrix account from my website? If things were sensible it would just be matrix:user@example.com. Perhaps matrix:@user:example.com? Or should my OS just know that the protocol “@user” means Matrix? Who knows.

                                                                            All this is without getting into perhaps Matrix’ biggest problem: resource use.

                                                                            DAB uses less bandwidth and requires less transmitter power for the same audio quality than FM

                                                                            Yes, this is all true. But you’re also throwing out one of the main points of broadcast radio: to be able to reach the masses, especially in times of crisis. There are ways of retrofitting FM with digital subcarriers such that existing receivers don’t become paperweights. Because it is FM, you can use GMSK which has quite nice Eb/N0 behavior. Not as nice as OFDM used by DAB but eh.. Good enough.

                                                                            edit: I realized I’m wrong about the modulation. It’s always going to be X-over-FM where X is any modulation. It must always run above the stereo pilot wave. Said pilot wave may be omitted, giving mono FM and more bandwidth for subcarriers.

                                                                            Anyway, there’s been debate around this in Sweden and the only people who want DAB are the people selling DAB receivers. The broadcasting people don’t want it, the people running the transmitters don’t want it and there is zero pressure from the public.

                                                                            1.  

                                                                              True, the radio itself is dying slowly; investing in more channels doesn’t really make sense for the consumer or the producer.

                                                                          2. 9

                                                                            XML has certainly proven harmful, judging by its list of exploits, for a serialization format of all things. JSON doesn’t have that issue.

                                                                            1. 6

                                                                              You mean things like the billion laughs attack? That’s not enabled by default in any modern XML parser. JSON has its own set of parsing nightmares, and lacks a standardized way of writing schemas or handling extensions. On top of that you have things like SOAP, XSLT, XPATH and so on, all standardized.
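
                                                                              For anyone who hasn’t seen it, the billion laughs attack is nothing more than nested entity expansion; a trimmed-down sketch (real payloads nest around ten levels, expanding to roughly a billion copies of the innermost entity):

                                                                                  <?xml version="1.0"?>
                                                                                  <!DOCTYPE lolz [
                                                                                    <!ENTITY lol  "lol">
                                                                                    <!ENTITY lol2 "&lol;&lol;&lol;&lol;&lol;&lol;&lol;&lol;&lol;&lol;">
                                                                                    <!ENTITY lol3 "&lol2;&lol2;&lol2;&lol2;&lol2;&lol2;&lol2;&lol2;&lol2;&lol2;">
                                                                                  ]>
                                                                                  <lolz>&lol3;</lolz>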

                                                                              1. 4

Do people write new SOAP APIs anymore? I’m also not sure who still uses XPath or XSLT.

                                                                                IMHO, XML is a good document format, but has a lot of ambiguity for serialization (i.e. attributes or elements?).

                                                                                1. 2

                                                                                  Do people write new SOAP APIs anymore?

The EU does, as do many parts of the Swedish government.

                                                                                  IMHO, XML is a good document format, but has a lot of ambiguity for serialization (i.e. attributes or elements?).

This is a bit of a strange one with XML, I agree. Attributes have two useful properties, however: there cannot be more than one of each, and they don’t nest. This could be enforced on elements with a schema, but that came later…
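
To make the attribute-versus-element ambiguity concrete, here is a small Go sketch showing the same record marshalled both ways with encoding/xml (the struct and field names are invented for illustration):

```go
package main

import (
	"encoding/xml"
	"fmt"
)

// BookAttr serialises its fields as attributes: at most one of each, no nesting.
type BookAttr struct {
	XMLName xml.Name `xml:"book"`
	Title   string   `xml:"title,attr"`
	Year    int      `xml:"year,attr"`
}

// BookElem serialises the same fields as child elements, which may repeat and nest.
type BookElem struct {
	XMLName xml.Name `xml:"book"`
	Title   string   `xml:"title"`
	Year    int      `xml:"year"`
}

func main() {
	a, _ := xml.Marshal(BookAttr{Title: "SICP", Year: 1985})
	e, _ := xml.Marshal(BookElem{Title: "SICP", Year: 1985})
	fmt.Println(string(a)) // <book title="SICP" year="1985"></book>
	fmt.Println(string(e)) // <book><title>SICP</title><year>1985</year></book>
}
```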

                                                                                2. 4

                                                                                  I’m not aware of any JSON parsing nightmares, could you elaborate?

                                                                                  1. 9

                                                                                    This article posted on this very site a day or two ago: Parsing JSON is a Minefield
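
Two classic sharp edges of the sort that article catalogues are easy to reproduce even in a well-regarded parser; a quick illustration against Go’s standard-library encoding/json (the sample documents are mine):

```go
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// Duplicate keys: the JSON spec leaves the behaviour undefined.
	// encoding/json silently keeps the last value; other parsers keep the
	// first, or reject the document outright.
	var dup map[string]int
	_ = json.Unmarshal([]byte(`{"a": 1, "a": 2}`), &dup)
	fmt.Println(dup["a"]) // 2

	// Big integers: decoding into interface{} goes through float64, so
	// integers above 2^53 silently lose precision.
	var v map[string]interface{}
	_ = json.Unmarshal([]byte(`{"n": 9007199254740993}`), &v)
	fmt.Printf("%.0f\n", v["n"]) // 9007199254740992
}
```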

                                                                                    1. 12

                                                                                      If parsing JSON is a minefield, parsing XML is a smoking crater.

                                                                                      Look XML is fine as a document description language, but it’s crazy to pretend like it is somehow a superior ancestor to JSON. JSON and XML just do different things. JSON is a minimal, all purpose serialization format. XML is a document description language. You can of course cram anything into anything else, but those are different jobs and are best treated separately.

                                                                                      1. 5

                                                                                        And now we have things like JWT, where instead of DoS via (effectively this is what entity-expansion is) zip bombing, we can just jump straight to “you don’t need to check my credentials, I’m allowed to do admin things” attacks.

                                                                                        Like it or not, JSON the format is being transformed into JSON the protocol stack, with all the trouble that implies. Just as XML the format was turned into XML the protocol stack in the last age.
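
For anyone who hasn’t seen that failure mode: the classic example is a verifier that honours the alg field from the attacker-controlled header. A hedged sketch of what such a forged, unsigned token looks like (the claims are invented; this is the well-known alg:none shape, not specific to any particular library):

```go
package main

import (
	"encoding/base64"
	"fmt"
)

func main() {
	// A JWT is base64url(header) "." base64url(payload) "." signature.
	// With alg set to "none" the signature part is empty, and any verifier
	// that trusts the header will accept whatever claims we put in.
	b64 := base64.RawURLEncoding.EncodeToString
	header := b64([]byte(`{"alg":"none","typ":"JWT"}`))
	payload := b64([]byte(`{"sub":"attacker","admin":true}`))
	forged := header + "." + payload + "."
	fmt.Println(forged)
}
```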

                                                                                        1. 5

                                                                                          JWT is just poorly designed, over and above its serialization format. But as bad as it is, it is significantly more sane than whatever the SAML people were thinking. To be fair though, both JSON and XML are better than ASN.1. In all cases, the secure protocol implementers chose an off the shelf serialization format which was a significant mistake for something that needs totally different security properties than ordinary serialization. One would hope that the next scheme to come along won’t do this, but I’m guessing it will just be signed protobuffs or some such, and the same problems will occur.

                                                                                        2. 3

                                                                                          billion laughs

                                                                                          I already addressed this.

                                                                                          XML is mature and does everything JSON does and more. Its maturity is evident in the way JSON people try to reinvent everything XML can already do. From a langsec perspective the only thing JSON has going for it is that it is context-free. There are XML dialects that have this property as well, if I remember correctly.

                                                                                          1. 2

                                                                                            does everything JSON does and more.

                                                                                            My suggestion is that “and more” is bad.

                                                                                            1. 1

                                                                                              Tooling is good actually. And as I said to the other person, JSON people are busy reinventing most tools that already exist for XML.

                                                                                              1. 1

                                                                                                JSON people are busy reinventing most tools that already exist for XML

                                                                                                Are they? Things I never use: JSON Schema (just adds noise to an internal project; can’t force it on an external one); JPath (your data should not be nested enough to need this); code generators beyond https://mholt.github.io/json-to-go/ (if your code can be autogenerated, it is a pointless middle layer and should be dropped); anywhere you’d use SAX with XML, you can probably use ND-JSON instead; XSLT is a weird functional templating language (don’t need another templating language, thanks)… Is there something I’m missing? I mean, the internet is big, and people reinvent everything, but I can’t say that there are XML tools that I’m jealous of.
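
On the “use ND-JSON where you’d use SAX” point, the streaming loop really is short in Go; a minimal sketch (the event type and sample data are invented):

```go
package main

import (
	"encoding/json"
	"fmt"
	"io"
	"strings"
)

type event struct {
	ID   int    `json:"id"`
	Name string `json:"name"`
}

func main() {
	// One JSON object per line; json.Decoder consumes them one at a time
	// without loading the whole stream, which covers the SAX-ish use case.
	ndjson := "{\"id\":1,\"name\":\"alpha\"}\n{\"id\":2,\"name\":\"beta\"}\n"
	dec := json.NewDecoder(strings.NewReader(ndjson))
	for {
		var e event
		if err := dec.Decode(&e); err == io.EOF {
			break
		} else if err != nil {
			panic(err)
		}
		fmt.Println(e.ID, e.Name)
	}
}
```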

                                                                                                Maybe we’re in different domains though. I just can’t really imagine having a job where I’m confused about whether to use XML or JSON. The closest is today I saw https://github.com/portabletext/portabletext which is a flavor of JSON for rich text. But I think that project is mistaken and it should just define a sane subset of HTML it supports instead of creating a weird mapping from HTML to JSON.

                                                                                                1.  

                                                                                                  Things I never use

Yes, you never use them. But there are people who try to write protocols using JSON, and they just end up reinventing XML, poorly. This means yet another dependency for everyone to pull in. Someone using JSON in their proprietary web app matters little. Someone baking it into an RFC matters a lot.

                                                                              1. 14

Daily reminder that CDE was not actually that good and vendors adopting it were basically giving up on the desktop. CDE+Motif is design-by-committee mediocrity; it’s like the DMV designed a desktop. Many actions are painfully obtuse under it - try adding a launcher to the big toolbar, for instance. The only people who remember it fondly are people whose experience of it is limited to poking at it for a few minutes or seeing it in magazines. It’s incredible how much microcomputer GUIs outclassed most workstations in terms of ease of use and quality of APIs.

                                                                                If you’re interested in “Unix workstation but they tried a little harder on UI”, Open Look and the IRIX desktop are far more interesting. Open Look in particular is very interesting because it’s a direct descendant and inheritor of the Xerox GUI legacy.

                                                                                1. 6

                                                                                  Yep, and the author of this acknowledges that and describes some of the improvements that come from being built on FVWM.

I’ve never used IRIX before, and I do have some regrets about that.

                                                                                  1. 4

                                                                                    So… I used my share of HP/UX, Solaris, and AIX from ’93 - ‘03. A lot more than poking it for a few minutes. And I don’t recall seeing much of it in magazines of the era, which were predominantly focused on Mac and on NT. I remember the aesthetic fondly. Not the behavior.

                                                                                    My fond recollections of the aesthetic were enough for me to create a new account on my workstation and install this. It’s faithfully reproducing the behavior as well as the aesthetic, and it seems worse now in contrast to some of the things that have come along since the early 90s. I’m super impressed by what this project did with the tools they chose, but after using it for about an hour, I was quite happy to log out and switch back to my usual qtile setup. This was cool and looks great and I’d be astonished if more than a dozen die-hards use it once they’ve posted their screenshots to /r/unixporn.

                                                                                    1. 3

                                                                                      CDE+Motif is design by committee mediocrity, it’s like the DMV designed a desktop.

LOL! This is spot on. Motif looked… okay, but working with it involved understanding so many obtuse design-by-committee concepts.

                                                                                      1. 3

                                                                                        NextStep was, unsurprisingly, the class of the lot. System 7 was still a superior experience. The less said about the rest of them, the better.

                                                                                        1. 1

                                                                                          Each to their own, I guess. I used CDE legitimately on Solaris desktops (we had Sun Rays in the UNIX group at the University) for probably a year. It was pretty good! It had virtual desktops, it was legible, the resource footprint was relatively modest, and it was quite snappy.

                                                                                          In the end the only thing that made me switch away was that I discovered I liked tiling window managers more, and I found dwm. If GUIs like Windows are your jam I expect dwm or i3 is even further away than CDE!

                                                                                          1. 2

It had virtual desktops, it was legible, the resource footprint was relatively modest, and it was quite snappy

No one in the 90s said this, because at the time Motif was a bloated pig. Now it’s the lightweight alternative. Funny how things change.

                                                                                            In the end the only thing that made me switch away was that I discovered I liked tiling window managers more, and I found dwm. If GUIs like Windows are your jam I expect dwm or i3 is even further away than CDE!

I think i3 is a good implementation of its concept, primarily because of the sane defaults. I enjoy non-Mac-style interfaces (i.e. I respect Apollo’s UI, which Plan 9 shamelessly ripped off, as well as Interlisp, Genera, etc.); I just have it out for desktops that cost a lot of money for a worse experience (i.e. for years xterm and maybe Emacs with patches were as good as it got on X).

                                                                                            1. 1

                                                                                              I like i3 because it’s at least taking a different direction. I do think that it’s kind of amazing that, 20 years after Apple killed the Spatial Finder dead, Gnome/KDE/whatever are still trying to bring back that Windows 2000 magic. I have Thoughts about this that the margins of this post are too narrow to contain.

                                                                                              1. 1

Gnome tried for a bit to bring back the spatial Finder, but the userbase screamed at them for years until they conceded and turned spatial mode off by default. KDE has always aped MS more than Apple. Nowadays, they don’t even try for that classic Mac feel.

                                                                                          2. 1

                                                                                            The stock desktop at my first job was CDE, on top of a Red Hat Linux derivative called Linux Pro. I’d been using Slackware for several years and used FVWM95 on my personal desktop, so I dumped that standard install after about two weeks of fighting with CDE. Obtuse is a great description.

                                                                                            I do admit some nostalgia, though. CDE looked like it meant business.

                                                                                          1. 4

                                                                                            Maybe I’m just missing it, but this writeup seems to gloss over the main event, at least as viewed through my filter bubble.

                                                                                            The problem comes back to a similar concern around the client being outdated in some way and not having the new ISRG Root X1 installed, meaning it can no longer validate certificate chains as it has no Root CA to anchor on.

                                                                                            The outdated clients (some not outdated by very much; GnuTLS was only patched in June 2020) in many cases did have ISRG Root X1 installed, but ignored it because they preferred the cross-signed version of ISRG Root X1 sent by the server. Removing the cross-signed root from one’s chain would have been enough to fix anything affected in this way. That’s what I did, figuring more people IRC from CentOS than ancient Androids, and it resolved pretty much everyone’s problems; so far I haven’t heard from anyone who needed the cross-signed cert.

                                                                                            1. 1

                                                                                              Do you have automation for that? AFAICT if I nuke part of my fullchain it’ll just come back in 90 days

                                                                                              1. 1

                                                                                                We have some custom scripts around it anyway to request all the certs centrally and distribute them, so it was easy (if not very pretty) to hack in some awk to cut out the cross-signed root right before we send the certs out. But if you use dehydrated (which I’d recommend anyway) I think you can use --preferred-chain 'ISRG Root X1' to get the non-cross-signed chain straight from LE.
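
If anyone wants to do the same surgery without awk, the filtering is only a few lines with Go’s crypto/x509; a rough sketch (the input filename is an assumption, and error handling is minimal):

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// Read a fullchain.pem and drop the cross-signed ISRG Root X1
	// (subject "ISRG Root X1", issuer "DST Root CA X3"); every other
	// block is written back out unchanged.
	data, err := os.ReadFile("fullchain.pem")
	if err != nil {
		panic(err)
	}
	var out []byte
	for {
		var block *pem.Block
		block, data = pem.Decode(data)
		if block == nil {
			break
		}
		if block.Type == "CERTIFICATE" {
			cert, err := x509.ParseCertificate(block.Bytes)
			if err == nil &&
				cert.Subject.CommonName == "ISRG Root X1" &&
				cert.Issuer.CommonName == "DST Root CA X3" {
				continue // skip the expired cross-sign
			}
		}
		out = append(out, pem.EncodeToMemory(block)...)
	}
	fmt.Print(string(out))
}
```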

                                                                                              2. 0

                                                                                                Yes, that was our (unpleasant) experience as well: updating ca-certificates was not enough, we also had to upgrade openssl and gnutls. In the end we solved the problem by switching to ZeroSSL (we have a ton of older VMs for testing and upgrading them all was not an option). The whole ordeal left quite a bad taste, I doubt we will touch Let’s Encrypt again if we can help it. And their attempt at spinning it as a good thing (“standing on our own feet”) just adds insult to injury.

                                                                                                1. 13

                                                                                                  The whole ordeal left quite a bad taste, I doubt we will touch Let’s Encrypt again if we can help it.

                                                                                                  I don’t really understand this at all. The bugs were in other software and could just as easily have been triggered by another cert, but you blame LE for it. And I don’t know what they were supposed to do differently. Were you expecting them to somehow have and get away with a perpetual non-expiring root cert?

                                                                                                  1. 4

                                                                                                    I broadly agree—the expiration was manifestly not LE’s fault—but I suspect that if they’d done more testing they might have chosen not to default to the chain with the expired cross-sign. It broke more things than anyone expected.

                                                                                                    But… I didn’t test it either. Apparently hardly anyone did. And given it’s a public benefit doing this for free, I don’t particularly feel that they owed me the testing I couldn’t be bothered to do.

                                                                                                    1. 6

                                                                                                      I guess I just see this as the rehearsal run for the expirations of older and longer-lived certs from “traditional” root CAs. We were all going to have to deal with it sooner or later, and some of the bugs and faulty assumptions turned up by this one have been kind of scary and I think it’s good to be exposing them.

                                                                                                      1. 1

I mean, it seems easier to me to get vendors to push a new root CA than to figure out the exact mix of cross-signing rules that won’t peeve off a diverse set of implementations, anyways.

I’m tempted to say “fuck it, just ship a 20-year root certificate and we’ll replace it with all the other certs come 2038, and only sign 1-month certificates in case we need to revoke it”, but I suppose that isn’t security, is it?

                                                                                                    2. 1

                                                                                                      The bugs were in other software and could just as easily have been triggered by another cert, but you blame LE for it.

                                                                                                      Yes, but when a non-trivial portion of the web depends on your service, saying that it’s not our bug and therefore we are going to go ahead and break things is not a good strategy, IMO.

                                                                                                      And I don’t know what they were supposed to do differently.

                                                                                                      I am not an expert (especially when it comes to cross-signing, alternative paths, etc) and I could very well be wrong here (in which case please correct me) but from their initial announcement my take is that they could have looked for a new well-trusted root (probably by going to one of the older ones and perhaps paying them some non-trivial amount of money) but they decided not to.

                                                                                                      And speaking of predictions, I did expect this to happen, I just didn’t expect it to be this bad: it’s one thing to update ca-certificates (which, at least on Debian, you can just copy from any newer version and it will install fine on any older) and another upgrading foundational libraries like libssl and libgnutls (for example, there are no fixed versions for older Debian releases).

                                                                                                1. 6

                                                                                                  I actually like Safari, so “Safari but it has uBlock Origin” is very appealing.

                                                                                                  1. 2

                                                                                                    I too use Safari, but there are things I’m not super crazy about with it. This seems like Camino, which I used to use and liked a lot, but since it’s using WebKit, probably significantly less likely to be abandoned.

                                                                                                  1. 1

                                                                                                    The AVIF authors really should’ve cancelled the project in favor of JPEG XL. Let an actual image codec take over the world.

                                                                                                    1. 4

                                                                                                      How’d that work out for JPEG2000? HEIC and AVIF actually have traction. It does help that accelerated rendering is basically cheaper with the video-derived codecs.

                                                                                                      1. 3

One problem with HEIC is that it’s really slow without acceleration; HEIC images from my iPhone take about 3 seconds each to load. Browsing a directory is not doable without some thumbnail cache. Not sure why libheif isn’t using hardware acceleration: maybe it’s not implemented, or maybe it just doesn’t work on my particular machine; I couldn’t really figure it out and just accepted that it’s slow. AVIF has similar performance characteristics in my testing.

                                                                                                        Personally I’d consider that a huge downside for a generic wide-spread image format. “Your site loads really slow for me and takes up 100% CPU”, “oh, you don’t have the right kind of computer to do hardware-based HEIC decoding”.

                                                                                                        Not sure how this compares to JPEG XL since I never used it, but Wikipedia states “JPEG XL is about as fast to encode and decode as old JPEG using libjpeg-turbo and an order of magnitude faster to encode and decode compared to HEIC with x265”, so that sounds a lot better.

                                                                                                        Also, both Chromium and Firefox seem to have JPEG XL support in testing, so it seems like it’s forthcoming. JPEG 2000 was never supported in any browser AFAIK, and the biggest hurdle wasn’t technical but patents/licensing (JPEG XL is royalty-free).

                                                                                                        1. 3

                                                                                                          One problem with HEIC is that it’s really slow without acceleration

                                                                                                          I mean, that’s what JPEG was like in the early 90s - it took seconds for one to render. Now we don’t even think of them.

                                                                                                          1. 3

                                                                                                            Eh, going back to ‘90s performance is not something I’m looking forward to. Having the same performance (measured in time I need to wait) as 25 years ago is kinda silly. Having worse performance is very silly.

                                                                                                            1. 1

                                                                                                              I remember using a program for DOS/Win 3.1 that implemented lookup tables to speed up JPG rendering on my first serious computer - a 386 with 2MB RAM and no math co-processor.

                                                                                                      1. 19

                                                                                                        They do mention it in passing, but I really can’t help but feel that the approach outlined here is probably not the best option in most cases. If you are measuring your memory budget in megabytes, you should probably just not use a garbage collected language.

                                                                                                        1. 19

                                                                                                          All of the memory saved with this linker work had nothing to do with garbage collection.

                                                                                                          1. 7

Sure, but that’s tangential to my point. In a GCed language, doing almost anything will generate garbage. Calling standard library functions will generate garbage. This makes it difficult to have really tight control of your memory usage. If you were to use, for example, C++ (or Rust if you want to be trendy) you could carefully preallocate pretty much everything, and at runtime have no dynamic allocation (or very little, and carefully bounded, depending on your problem and constraints). This would be (for my skillset, at least) a much easier way to keep memory usage down. They do mention they have a lot of Go internals expertise, so maybe the tradeoff is different for them, but that seems like an uncommon scenario.
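
The preallocation pattern can be illustrated even in Go, though the point above stands that a GCed language makes it hard to keep the rest of the program (and its libraries) from allocating behind your back. A toy sketch of the idea (buffer size, loop, and the process stub are invented):

```go
package main

import "fmt"

// process stands in for whatever work is done on the buffer; it must not
// retain the slice, or the reuse trick breaks.
func process(b []byte) { _ = len(b) }

func main() {
	// Allocate the working buffer once, up front, and reuse it; the hot
	// loop then produces no garbage of its own, so GC pressure stays flat.
	buf := make([]byte, 0, 64*1024)
	for i := 0; i < 1000; i++ {
		buf = buf[:0] // reset length, keep capacity
		buf = append(buf, "packet payload goes here"...)
		process(buf)
	}
	fmt.Println("done, capacity still", cap(buf))
}
```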

                                                                                                            1. 1

                                                                                                              I wouldn’t say that, because it’s likely that they wouldn’t have been short on memory to begin with if they hadn’t used a GC language. (And yes, I’m familiar with the pros and cons of GC; I’m writing a concurrent compacting GC right now for work.)

                                                                                                            2. 2

                                                                                                              Only maybe. Without a gc long running processes can end up with really fragmented memory. With a gc you can compact and not waste address space with dead objects.

                                                                                                              1. 18

                                                                                                                If you’re really counting megs, perhaps the better option is to forgo dynamic heap allocations entirely, like an embedded system does.

                                                                                                                1. 4

                                                                                                                  Technically yes. But they probably used this to deploy one code base for everything, instead of rewriting this only for the iOS part.

                                                                                                                  1. 2

                                                                                                                    Exactly this. You can try to do this in a gced language, and even make some progress, but you will be fighting the language.

                                                                                                                    1. -2

                                                                                                                      You should probably write it all in assembly language too.

                                                                                                                      1. 7

I feel like you’re being sarcastic, but making most of the app avoid dynamic allocations is not a crazy or extreme idea. It’s not super common in phone apps, and the system API itself may force some allocations. But doing 90+% of the work in statically allocated memory and indexed arenas is a valid path here.

                                                                                                                        Of course that would require a different language than Go, which they have good reasons not to do.
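
For the curious, “indexed arenas” here means replacing pointers with small integer indices into one big preallocated slice, which also happens to be friendlier to a GC because the collector sees one object instead of thousands. A hedged Go sketch (the node type and arena API are made up):

```go
package main

import "fmt"

// node refers to its children by index into the arena rather than by pointer.
type node struct {
	value       int
	left, right int32 // -1 means "no child"
}

// arena is a grow-only pool of nodes addressed by index.
type arena struct{ nodes []node }

func (a *arena) alloc(v int) int32 {
	a.nodes = append(a.nodes, node{value: v, left: -1, right: -1})
	return int32(len(a.nodes) - 1)
}

func main() {
	a := &arena{nodes: make([]node, 0, 1024)} // preallocate up front
	root := a.alloc(10)
	left := a.alloc(5)
	right := a.alloc(20)
	a.nodes[root].left = left
	a.nodes[root].right = right
	fmt.Println(a.nodes[a.nodes[root].left].value) // 5
}
```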

                                                                                                                        1. 1

                                                                                                                          I’m being sarcastic. But one of the issues identified in the article is that different tailnets have different sizes and topologies - they rejected the idea of limiting the size of networks that would work with iOS which is what they’d need to do if they wanted to do everything statically allocated.

                                                                                                                          1. 2

                                                                                                                            they rejected the idea of limiting the size of networks

They’re already limited. They can’t use more than the allowed memory, so the difference is whether the app tells you that you’ve reached the limit or gets silently killed.

I believe that fragment was related to “how another team would solve it while keeping other things the same” (i.e. keeping Go). Preallocation/arenas requires moving away from Go, so it would give them more possible connections, not fewer.

                                                                                                                    2. 10

                                                                                                                      That is absolutely not my experience with garbage collectors.

Few are compacting/moving, and even fewer are designed to operate well in low-memory environments[1]. Go’s collector is neither.

On the other hand, it is usually trivial to avoid wasting address space in languages without garbage collectors, and an application-specific memory management scheme typically gives a 2-20x performance boost in a busy application. I would think this absolutely worth the limitations in an application like this.

[1]: not that I think 15 MB is terribly low-memory. If you can syscall 500 times a second, that equates to about 2.5 GB/sec of transfer filling the whole thing - a speed which far exceeds the current (and likely next two) generations of iOS devices.

                                                                                                                      1. 4

                                                                                                                        To back up what you’re saying, this presentation on the future direction that the Golang team are aiming to take is worth reading. https://go.dev/blog/ismmkeynote

At the end of that presentation there’s some tea-leaf reading about the direction that hardware development is likely to go in. Golang’s designers are betting on DRAM capacity improving in future faster than bandwidth improvements and MUCH faster than latency improvements.

                                                                                                                        Based on their predictions about what hardware will look like in future, they’re deliberately trading off higher total RAM usage in order to get good throughput and very low pause times (and they expect to move further in that direction in future).

                                                                                                                        One nitpick:

                                                                                                                        Few are compacting/moving,

                                                                                                                        Unless my memory is wildly wrong, Haskell’s generation 1 collector is copying, and I’m led to understand it’s pretty common for the youngest generation in a generational GC to be copying (which implies compaction) even if the later ones aren’t.

                                                                                                                        I believe historically a lot of functional programming languages have tended to have copying GCs.

                                                                                                                        1. 2

At the end of that presentation there’s some tea-leaf reading about the direction that hardware development is likely to go in. Golang’s designers are betting on DRAM capacity improving in future faster than bandwidth improvements and MUCH faster than latency improvements.

                                                                                                                          Given the unprecedented semiconductor shortages, as well as crypto’s market influence slowly spreading out of the GPU space, that seems a risky bet to me.

                                                                                                                          1. 1

                                                                                                                            That’s the short term, but it’s not super relevant either way. They’re betting on the ratios between these quantities changing, not on the exact rate at which they change. If overall price goes down slower than desired, that doesn’t really have any bearing.

                                                                                                                        2. 1

                                                                                                                          Aren’t most GCs compacting and moving?

                                                                                                                          The first multi-user system I used heavily was a SunOS 4.1.3 system with 16MB of RAM. It was responsive with a dozen users so long as they weren’t all running Emacs. Emacs, written in a garbage collected, interpreted language would have run well on a much smaller system if there was only one user.

                                                                                                                          The first OS I worked on ran in 16MB of RAM and ran a Java VM and that worked well.

                                                                                                                        3. 1

                                                                                                                          Any non-moving allocator is vulnerable to fragmentation from adversarial workloads (see Robson bounds), but modern size-class slab allocators (“segregated storage” in the classical allocation literature) typically keep fragmentation quite minimal on real-world workloads. (But see a fascinating alternative to compaction for libc-compatible allocators: https://github.com/plasma-umass/Mesh.)

                                                                                                                        4. 1

                                                                                                                          This does strike me as a place where refcounting might be a better approach, if you’re going to have any dynamic memory at all.

                                                                                                                          1. 1

With ref-counting you have problems with cycles and memory fragmentation. The short-term memory consumption is typically lower with ref-counting than with a compacting GC, but there are many more opportunities to have leaks and grow over time. For a long-running process I’m skeptical that ref-counting is a sound choice.

                                                                                                                            1. 1

                                                                                                                              Right. I was thinking that for this kind of problem with sharply limited space available you’d avoid the cycles problem by defining your structs so there’s no void* and the types form a DAG.

                                                                                                                          2. 1

                                                                                                                            Edit: reverting unfriendly comment of dubious value.

                                                                                                                          1. 5

                                                                                                                            Your last seven submissions were:

                                                                                                                            • xmake v2.5.8
                                                                                                                            • xmake v2.5.7
                                                                                                                            • xmake v2.5.6
                                                                                                                            • xmake v2.5.5
                                                                                                                            • xmake v2.5.4
                                                                                                                            • C/C++ build system, I use xmake
                                                                                                                            • xmake v2.5.3

                                                                                                                            Stop spamming.

                                                                                                                            1. 2

At some point I learned that Lobste.rs is not very fond of people submitting mostly their own work (a.k.a. self-promoting posts). So I got curious and went to see the author’s submissions.

Well, out of 89 submissions, 87 are “authored by ruki”, one is not marked but is from their personal blog (therefore, 88 self-authored submissions), and one submission, from three years ago, isn’t self-promoting.

                                                                                                                              1. 4

                                                                                                                                There are a few people who mostly submit their own stuff (@soatok comes to mind) that the community is fine with, because it’s consistently engaging and high quality. But even then I think it’s good etiquette to also submit and comment on other stories. I try to keep the ratio of my stuff : other stuff below 1:4.

                                                                                                                                I think at one point @pushcx estimated that frontpaging on Lobsters drives as much traffic as tens of thousands in marketing budget.

                                                                                                                                1. 3

What chafes me is that these are basically very minor releases. If it’s a big headlining release, OK (assuming you also submit other stuff…). If it’s chump change, why bother except for the clicks?

                                                                                                                                  1. 2

                                                                                                                                    I don’t think this is a small version change. I have a lot of new features and improvements for each version, and I also introduce them in detail in the article.

                                                                                                                                    In addition, I only submit an update every few months, and it is not very frequent. And I see that other shorter versions of articles are allowed. Why my article is considered spam, I am very confused, such as this one, https://lobste.rs/s/hjge7k/python_release_3_10_0 https://lobste.rs/s/qj806e/zig_0_8_0_release_notes https://lobste.rs/s/prutnh/zig_v0_7_0_released https://lobste.rs/s/oln4mx/llvm_13_released

                                                                                                                                    Well, if you are bored with this, I will not submit them in the future, I am very sorry.

                                                                                                                                    1. 3

                                                                                                                                      I don’t think this is a small version change. I have a lot of new features and improvements for each version, and I also introduce them in detail in the article.

                                                                                                                                      No doubt it seems that way to you, as you are the software author. But to us the changes are minor, sorry :(

                                                                                                                                      In addition, I only submit an update every few months, and it is not very frequent. And I see that other shorter versions of articles are allowed. Why my article is considered spam, I am very confused, such as this one, https://lobste.rs/s/hjge7k/python_release_3_10_0 https://lobste.rs/s/qj806e/zig_0_8_0_release_notes https://lobste.rs/s/prutnh/zig_v0_7_0_released https://lobste.rs/s/oln4mx/llvm_13_released

                                                                                                                                      Thing is, those who submit those posts don’t submit only those release notes. If someone came along from the Python project and submitted a story for every single minor release Python ever did, and only submitted those, well, it wouldn’t be very nice, even if those versions of python had some great features.

                                                                                                                                      Well, if you are bored with this, I will not submit them in the future, I am very sorry.

                                                                                                                                      I wouldn’t mind the occasional post detailing something novel about xmake, or the very occasional major release notes. But it would be nice if you participated in other ways as well.

                                                                                                                            1. 2

                                                                                                                              I’ve slightly altered the title because it’s not obvious it’s actually an interview.

                                                                                                                              1. 10

                                                                                                                                All this because Mozilla leadership still haven’t set up Firefox to take community funding directly, and instead want to use people’s donations on their irrelevant projects.

                                                                                                                                1. 3

                                                                                                                                  As I understand Mozilla’s legal structure, you cannot at present give money to Firefox at all.

                                                                                                                                  Donations given to the foundation cannot be passed to the corporation. The irrelevant projects you mention (and there are a lot of them) come out of the Firefox profits so are eating the seed corn directly. I seem to recall off-hand that a lot of the donation money goes on grants to external organisations.

                                                                                                                                  1. 2

                                                                                                                                    And how many people would actually give Firefox money directly?

                                                                                                                                    1. 6

                                                                                                                                      I’d give them $1/mo for sure. Maybe more, depending on what they did with it.

                                                                                                                                      1. 5

                                                                                                                                        maybe if you could specifically give money to fund the useful parts like FTP and RSS support, and ALSA

                                                                                                                                        1. 3

                                                                                                                                          I’ve donated as much as $75/mo to neovim. I don’t donate as much nowadays but if I could donate to a specific dev working on furthering my interests in firefox, I would.

                                                                                                                                          I wonder if something like Igalia’s open prioritization would work for Firefox itself.

                                                                                                                                          1. 2

We won’t know until they try. But for some points of reference: bcachefs, which is still an out-of-tree alpha-level project, gets $2k/month; WhatsApp in 2013-14, charging a dollar per year (easily avoidable), was decently profitable; Wikipedia gets lots of donations annually even though it doesn’t really need them; neovim gets probably $50k/year between various funding methods, and neovim is relatively obscure. You can still ask for money on the internet and get a decent sum. With as many users as Firefox has, they could definitely give it a go.

                                                                                                                                        1. 1

                                                                                                                                          I’ve heard sysprep is super janky w/ modern Windows. Has it been deprecated and replaced with something else yet?

                                                                                                                                          1. 4

Not that I know of, and yes, it’s really, really painful to create a good working sysprepped image. Lately I was looking into building Windows 11 Vagrant images for deployment on libvirt, which was another kind of fun:

• Windows 11 refuses to install without UEFI/Secure Boot
• Windows 11 refuses to install without a working TPM module

After working around all of that by making Packer pass a TPM emulation device (swtpm) to QEMU and use the TianoCore UEFI firmware, I had an automated install going after hours, only for it to fail during the sysprep phase because a OneDrive Appx package could not be uninstalled; the error messages that followed gave no exact reason. I removed the mentioned package manually and then sysprep finally worked.

All in all it took me about a day to get a working image, and I won’t touch that image ever again (until it breaks, for some reason).

                                                                                                                                          1. 1

                                                                                                                                            FWIW, if you’re looking for tooling (I actually am, since I run a private BGP mesh), Julia Evans wrote about that recently.

                                                                                                                                            1. 1

                                                                                                                                              The jab at Apple not supporting old devices is extremely misplaced.

You can jab at Apple for a lot: overpriced for the spec, really cringy marketing, the unification and walled-garden-ness of the devices.

But I’m hard-pressed to think of iOS, macOS or even Apple as pushing people to upgrade rapidly.

iPhones get updates for roughly 5 years (where the contemporary standard is 2), and the latest macOS release officially supports laptops from 9 years ago.

                                                                                                                                              1. 2

                                                                                                                                                The latest Windows (before 11), and the latest Linux distros, support machines from long before 2012. Machines from around 2010 weren’t actually bad. They’re perfectly fine now, maybe with a new SSD to replace the HDD. I think the jab at Apple is perfectly warranted; they’re worse at supporting old devices than the competition.

                                                                                                                                                1. 2

                                                                                                                                                  I semi-disagree. I’ve seen 2 i3s from ~2010 now (with enough RAM) where a fresh install of Win10 simply doesn’t work well enough to properly do anything without resorting to insults. CPU spins up for no apparent reason, everything takes ages to load (not an SSD, but also no real read/write numbers according to procman). It’s simply on the cusp of being unusable. But yeah, I’d say ~2012-13 or an i5 and it worked fine. (SSD usually is the difference between “a bit slow” and “I can’t believe this system is 10y old”)

                                                                                                                                                  1. 3

                                                                                                                                                    Sounds like you agree, as long as the laptop from 2010 isn’t using an i3? i3/i7 laptops from 2010 are still pretty good. I agree obviously that the laptops which were slow already in 2010 are probably too slow for 2021 for normal laptop use cases.

                                                                                                                                                  2. 1

                                                                                                                                                    Windows 10 is also 6.4 years old at this point. The closest equivalent MacOS version was El Capitan, which supported “everything that can run Mountain Lion”, which itself officially supported MacBooks from 2008; that’s 7 years for the weakest device in the line.

                                                                                                                                                    However, El Capitan is no longer supported as of 2019, so you could have had an 11-year-old laptop that was still officially supported, or a 13-year-old desktop with support (the extreme case).

                                                                                                                                                    Linux is a special beast all unto itself, but I’m pretty sure the mainstream distros are not supporting hardware that’s 11 years old with the default desktops.

                                                                                                                                                    My enthusiast grade laptop from that period had 2G of RAM and a dual core “2.0GHz” CPU with a pitifully small L1 cache (which would not fit modern GPT partition tables) and a shamefully inadequate IPC for modern workloads. I’m sure even running the background processes on a modern GNOME desktop would kill it.

                                                                                                                                                    https://www.notebookcheck.net/AMD-Turion-64-X2-TL-60-Notebook-Processor.39265.0.html

                                                                                                                                                    1. 2

                                                                                                                                                      All of this is unnecessary erasure of fully working platforms.

                                                                                                                                                      Our digital music teaching rooms are powered by AMD Phenom II x6 1045T and Intel Core2Quad Q9550, running Win10. Multiple cameras and instruments are combined via OBS and output 1080p streams to Skype, without ever dropping a single frame.

                                                                                                                                                      Going a generation younger, AMD FX processors are still in use among a bunch of family members and friends. 8-core 4GHz processors working wonders to edit even 4k video and the like.

                                                                                                                                                      On the extreme side, my FreeBSD laptops run a QX9300 and a P8800. YouTube 1080p60 runs without dropping frames, and for writing office documents and programming both are more than adequate.

                                                                                                                                                      I don’t expect Microsoft to tailor their experience to my platforms, but these computers still pull their weight for the workloads they are intended for, and those workloads did not magically get harder to run; 1080p video won’t suddenly become harder to decode after 2025. Declaring these platforms dead after 2025 by dropping Win10 support and locking them out of Win11 is wasteful to say the least.

                                                                                                                                                      1. 2

                                                                                                                                                        YouTube 1080p60 is quite frankly impossible on FreeBSD on a machine from that era: hardware acceleration is limited or force-disabled in modern web browsers on non-Windows/Mac platforms, newer codecs aren’t supported by your GPU or iGPU (since that CPU doesn’t have one), and software rendering performance is somewhere between dogshit and awful even on very powerful computers from 2017.

                                                                                                                                                        Are you lying to make a point? Or do you have some magic that I’m not aware of?

                                                                                                                                                        Personally I run a Xeon E3-1505M v6, which has H.264 hardware decoding, but Linux still renders YouTube on the CPU, causing 1080p60 video to bog down an entire core. Not sure it skips frames, but this is not a weak CPU; in fact it matches your CPU in TDP: https://www.cpu-world.com/Compare/742/Intel_Core_2_Extreme_Mobile_QX9300_vs_Intel_Xeon_E3-1505M_v6.html

                                                                                                                                                        Your broader point about workloads not getting harder completely ignores Spectre and Meltdown, but OK. However, it’s also the case that successive software updates assume more about the performance of your computer.

                                                                                                                                                        The same task you could do in 1995 on a 1995 computer takes many orders of magnitude more power to do now; a fantastic example is MSN vs Skype vs Teams, where the functionality hasn’t changed but Teams is still able to absolutely crush my machine, which is somewhere on the order of 20x more powerful than the machine I ran MSN on.

                                                                                                                                                        1. 1

                                                                                                                                                          It’s almost certainly GPU accelerating the video encode/decode. Those were high-end desktop platforms for the time and you can easily slap in a new modern GPU.

                                                                                                                                                          1. 1

                                                                                                                                                            But it’s not supported in Firefox or Chrome unless you’re on Windows or macOS.

                                                                                                                                                            1. 1

                                                                                                                                                              And they’re running Windows.

                                                                                                                                                              1. 1

                                                                                                                                                                Then we’re talking past each other a bit.

                                                                                                                                                                The parent said that he gets longer life out of BSD, whereas my experience can be the opposite (due to browser support limitations, mostly).

                                                                                                                                                          2. 1

                                                                                                                                                            I’m honestly confused where you get this impression from. On a 1080p screen, my T500 has no problem with 1080p60 content, and neither does my x200 on its 1280x800 display. I can totally whip out a camera to show it. Of course the GMA 4500 MHD does not support hardware decoding. But using MPV / YouTube-dl, the P8800 in my x200 has no problem playing back 1080p60 videos, and since libdav1d, my QX9300 also properly handles AV1 playback.

                                                                                                                                                            1. 1

                                                                                                                                                              Then you’re not really watching YouTube. You’re downloading a video and playing it back (you can probably argue the point that this is what YouTube’s website does anyway) but the broader point I was making is that those hacks are necessary because Linux/BSD are not well supported.

                                                                                                                                                              You might get away with a 10-year-old Windows machine, but on Linux you sometimes have to hack around it, because the browser support isn’t there.

                                                                                                                                                              I’m basically nerd sniping at this point though; I was just dumbfounded by the repeated assertions that “everything is fine” when my new systems don’t perform as well as I’d expect.

                                                                                                                                                              1. 2

                                                                                                                                                                Then you’re not really watching YouTube.

                                                                                                                                                                I mean I click a video in my subscription box and a Firefox Plugin automatically opens MPV. It’s pretty seamless.

                                                                                                                                                                But I understand your point now. The default case is not supported any more by these older platforms. With that I totally agree, they don’t have to be. That’s why I wrote:

                                                                                                                                                                I don’t expect Microsoft to tailor their experience to my platforms

                                                                                                                                                                …because that’s my job as a user wishing to continue using these machines. A machine of that age has to be set up for its intended workload. My point is that it can be set up for that workload: programming, office documents and media consumption in my case. Declaring these platforms dead is what I have a problem with, as long as this is possible. Microsoft’s desire to make even CPUs as recent as an FX-8370 obsolete by 2025 is simply a crime.

                                                                                                                                                                1. 2

                                                                                                                                                                  I agree with your sentiment entirely.

                                                                                                                                                        2. 1

                                                                                                                                                          “The extreme case” is kind of on point. It’s easy to find examples of favorable or unfavorable support from any manufacturer. Those of us here have lived experience though, which tends towards an average.

                                                                                                                                                          I’ve owned four Macs. Here’s how it went:

                                                                                                                                                          • 2001 iMac G3 could be upgraded to 10.3 (2003.) Officially it supports 10.4 (2005) but that’s a bit disingenuous since it shipped on DVD and the device didn’t have a DVD drive.
                                                                                                                                                          • 2005 iMac G5 could be upgraded to 10.5 (2007.)
                                                                                                                                                          • 2007 MacBook could be upgraded to 10.7 (2011.) This one hurts the most because the hardware is still so capable, even now.
                                                                                                                                                          • 2017 MacBook Air is still supported.

                                                                                                                                                          I think it’s fair to say that Apple’s support is gradually lengthening, so it’s possible people who haven’t followed them for a long time have a more favorable impression than those of us who bought G5s. That said, and this really applies to any manufacturer, encouraging upgrades that remove or reduce functionality stretches the definition of support. That 2007 MacBook really ended on 10.6 along with Rosetta, and currently the 2017 device runs Mojave for 32 bit applications.

                                                                                                                                                    1. 4

                                                                                                                                                      I post this and now Windows 11 is suddenly released. Sorry for jinxing us all.

                                                                                                                                                      1. 4

                                                                                                                                                        Swift, notably, only wraps when you explicitly ask for it.

                                                                                                                                                        Int.max + 1 will trap, whereas Int.max &+ 1 will wrap.
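
                                                                                                                                                        A quick illustration of the difference (the trapping line is commented out so the snippet runs):

                                                                                                                                                          let x = Int.max

                                                                                                                                                          // The ordinary operator checks for overflow and traps at runtime:
                                                                                                                                                          // let y = x + 1            // runtime error: arithmetic overflow

                                                                                                                                                          // The &-prefixed operators wrap explicitly, two's-complement style:
                                                                                                                                                          let wrapped = x &+ 1        // Int.min
                                                                                                                                                          print(wrapped == Int.min)   // true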

                                                                                                                                                        1. 2

                                                                                                                                                          Trapping is fine in a desktop application, but in any kind of OS or server use case fail-stop behaviour is far from ideal. It’s much better than introducing most kinds of security vulnerability but it’s still a denial of service vulnerability.

                                                                                                                                                          Smalltalk copied Lisp’s implementation for big integers. Small integers are stored in a machine word and are one bit smaller than the machine word. On overflow, you promote to a big integer and store the pointer in the word. On modern hardware, the best encoding for this is to make the low bit 0 if it’s a small int (so most arithmetic just requires shifting one operand and not masking the other) and 1 for pointers (because immediate addressing lets you subtract one in a load / store instruction). You can optimise sequences of operations by just collecting the carry flag and redoing if any of them overflowed. This is very efficient on every vaguely modern ISA except RISC-V.

                                                                                                                                                          The reason that low-level languages don’t like this is that it means that any arithmetic operation can cause memory allocation (and require deallocation). That’s a terrible idea in C. In C++ you could implement it as a separate type and then at least you only had to know that Integer might allocate on operations. Even then, it’s a bit annoying when you interop with other code because you have to handle the overflow case (or, rather, the myInteger > sizeof(T) case).
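
                                                                                                                                                          Not any real runtime’s code, but a minimal Swift sketch of that tagging scheme for concreteness; the names (TaggedWord, addSmall) are made up, and returning nil stands in for promotion to a heap-allocated big integer:

                                                                                                                                                            struct TaggedWord {
                                                                                                                                                                var bits: UInt                           // low bit 0: small int, low bit 1: pointer

                                                                                                                                                                init(bits: UInt) { self.bits = bits }
                                                                                                                                                                init(smallInt value: Int) { self.bits = UInt(bitPattern: value) << 1 }

                                                                                                                                                                var isSmallInt: Bool { bits & 1 == 0 }
                                                                                                                                                                var smallIntValue: Int { Int(bitPattern: bits) >> 1 }   // arithmetic shift undoes the tag
                                                                                                                                                            }

                                                                                                                                                            // Addition needs no shifting at all: the stored values are 2a and 2b, and
                                                                                                                                                            // 2a + 2b == 2(a + b) with the tag bit still 0. Overflow of the shifted
                                                                                                                                                            // representation is exactly the condition for promoting to a big integer.
                                                                                                                                                            func addSmall(_ x: TaggedWord, _ y: TaggedWord) -> TaggedWord? {
                                                                                                                                                                guard x.isSmallInt, y.isSmallInt else { return nil }    // would dispatch to bignum code here
                                                                                                                                                                let (sum, overflow) = Int(bitPattern: x.bits)
                                                                                                                                                                    .addingReportingOverflow(Int(bitPattern: y.bits))
                                                                                                                                                                return overflow ? nil : TaggedWord(bits: UInt(bitPattern: sum))
                                                                                                                                                            }

                                                                                                                                                            // addSmall(TaggedWord(smallInt: 2), TaggedWord(smallInt: 3))?.smallIntValue == 5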

                                                                                                                                                          1. 6

                                                                                                                                                            There are also methods like addingReportingOverflow(_:), which return both the (wrapped) result and a Boolean indicating whether overflow occurred, etc.

                                                                                                                                                            I agree that arbitrarily big integers are usually better, and definitely in non-low level languages, and it’s strange that it isn’t the default in those languages.
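
                                                                                                                                                            For reference, the actual spelling is an instance method that returns a labelled tuple of the wrapped value and the overflow flag:

                                                                                                                                                              let (sum, didOverflow) = Int.max.addingReportingOverflow(1)
                                                                                                                                                              // sum == Int.min (the wrapped value), didOverflow == true

                                                                                                                                                              let (product, overflowed) = Int.max.multipliedReportingOverflow(by: 2)
                                                                                                                                                              // product == -2 (wrapped), overflowed == true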

                                                                                                                                                            1. 4

                                                                                                                                                              Considering how common the carry bit is in a lot of architectures, it’s surprising a lot of languages never exposed it or an idiomatic wrapper for it.

                                                                                                                                                              1. 3

                                                                                                                                                                Generally, I believe, it’s for one of two reasons:

                                                                                                                                                                • Low-level languages don’t want to limit portability by exposing it (RISC-V, for example, doesn’t have any equivalent of the carry flag and so you need a fairly costly sequence. MIPS doesn’t either, though MIPSr6 added an instruction that just calculated the carry flag).
                                                                                                                                                                • High-level languages don’t want to expose low-level details of the machine to programmers because abstracting away low-level details is one of the main goals of a high-level language.

                                                                                                                                                                I believe Pony and Rust both have support in the standard integer types for this and some C compilers expose it as an intrinsic. This is generally common on new languages using LLVM as the back end, because LLVM IR has overflow-checked intrinsics and so it’s trivial for any language using LLVM to expose it and let LLVM worry about how to codegen it for any given target.

                                                                                                                                                            2. 1

                                                                                                                                                              most arithmetic just requires shifting one operand and not masking the other

                                                                                                                                                            More likely, you shift the result. This means you do catch some extraneous overflow, but it means you can reuse the inputs.
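
                                                                                                                                                            Continuing the hypothetical TaggedWord sketch from further up the thread, a shift-the-result multiply would look roughly like this: the stored operands are 2a and 2b, the raw product is 4ab, and one arithmetic shift right restores the tagged 2(a*b). The intermediate 4ab can overflow even when 2(a*b) would still fit, which is the extraneous overflow mentioned, but both inputs stay usable.

                                                                                                                                                              func multiplySmall(_ x: TaggedWord, _ y: TaggedWord) -> TaggedWord? {
                                                                                                                                                                  guard x.isSmallInt, y.isSmallInt else { return nil }
                                                                                                                                                                  // (2a) * (2b) == 4ab: shift the *result* right once to recover the tagged 2(a*b).
                                                                                                                                                                  let (raw, overflow) = Int(bitPattern: x.bits)
                                                                                                                                                                      .multipliedReportingOverflow(by: Int(bitPattern: y.bits))
                                                                                                                                                                  return overflow ? nil : TaggedWord(bits: UInt(bitPattern: raw >> 1))
                                                                                                                                                              }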

                                                                                                                                                            3. 1