1. 6

    I’m grateful to still be running the BSD dream. I absolutely love the simplicity of plain-text config files.

    1.  

      I don’t see how the config files are related? It just seems like all the daemons that handle DNS lookups suck, except for systemd-resolved, which sucks the least and is closest to feature parity with either the Windows or Mac resolvers.

      1. 5

        The article mentions how /etc/resolv.conf comes from BSDlandia. Some of us still run a BSD derivative, even on our laptops. :)

        So all I have to do is edit /etc/resolv.conf and be done with it.

        1. 15

          I feel like you completely missed the point of the article. The thing we wrote about was when software on the computer (like Tailscale) has alternate opinions on what the DNS config should be.

          1.  

            Gotcha. Well, there’s only two applications I permit to modify /etc/resolv.conf: dhclient and IPv6 SLAAC. I don’t permit modification of /etc/resolv.conf from any other application, even OpenVPN. If I wanted to disallow modifications to resolv.conf by dhclient or SLAAC, it’s not much more than editing one or two more config files.

            There was a time when I didn’t even allow those two to modify resolv.conf. I used to run unbound directly on my laptop. But now that my home network blocks outbound DNS except through my actual DNS server, I don’t do that anymore.

            Either way, it’s really not nearly as complicated as Tailscale’s image shows for Linux.

            1. 8

              I feel like there’s just a lot of useful stuff happening behind the scenes which those of us who come from too Unix-y a background can’t quite grasp at times.

              Because, I mean, I’ve read a bunch of articles about what systemd-resolved does and why it’s useful and necessary and whatever and as far as I can recall those arguments made sense…

              …but I admit the whole thing is utterly incomprehensible to me at this point. When I see # Generated by resolvconf, and especially # This is /run/systemd/resolve/stub-resolv.conf managed by man:systemd-resolved(8)., I just know that what’s going to follow is four hours of trying to fix the network today, usually followed by another half hour or so after the next reboot, and that I’ll eventually get it working with no idea how.

              Oh, how the wheel has turned: I am now one of those lusers whom, twenty years ago, I was flaming because they couldn’t get their Linux boxes to speak PPP. Linux is user-friendly, it’s just picky about its users, indeed :-).

              1.  

                The traditional UNIX use-case is that a machine sits in a rack or on or under a desk, and the network doesn’t change until something rare happens, on the order of months or years. Obviously you have two DNS caching/resolving/authoritative servers, both of them go in your /etc/resolv.conf, and all is well. If you need private domain name space, you just make sure your DNS servers take care of it.

                The house network case is similar, except that maybe you don’t run your own DNS server.

                But then you get into laptops. If you don’t use a VPN, then you expect dhclient to get you an address and a default route and a couple of DNS servers; this will overwrite your resolv.conf but you don’t care much.

                Finally, the complex case: you have a laptop that’s moving between networks and also using one or more VPNs to either be at home (route everything via VPN) or be partially at home (VPN adds routes for internal IP space). This is the one where everything falls down, because what we really want has to be determined by the responsible human, not by automation.

                In my ideal world, we have a resolv.conf.d/* structure with a couple of new features: include and scope.

                /etc/resolv.conf itself contains a single line:

                include /etc/resolv.conf.d/*.on

                and then in /etc/resolv.conf.d, files with each of those names that specify nameservers and search domains and a new feature, scope:

                dhcp.on                 scope .             nameserver …
                dhcp6.on                scope .             nameserver …
                slaac.off               scope .             nameserver …
                wireguard-house.on      scope .local        nameserver …
                tailscale-access.off    scope .company.tld  nameserver …
                tailscale-override.on   scope override      nameserver …

                If we did this, then:

                • the base case remains the same

                • various daemons and services would control their own subentries in /etc/resolv.conf.d, presumably with conflict limited to misconfiguration. Only files named *.on are active; a person or daemon who wants the config to stick around but not be used can change the name to *.off (or anything else).

                • scope means that it’s much easier to add in DNS for private systems, including multiple private systems. Scopes are consulted from the most specific match to the least specific (.), except for override, which demands to be the only stanza in use. There is no sensible situation for multiple override stanzas, so the most recent timestamp wins and an error gets sent to the log. Multiple ‘scope .’ stanzas do make sense, so they get consulted in most-recent-timestamp order, falling through to the next as necessary.
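
                To make the lookup order concrete, here’s a minimal C sketch of the selection rule I have in mind (all names and types are hypothetical, since none of this exists anywhere yet):

                /* Hypothetical sketch of the stanza-selection rule described above. */
                #include <stdio.h>
                #include <string.h>
                #include <time.h>

                struct stanza {
                    const char *scope;      /* ".", ".local", ".company.tld", or "override" */
                    const char *nameserver; /* elided */
                    time_t      mtime;      /* file timestamp; newest wins on ties */
                };

                /* Does the query name fall under this scope? "." matches everything. */
                static int scope_matches(const char *scope, const char *qname) {
                    size_t sl = strlen(scope), ql = strlen(qname);
                    if (strcmp(scope, ".") == 0) return 1;
                    return ql >= sl && strcmp(qname + ql - sl, scope) == 0;
                }

                /* Pick the stanza consulted first for a query name. */
                const struct stanza *pick(const struct stanza *s, size_t n, const char *qname) {
                    const struct stanza *ovr = NULL, *best = NULL;
                    for (size_t i = 0; i < n; i++) {
                        if (strcmp(s[i].scope, "override") == 0) {
                            /* override demands to be alone: newest wins, extras get logged */
                            if (ovr) fprintf(stderr, "error: multiple override stanzas\n");
                            if (!ovr || s[i].mtime > ovr->mtime) ovr = &s[i];
                        } else if (scope_matches(s[i].scope, qname)) {
                            /* most specific scope wins; equal scopes ranked by newest timestamp */
                            if (!best || strlen(s[i].scope) > strlen(best->scope) ||
                                (strlen(s[i].scope) == strlen(best->scope) && s[i].mtime > best->mtime))
                                best = &s[i];
                        }
                    }
                    return ovr ? ovr : best;
                }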

                This also makes it convenient to bootstrap from a generic server to DNS-over-HTTPS or DNS-over-TLS or other schemes that might arise in future.

                If you spot obvious flaws other than “this requires getting changes into glibc”, please let me know.

          2.  

            most people don’t want to manage their DNS servers by hand, that’s what DHCP is for

            1.  

              Driving Home Cat’s Podiatrist?

              1. 7

                Nope, it stands for Device Hopefully Can Ping!

      1. 6

        I strongly recommend the Clang Undefined Behavior Sanitizer — it adds runtime checks to your code that detect and flag (among many other things) signed integer overflow. I always enable this in my dev/debug builds.
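
        If anyone wants to try it, here’s a minimal sketch of how I turn it on (stock Clang flags; -fno-sanitize-recover=all makes the first finding fatal):

        /* overflow.c: build with
         *   clang -fsanitize=undefined -fno-sanitize-recover=all -g overflow.c
         * Running ./a.out prints a runtime error pointing at the offending line. */
        #include <limits.h>
        #include <stdio.h>

        int main(void) {
            int x = INT_MAX;
            x = x + 1;   /* signed overflow: undefined behaviour, flagged by UBSan */
            printf("%d\n", x);
            return 0;
        }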

        Some of the weirder rules only apply to long-obsolete hardware. I think you have to go back to the 1960s to find CPUs with 36- or 18-bit words, or that don’t use two’s-complement arithmetic, and to the 1970s to find ones whose memory isn’t byte-addressable (i.e. “char” is bigger than 1 byte.) And EBCDIC never made it out of IBM mainframes.

        But I guess if you ignore those factors you’ll get outraged bug reports from the retro-computing folks complaining that your code breaks on OS/360, TENEX or TOPS-20…

        1. 6

          Some of the weirder rules only apply to long-obsolete hardware. I think you have to go back to the 1960s to find CPUs with 36- or 18-bit words, or that don’t use two’s-complement arithmetic, and to the 1970s to find ones whose memory isn’t byte-addressable (i.e. “char” is bigger than 1 byte.) And EBCDIC never made it out of IBM mainframes.

          You need to look no further than the SHARC, still one of the most popular DSP lines at the moment, to find an architecture where sizeof(char) = 4.

          (Edit: FWIW, I think a better approach to these would be to make the standard stricter, even if that means more non-compliant implementations for “special” architectures like the SHARC. I’m pretty sure AD’s C compiler isn’t (or wasn’t; thankfully for my mental sanity I haven’t touched it in like 5 years) standards-compliant anyway, because sizeof(int) is also 4. We’re at a stage where you expect inconsistencies and bugs in vendor-supplied compilers anyway. There’s no need for everyone else to suffer just so that a bunch of vendors with really particular requirements can claim compliance, when most of them don’t care about it anyway.)

          1. 5

            You need to look no further than the SHARC, still one of the most popular DSP lines at the moment, to find an architecture where sizeof(char) = 4.

            Indeed. It’s the little, low-level, specialized devices where all the weirdness shows up. Basically, if it’s not a 32-bit device there is probably something that violates your expectations. Don’t assume the world is made up of the computers you’re used to using.

            1. 4

              Ok, mind blown! I did not know that. But I can see how a DSP platform wouldn’t see byte-addressability as necessary.

              1. 1

                Wait, isn’t sizeof(char) = 1 by definition? I suspect that what you meant to say is that for the C implementation that runs on Analog Devices’ SHARC DSP, char is 32 bits wide, int is also 32 bits wide, and actually sizeof(int) = 1.
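
                A quick way to check on any given toolchain (plain C; the SHARC values suggested in the comments are an assumption on my part, I don’t have that compiler either):

                /* sizeof is measured in units of char, so sizeof(char) is 1 by definition;
                 * what actually varies between platforms is CHAR_BIT, the bits per char. */
                #include <limits.h>
                #include <stdio.h>

                int main(void) {
                    printf("CHAR_BIT     = %d\n", CHAR_BIT);      /* 8 on mainstream CPUs */
                    printf("sizeof(char) = %zu\n", sizeof(char)); /* always 1 */
                    printf("sizeof(int)  = %zu\n", sizeof(int));  /* 4 here; 1 on a word-addressed DSP */
                    return 0;
                }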

                1. 1

                  You may be right, I don’t have the compiler at hand anymore. I’m sure (it caused me a lot of headaches when porting some code from elsewhere) that sizeof(char) and sizeof(int) were equal but I really don’t remember which of these two puzzling results it yielded.

              2. 3

                And EBCDIC never made it out of IBM mainframes.

                This is true, but there’s still a lot of code running (and being maintained!) on mainframes, so it can’t be ignored.

                1. 1

                  Didn’t ICL mainframes also use EBCDIC?

              1. 20

                Here’s what I’m doing to adjust to the new era of dystopian surveillance capitalism:

                • Replaced my old MacBook Air with a Thinkpad T14 running Linux (currently Fedora, which has less spyware and advertising than Ubuntu)
                • Firefox + UBlock Origin is my primary web browser. Configured so it mostly doesn’t “phone home” to Mozilla.
                • Ungoogled Chromium (from the flatpak store at flatpak.org) is my backup browser, for web sites where Firefox has issues. Guaranteed never to phone home to Google.
                1. 6

                  I’m interested in why you installed “ungoogled chromium” from the flatpak store?

                  I personally install it from RPM Fusion. (Which you might wanna install if you want to watch any video/listen to any music on Fedora.)

                  $ sudo dnf info chromium-browser-privacy
                  Installed Packages
                  Name         : chromium-browser-privacy
                  Version      : 88.0.4324.150
                  […]
                  Source       : chromium-browser-privacy-88.0.4324.150-1.fc33.src.rpm
                  Repository   : @System
                  From repo    : rpmfusion-free-updates
                  Summary      : Chromium, sans integration with Google
                  URL          : https://github.com/Eloston/ungoogled-chromium
                  License      : BSD and LGPLv2+ and ASL 2.0 and IJG and MIT and GPLv2+ and ISC
                               : and OpenSSL and (MPLv1.1 or GPLv2 or LGPLv2)
                  Description  : chromium-browser-privacy is a distribution of ungoogled-chromium.
                  […]
                  
                  1. 3

                    No good reason, I think it was recommended as an installation method by the blog post where I read about the browser. Thanks for the information. I am still getting used to Fedora.

                    1. 2

                      What kind of sandboxing does the flatpak-ed package get you? It’s a useful point to remember – a while back (I’m not on Linux anymore, so I don’t have a more recent data point) a lot of applications from Flathub were packaged without much sandboxing at all, e.g. they still had full access to the user’s home folder.

                      1. 2

                        Fedora has an “app store” GUI called Software. It is far more user friendly than using the “dnf” command in bash, at least if you are coming from MacOS. On my laptop, since I installed it, UnGoogled Chromium shows up as an installed application in Software, together with a lot of useful information, including an indication that it is sandboxed, with the following permissions: Network, Devices, Home Folder, Legacy Display System.

                        1. 1

                          Oh, thanks! I couldn’t find an explanation of what the “friendly” names mean but assuming the most obvious mapping to Flatpak permissions (here) I think it would go something like this:

                          • Home Folder means it has unrestricted access to the home folder (which is slightly better than --filesystem=host but, as XKCD famously put it, not that good…)
                          • Devices means it has unrestricted access to things like webcams
                          • I’ve no idea what Legacy Display System maps to – presumably either --socket=x11 or --socket=fallback-x11?
                          • Network is obvious, I guess :-)

                          This is actually a little better than I expected, I think?

                        2. 1

                          This page is a little clickbait-y but still somewhat true: https://flatkill.org/2020/

                          Long story short: yes, isolation is still an issue with Flatpak.

                    2. 4

                      Can you clarify the first point of replacing MacBook and its impact on privacy as you see it?

                      1. 31

                        MacOS has telemetry that cannot be disabled. You cannot modify the System folder. Apple wants to be an intermediary in everything you do, they want to see all your data. You are encouraged to store your data on the Apple cloud, which is not end-to-end encrypted, so that they can hand your data over to the government without your knowledge(*). You are encouraged to download apps from Apple’s app store, and even if you don’t, MacOS phones home about apps not installed from the store. I don’t want to use these services, but the UI has built in advertising for these unwanted services that I can’t disable.

                        (*) https://www.theverge.com/2020/1/21/21075033/apple-icloud-end-to-end-encryption-scrapped-fbi-reuters-report

                        Apple has been very successful at branding themselves as pro privacy. A lot of people believe their bullshit. Here’s an experiment that you can try. Go to an apple store and buy something using cash (so that Apple doesn’t know your identity). When they ask for your email address, refuse to give it to them. See how that goes for you. My experience is that they try to inflict as much pain as possible, but with negotiations, it is possible to leave the store with your merchandise and a receipt. But it is not easy. I try to use cash for everything (although I’ve made exceptions during the pandemic), and the apple store has by far the worst experience.

                        We live in an age of anxiety, where there is an ever increasing number of things that you are supposed to be anxious about. The pandemic, of course, but now that we are getting vaccinated, instead of that being a reason to be less anxious, you are now supposed to be anxious about getting and protecting your vaccine passport, without which you will be denied access to services. And of course we are supposed to be anxious about surveillance capitalism. This all sucks. I want to minimize the number of things in my life that generate anxiety: deal with the problem once, then stop thinking about it. The rational thing is to get rid of all my computers and phones, and unplug from the internet. I’m not ready for that yet, so I’m replacing my gear with new gear that doesn’t surveil me. Hopefully that will allow me to stop thinking about those particular issues.

                        1. 12

                          Great answer, especially this part resonates with me:

                          I want to minimize the number of things in my life that generate anxiety

                          1. 15

                            I recently got sent a mac by my employer for compliance reasons, and the process of setting it up was quite a trip. I felt like I spent twenty minutes answering “no” to various forms of “OK but can we collect this piece of personal information? How about if we phrase it slightly differently?” before I could even use the machine at all.

                            In the end they refused to take no for an answer re: my mobile phone number, and after an experience like that I don’t actually have much confidence that they take my consent very seriously for the other pieces of information that I did not agree to.

                            Luckily, in my case the compliance concerns can be addressed by simply doing my development inside a VirtualBox VM running on that machine, over SSH.

                          2. 8

                            You are encouraged to store your data on the Apple cloud[…] You are encouraged to download apps from Apple’s app store, […] Apple has been very successful at branding themselves as pro privacy. A lot of people believe their bullshit.

                            Also, you are encouraged to buy into the non-Mac hardware ecosystem (iPhone, Watch, etc.) with their own app store “soft” lock-in (using Things/OmniFocus on Mac? Why not buy the iPhone version!?).

                            Technically, one can use a Mac and avoid the rest of Apple’s ecosystem (by running Chrome, Thunderbird, open source apps, etc.) - but most people will eventually get sucked into Apple’s marketing vortex. I know because I did; which is why I avoid touching anything Apple with a ten foot pole.

                            1. 7

                              This is every business’ strategy. One man’s lock in is another man’s products that work together well.

                              1. 2

                                It only sounds like purchase realization when you’ve locked yourself into that ecosystem.

                                1. 1

                                  realization

                                  Can’t edit anymore, but that was meant to be rationalization.

                            2. 13

                              if you don’t like the telemetry done by MacOS, that’s totally fine, but there is no need for hyperbole, like “they try to inflict as much pain as possible”. them knowing your email address is better for their business. of course, it is worse for your privacy. but it’s just a business decision that you can dislike, not them trying to inflict pain on you like some james bond villain with a lake full of sharks :-)

                              also, in general, you will have to trust the company that makes your operating system. not because they are trustworthy, but because if they were evil, they could just read everything you do on your computer and you would never know. so simply pick one that you can trust the most. (and it applies to linux distros too. i don’t think anyone is reading and understanding every fedora patch).

                              1. 13

                                not them trying to inflict you pain like some james bond villain with a lake with sharks

                                It’s a figure of speech

                                you will have to trust the company that makes your operating system

                                A company doesn’t make my operating system, but even if one did it’s open source, which MacOS is not

                                1. 1

                                  Shell and coca cola are exemplars of making the world a better place.

                                  Mind explaining? Was this irony?

                                  1. 1

                                    I think you replied to the wrong comment.

                                2. 1

                                  james bond villain

                                  I think this reasoning is problematic and completely ignores wolves in sheep’s clothing. How many James Bond villains have ever really existed? We agree that sharks exist, but what about the following:

                                  1. The Nigerian prince scammers don’t really say they want your money for personal benefit, but dress up the message in the language of victimhood.
                                  2. Sexual predators feign weakness, especially if they are older men before making the victim unconscious.
                                  3. Pedophiles work in charities or armed forces but present themselves as pillars of community.
                                  4. Religious people commit evil on completely innocent people but dress it up in the language of love, justice and purity. You don’t think of nuns who steal babies as human traffickers.
                                  5. Communists preach egalitarianism but practice slavery under the guise of enemies of egalitarianism.
                                  6. Pharma companies preach healing but sell addictions.
                                  7. Under the guise of freedom of speech, pornographers exploit people from towns.
                                  8. Shell and coca cola are exemplars of making the world a better place.

                                  The list goes on and on. Almost every idea which seems innocent enough is abused by wolves in sheep’s clothing and not james bond antagonists. Maybe there is no such thing as sheep and we are all wolves. Heck even the open source contributors are abused under the guise of openness and community, while the parent company seeks funding.

                                  Social media companies, including Google, claim they are making the world a better and connected place while allowing sexualisation of pre-teens and enabling predators on their platforms. They are selling private user data, allowing non-state actors to influence elections, letting unverified stories run amok, abusing copyright protections and running behavioral experiments on users. How difficult is it to enable age verification? You can always store sha(government-id) or use credit cards to verify age.

                                  We merely have to ask the question: are Google and Apple wolves in sheep’s clothing? The answer is obviously yes. Apple is a tobacco company. In what ways can they be stopped? I don’t think limited liability is the answer.

                                  1. 3

                                    It’d probably be a good idea to strip out some of the more, um, controversial items from your comment to avoid a hellthread here litigating offtopic matters.

                                3. 7

                                  We live in an age of anxiety, where there is an ever increasing number of things that you are supposed to be anxious about.

                                  No offense, and I honestly mean that, but it feels as though you’ve got a little more anxiety going on than most of us. One valid way to deal with anxiety is to accept that some things are just facts of life in the modern world. For example, I use an ad-blocker, I don’t use Chrome, and I choose devices and services that are at least reasonably secure, but I gave up trying to control every piece of data I own because the attempt was causing me much more anxiety than just going with the (admittedly unfortunate) flow.

                                  Just a thought.

                                  1. 4

                                    “Don’t worry, be happy” is not a serious answer to anxiety. If you decide to surrender that’s your choice, but that doesn’t mean people preferring to fight a managed retreat and prevent a total rout are wrong to do so. At a minimum they will preserve their freedom longer than you and possibly even retake ground that you have ceded.

                                    https://www.history.com/news/7-brilliant-military-retreats

                              2. 2

                                How does the T14 compare to other ThinkPads you have used (e.g. the X1 Carbon)?

                                1. 9

                                  I chose the T14 AMD with the Ryzen 4750 (8 cores, decent GPU) because I’m doing open source development and 3D graphics (not gaming), and I wanted this much power. Thicker than my old MacBook, but the same mass. Easy to disassemble, lots of upgradeable components. The T14s is too thin; its cooling system is inadequate for the 4750 CPU (according to NotebookCheck): it runs too hot and throttles. Ryzen uses more energy, but performance is comparable to an Apple M1 (faster on some benchmarks, slower on others). Fan noise hasn’t bothered me.

                                  According to reviews, the T14 has a better keyboard than the X1 Carbon. The X1 Carbon has a better trackpad, but that trackpad can be ordered and installed in a T14 (many people on Reddit have done this). The X1 is limited to 10th-gen Intel + UHD graphics, too slow for my requirements. It maxes out at 16GB of soldered RAM (not upgradeable), too small for my future requirements. It’s probably too thin to support the Ryzen 4750 with adequate cooling. The display options are better than the T14 AMD’s; that’s my one regret.

                                  1. 3

                                    I replaced my MacBook Air M1 with a T14 AMD a few months ago and like it very much as well!

                                    Fan noise hasn’t bothered me.

                                    Me neither. The fan is not very loud, definitely much quieter than Intel MacBooks.

                                    lots of upgradeable components

                                    Love this aspect as well. I added an additional 16GB RAM (for 32GB RAM) and replaced the 512GB NVMe SSD with a 1TB NVMe SSD. There is still room for one more upgrade, since the WWAN slot can be used for some SSDs.

                                    The display options are better than the T14 AMD, that’s my one regret.

                                    Especially in Linux. On Windows the screen is quite acceptable with 150% scaling. Unfortunately, when enabling fractional scaling in GNOME, most X11 applications break (blurry upscaling).

                                    1. 1

                                      Unfortunately, when enabling fractional scaling in GNOME, most X11 applications break (blurry upscaling).

                                      I remember this problem with the X1 Gen3, which couldn’t scale 2x properly, so I could choose between things looking way too tiny or things looking way too large (and very little screen real estate). The 4K screen in the T14s is much better in that regard.

                                      But really the problem is that GTK+ 3 (at least) doesn’t support fractional scaling so things are just a complete mess.

                                      1. 1

                                        But really the problem is that GTK+ 3 (at least) doesn’t support fractional scaling so things are just a complete mess.

                                        For me on Wayland, GTK 3 applications work fine. AFAIK, they are rendered at a larger integer scale and then Mutter (?) downscales to whatever fractional scaling you use. This is pretty much the same approach as macOS uses.

                                        It’s XWayland where it goes wrong, though I think that was with an external screen hooked up, since XWayland does not support mixed DPI.

                                    2. 2

                                      The AMD variant is near perfect - but there is one downside for anyone, like me, who owns a Thunderbolt device (e.g. the LG UltraFine 5K; I cannot go back to non-Retina monitors having used this): it has no support for TB3, even with a dock.

                                      1. 3

                                        It sucks if you already have a Thunderbolt display, but it does drive 5k@60Hz over USB-C with DP-Alt (according to PSRef).

                                        1. 1

                                          Is there a demonstration of this actually working with any particular 5k monitor (of which there aren’t many)?

                                      2. 1

                                        The T14s is too thin, cooling system is inadequate for the 4750 CPU

                                        I own a T14s, and I can confirm the cooling system is absolutely inadequate.

                                        1. 1

                                          The fact that the 4K screen is only available in the T14(s) with Intel is the sole reason I got the Intel T14s (which apparently does not run as crazy hot as the Intel T14). Also, oddly, the T14s can be ordered with 32 GB RAM unlike the X1, so you get a rather similar device with better specs and keyboard and a worse (non-replaceable) touchpad.

                                    1. 10

                                      The real gem is at the end of the article:

                                      I reported the issue to MSRC, but they ignored the bug report citing a need for a PoC, which I had already provided; they had also expressed disbelief towards the exploitability of this bug.

                                      Supposedly there’s a whole new Microsoft but some things never change :-D.

                                      1. 6

                                        I never tried submitting a story on lobste.rs – maybe I’m missing something from that page and this isn’t straightforward, and my mind is stuck in the era of phpBB boards. Maybe this is more of a meta thing, too; I’m not sure what to think of it.

                                        I think it would be great if we could see more stories about things people here write. Over time, via IRC or PMs, I’ve learned of a bunch of cool stuff that people here do and, anecdotally, I think less than 10% of them ended up being posted here. That’s just not right, it should be the other way ’round.

                                        On the other hand I’d love to see some substance to these things. Not an elevator pitch or some other Y Combinator bullshit. Just, you know, be friendly/nice. Even when there’s nothing specifically interesting about it, like, you had a lazy afternoon and cobbled together a program that draws the Mandelbrot set. Could you at least say hi, I wrote this little Mandelbrot hack, do you like it? Anyone else tried something like this in Rust? Has anyone tried to draw it using thislib instead of thatlib?

                                        There’s a lot of potentially cool stuff here. In my other comment here I alluded to command injection vulns specifically because mutt was riddled with them way back, and a big part of the reason why that happened is that string processing in C is a dumpster fire. So this doesn’t have to devolve into a CVE-counting exercise; there’s more to Rust than refusing to compile things that would result in a segmentation fault.
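
                                        (To make “dumpster fire” concrete, a hypothetical sketch of the kind of header-handling bug C invites; this is not real mutt code:)

                                        /* Hypothetical sketch, not real mutt code: the classic C string footgun
                                         * that memory-safe languages rule out by construction. */
                                        #include <stdio.h>
                                        #include <string.h>

                                        static void show_subject(const char *header) {
                                            char subject[64];
                                            strcpy(subject, header); /* no length check: a long Subject: smashes the stack */
                                            printf("Subject: %s\n", subject);
                                            /* the boring fix: snprintf(subject, sizeof subject, "%s", header); */
                                        }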

                                        I mean, in case you specifically want to get hung up over this being written in Rust. There are other cool things about it, too, like the overall CLI interaction model.

                                        So there’s a bunch of cool stuff to talk about here but just throwing a link over the fence doesn’t invite that kind of talk.

                                        1. 3

                                          There’s what are you doing this week if you’re interested. There’s a weekend edition, as well. And in this particular submission’s case, it was posted by the author.

                                          1. 1

                                            We have a “show” tag for that and yes I agree it’s lovely to see people using it. <3

                                          1. 36

                                            Honest question: why should I care that it’s written in Rust? I keep seeing these posts of new software where the authors highlight that it’s “written in Rust.” I’ve never before seen such an emphasis on the language rather than the features it offers.

                                            1. 33

                                              I care that it isn’t written in C/C++. Memory safety catches a lot of security bugs. And language communities have different cultures, so knowing the actual language can be a signal as well.

                                              1. 17

                                                Okay, but in that case, it would be cool if the submission at least highlighted some of the neat use cases for which the language is relevant. E.g. if the description would at least mention an example – a particular module that’s very easy to get wrong in C, but Rust is particularly suited to, the way e.g. Julia is so well-suited for writing a FEM program. Or a “this module would’ve been 600 lines of inscrutable C but look how neat it is when you have explicit lifetime management features baked in the language”.

                                                If there’s none of that, but it’s just a very good program, that’s great (even better, in fact) – but at least let’s talk about that. Is it remarkably fast, in which case can we have some benchmarks? Is it super secure, as in, has anyone tried to do even an informal review? It’s cool that it’s written in Rust, but what I’d really like to know is whether someone checked it to make sure that attempting to view an attachment called /dev/null; rm -rf ~ won’t nuke my home folder, which is a far more straightforward exploit than anything involving memory safety.
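
                                                (That attachment trick is about shelling out, not memory safety; here’s a hypothetical sketch of the vulnerable pattern, not code from this or any real client:)

                                                /* Hypothetical sketch of the shell-out bug described above. */
                                                #include <stdio.h>
                                                #include <stdlib.h>

                                                static void view_attachment(const char *name) {
                                                    char cmd[1024];
                                                    /* if name is "/dev/null; rm -rf ~", the shell happily runs both */
                                                    snprintf(cmd, sizeof cmd, "xdg-open %s", name);
                                                    system(cmd);
                                                    /* safer: pass name as one argv element via fork() + execlp(),
                                                       so no shell ever parses it */
                                                }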

                                                Better – hell, best yet – if it’s none of that, and the author just wrote a cool program and wants to share it with everyone else and wants some feedback. Great, but can we at least get that? Hey, fellow lobsters, here’s a thing I made, it’s super early, it won’t be big and professional like Outlook, do you like it? Would you like to send in a patch? What do you think about X?

                                                Otherwise it’s just another program written in Rust. I get it’s cool but hundreds of programs get written in Rust every day.

                                                As far as security bugs are concerned, if being written in C would be a red flag, what colour would you say is best ascribed to the flag raised by a tool whose installation script – which you’re supposed to curl straight into bash, of course – downloads an unsigned archive and `sudo mv`’s the stuff in it into $PATH ;-)?

                                                1. 7

                                                  I believe 4 out of these 5 would’ve been unlikely if mutt and its libraries were written in Rust, for example:

                                                  https://www.cvedetails.com/vulnerability-list/vendor_id-158/product_id-274/year-2018/opov-1/Mutt-Mutt.html

                                                2. 10

                                                  Any GC language is memory-safe.

                                                  1. 3

                                                    Apart from the fact that garbage collection brings its own issues (although probably none that would affect a mail client), Rust offers much more than just memory safety.

                                                    1. 2

                                                      Which is why “written in Go” is also a popular thing, and deserves to be. People want single binaries and fast, safe programs, but for whatever reason they also want to pretend there’s no reason to care what language something is written in.

                                                    2. 2

                                                      What is the most likely security attack surface for an email client?

                                                      1. 10

                                                        Untrusted input: message body, attachments, headers; protocol implementation (TLS negotiation, authentication)? [ed: and in particular string handling and path handling]

                                                        1. 2

                                                          This is a great argument for making MUAs just deal with MH/Maildirs and leaving the server interface to existing programs (mbsync, msmtp).

                                                          Not only do you sidestep a good chunk of problems you mentioned - no worries about protocols, network, etc - you also are likely to fit into existing workflows. And it engenders trust: honestly, I’m unwilling to try software that speaks to my mail server. I risk anything from a bug inconveniencing me to something more malicious. Keep it local and I’m not as worried.

                                                        2. 2

                                                          HTML & images mostly

                                                          1. 1

                                                            If you want to support it, html display.

                                                        3. 29

                                                          I really have trouble understanding why people ask this. What’s so hard to understand about folks caring about which language a program is written in? There are literally oodles of reasons why it might be relevant. For example, if the title of the post were, “email client written in Zig,” it would actually attract my interest more than the current title. I would probably wind up spending some time reviewing the source code too. But if the title left that out, I probably would have skipped right by it.

                                                          1. 2

                                                            I think “written in [L]” makes sense if the fact that it was written in that language is interesting. If a more complex program is written in APL, it is interesting because APL is known to be difficult. If something is written in C89, it is interesting because that will probably make it very portable. If something is written in Zig, it might be interesting because a lot of people are not familiar with its strengths and weaknesses in real-world systems. If something is written in Go, it might be interesting because it provides an easy static binary that can be installed without a big fuss.

                                                            Most of the time, I’m not surprised about Rust, because why shouldn’t you be able to write a CLI tool in Rust? It has been done over and over again. If writing something in Rust has practical advantages (“… written in Rust, making it 4x faster”, “… written in Rust, avoiding 90% of all security issues”, …) then it might be interesting.

                                                            1. 13

                                                              One aspect of that is that what is “interesting” varies from person to person and from time to time. Just as an example, I know I would be more interested if the title were “written in Zig,” but I’m sure there are plenty of others that would be less interested because of it. And that actually makes the “written in Zig” part of the title useful. Because it lets people filter a bit more, even if it means it’s less interesting.

                                                              More to the point, “interest” is just one reason why “written in [L]” makes sense. It’s not the only reason. As others have mentioned, some programming languages tend to be associated with certain properties of programs. Whether that’s culture, barriers to contribution (for some definition of “barrier”), performance, UX and so on. Everyone here knows that “email client written in C” and “email client written in Rust” likely has some signal and would mean different things to different people.

                                                              I truly don’t understand why people are continually mystified by this. It’s like the most mundane thing in the world to me. Programmers are interested in programming languages and how tools are built. Who woulda thunk it.

                                                              To be clear, this doesn’t mean everyone has to be interested in the underlying technology 100% of the time either. So I’m under no illusions about that. Most of the users of my software, for example, not only don’t care what language it was written in, but probably don’t even know. I’d even bet that most of my users (via VS Code) not only don’t know what language their “find in files” is written in, but probably haven’t even heard of Rust.

                                                              But we’re on a tech forum. It self selects for nerds like us that like to talk shop. What a surprise that we would be interested in the tools used to build shit.

                                                              Apologies for the minor rant. This is just one of those things that pops up over and over on these forums. People are continually surprised that “written in [L]” matters to some people, and I guess I’m just continually surprised that they’re continually surprised. ¯\_(ツ)_/¯

                                                            2. 1

                                                              Yeah, I agree that the underlying tech can be interesting and makes sense in some cases to be in the title. We’re all hackers lobsters here, right?

                                                              I’m a little surprised you’d show so much interest in Zig. I think of you as one of the “gods of Rust”. Are you interested in a “keeping tabs on the competition” sort of way? Or is there some use case that you think Zig might shine more than Rust for? In other words: are you interested in ideas you can bring to Rust, or because you’re evaluating or interested in using Zig in its own right?

                                                              1. 4

                                                                No, I’m legitimately interested in Zig. I’ve always loved the “simplicity” of C, for example, for some definition of simplicity. (That one can cut a lot of different ways.) It’s also why I really like Go. And I think Zig is taking an interesting approach to memory safety, and I’m very interested to see how well it works in practice. I’m also quite interested to see how well comptime does and how it balances against readability and documentation in particular.

                                                                But I haven’t written a single line of Zig yet. I’m just following it with interest. I’m also a Zig sponsor if only for the amazing work that Andrew is doing with C tooling.

                                                            3. 11

                                                              Personally I care because I am trying to learn Rust and projects like this are nice to explore and figure out stuff.

                                                              1. 6

                                                                I’m not sure either. I do occasionally see the “written in Go” or “written in Crystal” or “written in pure C99”, however.

                                                                1. 4

                                                                  This was a trend 5 years ago with Python, I feel; now it’s a trend with Rust. In the case of Python, in my experience, it boiled down to “we improved the UI massively (compared to existing alternatives) and our error handling is nonexistent”, while with Rust it’s more likely to be “we’re obsessed with efficiency and everything else is secondary” ;)

                                                                  In practice, the “in X” is likely a call for contributors, not users – as a user, when I see “it’s written in X” I assume that it probably has no real upsides aside from that, as if writing it in X was the whole point.

                                                                  1. 3

                                                                    As an email client for users, it isn’t interesting at all (no disrespect to the creator). But as an expression of the possibilities of an up-and-coming language, it is useful. This post has a feel similar to a “hello world” for a new language.

                                                                    1. 3

                                                                      It makes it interesting to me because I’m interested in Rust, so I’d like to check out the source and learn something!

                                                                      1. 2

                                                                        I first thought “written in Rust” seemed boastful, then rethought and realized it’s beneficial to specify such, not just as boasting but as an example for folks wanting to learn.

                                                                        1. 2

                                                                          People still writing new software in C in TYOOL 2021 also like to brag online about their choice of language. I don’t get it.

                                                                          1. 2

                                                                            Well, one reason to care is to make sure they don’t fall victim to the RIIR question from the Rust Evangelism Strike Force. (“Have you considered rewriting it in Rust?”) (https://transitiontech.ca/random/RIIR)

                                                                            (Note: this is a joke. You probably don’t care about Rust, and nor should you, but the author does.)

                                                                          1. 13

                                                                            All those things in “Linux also hides some gems” are super cool because they reveal the other, uh, problem, and the reason why OS X sucked so much life out of the Linux desktop back in the 00s. And still does.

                                                                            6 months are enough to see these cool things. In another 6-10 years, on the other hand:

                                                                            • Tracker/search will probably be at the third or fourth incarnation. About as many apps will support its latest incarnation as today. If you multiply that by 3 or 4 it’ll be a decent number – but otherwise it’ll probably be just enough.
                                                                            • Most of the extensions being mentioned won’t work anymore
                                                                            • Gimp and Inkscape will be using GTK 4 and will be a little weird because everything else will be on GTK 5
                                                                            • Journal will not really work anymore because one of its dependencies related to font-rendering will be effectively abandonware. Most distributions will stop packaging it.
                                                                            • Nautilus will go through another major redesign. It will not be able to show icons smaller than 64x64. Finder will most certainly still be a dumpster fire but it’ll still be the same dumpster fire you remember from 2021.
                                                                            1. 7

                                                                              Finder will most certainly still be a dumpster fire but it’ll still be the same dumpster fire you remember from 2021.

                                                                              The truth hurts. Since Mac OS X just turned 20 years old, I’ve been rereading Siracusa’s old reviews. The Finder is more reliable now, but much of its behavior is just as inscrutable as in the brushed metal days.

                                                                              1. 10

                                                                                The original Finder was an odd beast for two reasons:

                                                                                First, it was a Carbon application. Back when OS X launched, Apple wasn’t sure if they could get developers to adopt Objective-C and so they had three developer environments. Carbon was a pure-C update of the classic MacOS Toolbox, layered on top of CoreFoundation (C APIs giving similar functionality to the NeXT Foundation Kit), Cocoa (a slightly updated OpenStep) and ‘Mocha’ (officially called something no one remembers), which bridged Java with Cocoa, allowing developers to write native Mac apps in Java. The Mocha stack was really impressive at the time. It used much less memory than other JVMs (it was the first to share standard library class data across instances) and it bridged things like Java arrays and strings transparently with their Cocoa counterparts. NeXT had largely given up getting developers to use Objective-C and had rewritten their flagship WebObjects product family in Java, so Apple thought that it was probably the future.

                                                                                Second, it was really two applications. There was a big internal fight going on between the Apple and NeXT folks, who both had good and self-consistent UI models that didn’t compose well. It tried to implement both the Classic MacOS spatial Finder and the NeXT File Browser UIs in the same application. It was aggressively modal as a result: you could switch between the two modes by either pressing a button or (sometimes) navigating to a different directory.

                                                                                Eventually they gave up on the spatial model. It worked really well in the old Apple System days, when people typically had a few dozen files and spatial memory helped find them. It doesn’t scale at all to thousands of files because people’s spatial memory doesn’t scale that well.

                                                                              2. 7

                                                                                Gimp and Inkscape will be using GTK 4 and will be a little weird because everything else will be on GTK 5

                                                                                When I started with Linux/BSD around 2000, everything used their own toolkit. Had 10 applications? Chances are you had at least 4 different toolkits, 4 different looks, 4 different file dialogs.

                                                                                Things have improved a lot since then especially if you’re using one of the desktop environments that come with a suite of applications (although KDE offered a lot of this in 3.x as well; I never understood why GNOME ended up getting the bigger mindshare, as KDE always seemed miles ahead to me).

                                                                                1. 2

                                                                                  Eh, things have improved in the last 20 years as in there are now only two toolkits. However, the level of integration that was possible 10-15 years ago, via things like Qt’s gtk2-style or QtCurve, is long out of reach by now. Things look less weird only insofar as KDE is shipping a Breeze theme for GTK3, with various quirks because GTK3’s CSS is a little quirky – but you still get two different file dialogs, two always slightly (at best, if you’re using Breeze) different looks, two vastly different interaction models (e.g. the infamous “single click browse totally not a bug” in GTK3 from a while back, different scrollbar behaviour etc.).

                                                                                  If you stick to a single desktop environment that’s not a problem, but you could stick to a single desktop environment back in 2003, too :-).

                                                                                  1. 1

                                                                                    If you stick to a single desktop environment that’s not a problem, but you could stick to a single desktop environment back in 2003, too :-).

                                                                                    From what I remember it was a lot less smooth though; the desktop environments now are a lot more integrated and complete than they were in the Gnome 1.x and early Gnome 2.x days.

                                                                                    you still get two different file dialogs, two always slightly (at best, if you’re using Breeze) different looks, two vastly different interaction models (e.g. the infamous “single click browse totally not a bug” in GTK3 from a while back, different scrollbar behaviour etc.).

                                                                                    Two is better than four? 🙃 But there is clearly still some work ahead; arguably things like file selectors shouldn’t even be a part of the toolkit but just an independent process, which would also solve things like this.

                                                                                2. 4

                                                                                  I think this is because we have no language that people like for doing GUI development. We see a lot of web-based applications for this reason. There are some hopes that Rust will have a good GUI story eventually, and maybe that will fix some things. You are right though, we are still working on the foundations. If you live in a terminal (which many Linux users gravitate towards) then you feel this much less.

                                                                                  1. 7

                                                                                    we have no language that people like for doing GUI development

                                                                                    This is pretty much it.

                                                                                    I’d love to write/hack/contribute to GUI applications on Linux, but I have pretty much the choice between the 1970s’ garbage language (C), the 1980s’ tire fire (C++), or various “bindings” that are usually incomplete, completely undocumented, and even more complicated to use than going with C directly.

                                                                                    that rust will have good GUI

                                                                                    Not sure Rust is a good language for this: there is a lot of complexity that one simply doesn’t care about when writing GUI applications.

                                                                                    1. 2

                                                                                      GTK’s bindings are fairly complete because they are autogenerated.

                                                                                      Python - https://pygobject.readthedocs.io/en/latest/

                                                                                      Vala - https://valadoc.org/

                                                                                      JS - https://gjs-docs.gnome.org/

                                                                                      1. 1

                                                                                        None of these languages are a large enough improvement over C/C++, considering that none of the bindings allow you to not know the underlying C/C++ artifacts anyway.

                                                                                    2. 6

                                                                                      JavaScript being so trendy is certainly responsible for the current crop of “native” (i.e. Electron) apps, but the rest of the situation is entirely of our own making (i.e. the open source community’s). If you compile a 15 year-old Win32 application that is effectively “finished”, you get that app running against today’s Win32 API, with 30 years of bugfixes and up-to-date support. If you compile XMMS from 15 years ago, or the old KDE 3 applications, you’ll get applications running against GTK 1.x or Qt 3.x.

                                                                                      Basically, the “modern” Linux desktop isn’t a 30-year-old environment, it’s just the latest, third or so, in a series of 10-year-old environments. It’s pretty much where the Windows desktop was back in 2003 or so – way better than back in 2014, for example, but pretty much where it was back in 2010.

                                                                                      1. 1

                                                                                        I have been shilling Guix and NixOS elsewhere in this thread so I’ll continue hammering on this: that’s exactly the kind of problem that reproducible builds solve.

                                                                                        1. 3

                                                                                          The big problem isn’t that it’s hard to compile XMMS because it’s hard to compile GTK 1 anymore – which is what NixOS & co. help you with – the problem is that it’s a bad idea. If you’re compiling a 20-year-old Win32 application today, you’ll be linking it against libraries that are still supported and have 30 years of bugfixes. If you’re compiling a 20-year-old GTK app, you’ll be linking it against a library that has been abandoned for 15 years and has shaky UTF-8 support. Specifically for XMMS, you probably won’t be able to get it to make a hiss, either, because none of the sound servers it supports still work (maybe the ALSA plugin would still work, not sure). NixOS & co. certainly make it easy to compile and deploy long-abandoned code, but once you’ve done that, it’s not like you can do much with it anymore.

                                                                                          The fact that Win32 is backwards-compatible to such an embarrassing degree doesn’t just mean it’s annoying to write against, it also means that 30 years’ worth of applications still get 30 years’ worth of bugfixes and improvements. GTK and KDE Frameworks being what they are, you get a few years’ worth of applications, with a few years’ worth of improvements and bugfixes.

                                                                                          1. 1

                                                                                            Note that GNUstep can still compile most applications that were written for mid ‘90s OpenStep, as well as more recent Cocoa applications (though, at this point, it’s a decade behind Cocoa in a lot of places). There are a bunch of other open source projects that have similar or better backwards compatibility guarantees. In the open source world, I’d settle for source compatibility (not needing binary compatibility) but that’s rare among the big buzzwordy projects. I suspect that this is because a lot of the commercial funding for these comes from entities that want to keep you on an upgrade treadmill because their business involves selling support and certifications.

                                                                                  1. 2

                                                                                    Most of you have probably seen this by now but I’ll leave it here for those who haven’t.

                                                                                    Also…

                                                                                    1990s   Pentium PC    WWW
                                                                                    2000s   Laptop        Web 2.0
                                                                                    2010s   Smartphones   Apps
                                                                                    2020s   Wearables     TBD
                                                                                    2030s   Embeddables   TBD

                                                                                    I’ve seen this table in 2000 and in 2010 and now again in 2020. Each time, “wearables” are touted as the next decade’s big thing. I think it’s something that we won’t be able to achieve before the year of Linux on the desktop :-).

                                                                                    Granted, people have been singing dirges for the personal computer since about that same time, too. First it was thin clients (were it not for that stupid slow-ass network!). Then it was phones and tablets (were it not for them simpletons whose work did not consist of forwarding emails and attending meetings). But, you know, if you predict things at a high enough rate, some of them are bound to come true.

                                                                                    1. 2

                                                                                      2020s: smartwatches, fitness armbands

                                                                                      They are not as dominant as the others though.

                                                                                      1. 1

                                                                                        I regularly take walks without my phone, wearing my cellular watch streaming audiobooks and podcasts to my wireless earbuds, responding to messages through the voice assistant. No “smartglasses” yet, but wearables are important today and a huge growth area.

                                                                                        Still, yeah, it doesn’t feel like anywhere near the impact of PCs or smartphones. Once glasses get here, I think it will.

                                                                                      1. 3

                                                                                        I’m a bit puzzled by this article and I might be missing something. In the given example, depending on the type of the machine (big endian/little endian) one has to use different extraction methods for the uint32 in the network order. That’s exactly the use case for ifdef, if I were to build a binary for different architectures.

                                                                                        1. 7

                                                                                          In the given example, depending on the type of the machine (big endian/little endian) one has to use different extraction methods for the uint32 in the network order.

                                                                                          Not at all – if you read the example carefully, the author is making the point that, depending on the type of the peripheral (not the host machine!) you can extract the uint32 once, straight into native format, regardless of what the native format is.

                                                                                          That is, if you need to read a uint32_t, you can either:

                                                                                          a) Read it straight into a uint32_t on the host and swap the bytes as needed depending on host and peripheral byte order, or

                                                                                          b) Read it into an array of 4 uint8_ts, at which point the only variable in this equation is the peripheral order (because the result of data[0] << 0 | data[1] << 8 | data[2] << 16 | data[3] << 24 doesn’t depend on host order); see the sketch below.
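
                                                                                          A minimal sketch of option b), with hypothetical helper names (the casts keep the shifts out of int’s sign bit):

                                                                                              #include <stdint.h>

                                                                                              /* Decode a little-endian u32 from a byte stream. The result is
                                                                                                 the same on big- and little-endian hosts, so no #ifdef or byte
                                                                                                 swapping is needed. */
                                                                                              static uint32_t get_le32(const uint8_t *p)
                                                                                              {
                                                                                                  return (uint32_t)p[0]
                                                                                                       | (uint32_t)p[1] << 8
                                                                                                       | (uint32_t)p[2] << 16
                                                                                                       | (uint32_t)p[3] << 24;
                                                                                              }

                                                                                              /* The mirror image, for peripherals that send the MSB first. */
                                                                                              static uint32_t get_be32(const uint8_t *p)
                                                                                              {
                                                                                                  return (uint32_t)p[0] << 24
                                                                                                       | (uint32_t)p[1] << 16
                                                                                                       | (uint32_t)p[2] << 8
                                                                                                       | (uint32_t)p[3];
                                                                                              }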

                                                                                          In terms of performance, things are a teeny tiny bit less black-and-white than the author makes it seem, depending on how smart the underlying compiler is and on how good the underlying architecture is at shifting bytes, unaligned access and the like.

                                                                                          But in terms of code quality my experience matches the author’s – code that takes route a) tends to end up pretty messy. This is particularly problematic if you’re working on small systems with multiple data streams, from multiple peripherals, sometimes with multiple layers of byte swapping (e.g. peripherals have their own byte order, then the bus controller at the MCU end can swap the bytes for you as well, and the one little-endian peripheral on that bus gives you 12-bit signed integers).

                                                                                          This is likely why the author hasn’t mentioned man byteorder, as @viraptor suggested. There’s no shortage of permissively-licensed byteorder-family functions for these systems if you’re not writing against a Unix-y system, but in these cases – where you get data from different peripherals, with different native byte orders, over different buses – the concept of “network” ordering is a little elusive. If you’re on a little-endian core you do ntoh conversions for big-endian peripherals, but what do you do for little-endian peripherals? Presumably, not “htoh” (note for confused onlookers: there’s no htoh ;-)), you leave the result as is, but in that case your code isn’t portable to big-endian cores. *to* functions implicitly rely on the relationship between network and host order, which works okay when the network byte order is clear and homogeneous, but – as the author of this post points out – it breaks down as soon as you deal with external byte streams of multiple endiannesses.

                                                                                          (Edit: this is a point that Rob Pike, and others from the Plan 9 team, have made over the years. I thought this was someone echoing that point but lol, turns out this is Pike’s blog?)

                                                                                          1. 1

                                                                                            If you’re on a little-endian core you do ntoh conversions for big-endian peripherals, but what do you do for little-endian peripherals?

                                                                                            In that case, use the more modern https://linux.die.net/man/3/endian

                                                                                            htobe32 / htole32 have you covered. htonl is just nicer in cases where you don’t give people a choice - network is network, don’t think about which one it is specifically.
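
                                                                                            For example, a sketch assuming glibc’s <endian.h> (the BSDs ship the same functions in <sys/endian.h>):

                                                                                                #include <endian.h>   /* glibc; <sys/endian.h> on the BSDs */
                                                                                                #include <stdint.h>
                                                                                                #include <string.h>

                                                                                                /* Wire format is little-endian here: le32toh() is a no-op on
                                                                                                   little-endian hosts and a byte swap on big-endian ones. */
                                                                                                static uint32_t read_le32(const uint8_t *buf)
                                                                                                {
                                                                                                    uint32_t v;
                                                                                                    memcpy(&v, buf, sizeof v);   /* bytes as they arrived */
                                                                                                    return le32toh(v);
                                                                                                }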

                                                                                            1. 6

                                                                                              The author’s argument is that portable code should be endianness-independent, not that it should handle endianness with syntactic sugar of the right flavour. The “modern” (meh, they’re about 20 years old at this point?) alternatives work around the ambiguous (and insufficiently diverse) typing of the original API but don’t exhibit all the desirable properties of the version that Pike proposes.

                                                                                        1. 17

                                                                                          I’m trying to find a charitable interpretation for the fact that “avoid installing security updates because this distribution tool can’t handle updating in a secure manner” has ever even been considered as a form of best practice. Charitable as in not leaning towards “web developers gonna web develop”, which I would’ve been happy with 15 years ago but I realise perfectly well that’s not the right explanation. I just can’t, for the life of me, figure out the right one.

                                                                                          Can someone who knows more about Docker and DevOps explain to this old Unix fart why “packages inside parent images can’t upgrade inside an unprivileged container” is an argument for not installing updates, as opposed to throwing Docker into the trash bin, sealing the lid, and setting it on fire?

                                                                                          1. 13

                                                                                            This is not a problem with Docker the software. Docker can install system updates and run applications as a non-privileged user. The article demonstrates how, and it’s not like it’s some secret technique, it’s just the normal, documented way.
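
                                                                                            Roughly along these lines (a sketch, not the documentation’s exact example; the base image and user name are arbitrary):

                                                                                                FROM debian:bullseye-slim
                                                                                                # Install security updates in a layer of our own, instead of
                                                                                                # trusting the base image to be fresh.
                                                                                                RUN apt-get update && apt-get -y upgrade && rm -rf /var/lib/apt/lists/*
                                                                                                # Drop root: the application itself runs as a non-privileged user.
                                                                                                RUN useradd --create-home appuser
                                                                                                USER appuser
                                                                                                CMD ["/bin/sh"]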

                                                                                            This is a problem with whoever wrote this document just… making nonsensical statements, and Docker the organization leaving the bad documentation up for years.

                                                                                            So again, Docker the software has many problems, but inability to install security updates is not one of them.

                                                                                            1. 1

                                                                                              Has that method always worked? Or is it a recent addition for unprivileged containers? I’m just curious to understand how this ended up being the Docker project’s official recommendation for so many years that it ended up in linters and OWASP lists and whatnot. I mean none of these cite some random Internet dude saying maybe don’t do that, they all cite the program’s documentation…

                                                                                              1. 5

                                                                                                When I (and the documentation in question) say “unprivileged” in this context, it means “process uid is not root”.

                                                                                                There’s also “unprivileged containers” in the sense that Docker isn’t running as root, which is indeed a new thing but is completely orthogonal to this issue.

                                                                                                1. 1

                                                                                                  Now it sounds even weirder, because the documentation literally says “unprivileged container”, but I think I got your point. Thanks!

                                                                                            2. 5

                                                                                              Well, the article did point out that you can upgrade from within Docker. The problem is that the OS running inside Docker can’t assume it has access to certain things. I only skimmed the article, but I think it mentioned an example where updating a Linux distro might cause it to try to (re)start something like systemd or some other system service that probably doesn’t work inside a Docker container.

                                                                                              However, that really doesn’t address your main point/question. Why was this ever advice? Even back in the day, when some OSes would misbehave inside Docker, the advice should have been “Don’t use that OS inside Docker”, not “Don’t install updates”.

                                                                                              I think the most charitable explanation is that developers today are expected to do everything and know about everything. I love my current role at my company, but I wear a lot of hats. I work on our mobile app, several backend services in several languages/frameworks, our web site (an ecommerce-style site, PHP + JS), and even a hardware interfacing tool that I wrote from scratch because it only came with a Windows .exe to communicate with it. I have also had to craft several Dockerfiles and become familiar with actually using/deploying Docker containers, and our CI tool/service.

                                                                                              It’s just a lot. While I always do my best to make sure everything I do is secure and robust, etc, it does mean that sometimes I end up just leaning on “best practices” because I don’t have the mental bandwidth to be an expert on everything.

                                                                                              1. 2

                                                                                                it mentioned an example where updating a Linux distro might cause it to try to (re)start something like systemd or some other system service that probably doesn’t work inside a Docker container.

                                                                                                That hasn’t been true for years, for most packages. That quote was from an obsolete article from 2014, and was only quoted in order to point out that it’s wrong.

                                                                                                1. 2

                                                                                                  I didn’t mean to imply that it was! If you read my next paragraph, it might be a little more clear that this isn’t an issue today. But I still wonder aloud why the resulting advice was ever good advice – even when this particular issue was common-ish.

                                                                                                  1. 1

                                                                                                    AFAICT the current version of best practices page in Docker docs was written in 2018 (per Wayback Machine), by which point that wouldn’t have been an issue. But maybe that’s left over from an older page at a different URL.

                                                                                              2. 5

                                                                                                I am not a Docker expert (or even user), but as I understand the OCI model you shouldn’t upgrade things from the base image because it’s a violation of separation of concerns between layers (in the sense of overlay filesystem layers). If there are security concerns in the base packages then you should update to a newer version of the image that provides those packages, not add more deltas in the layer that sits on top of it.

                                                                                                1. 2

                                                                                                  That makes a lot more sense – I thought it might be something like this, by analogy with e.g. OpenEmbedded/Yocto layers. Thanks!

                                                                                                  1. 1

                                                                                                    This doesn’t hold water and is addressed in the article.

                                                                                                    The way Docker containers work is that they’re built out of multiple, composable layers. Each layer is independent, and the standard separation of concerns is layer-based.

                                                                                                    So after pulling a base container, the next layer that makes sense is one that installs security updates for the base image. Any subsequent change to the base image will trigger a rebuild that re-installs the security updates.

                                                                                                    Base images are often updated infrequently, so relying on their security updates is just allowing security flaws to persist in your application.

                                                                                                    1. 1

                                                                                                      To me, an outsider who uses Docker for development once in a while but nothing else, a separate layer for security updates doesn’t make much sense. Why would that be treated as a separate concern? It’s not something that is conceptually or operationally independent of the previous layer, something that you could in principle run on top of any base image if you configure it right – it’s a set of changes to packages in the parent layer. Why not have “the right” packages in the parent layer in the first place, then? The fact that base images aren’t updated as often as they ought to be doesn’t make security updates any more independent of the base images that they ought to be applied to. If that’s done strictly as a “real-world optimisation”, i.e. to avoid rebuilding more images than necessary or to deal with slow-moving third parties, that’s fine, but I don’t think we should retrofit a “serious” reason for it.

                                                                                                2. 3

                                                                                                  Charitable as in not leaning towards “web developers gonna web develop”

                                                                                                  I kind of want to push back on this, because while it’s easy to find examples of “bad” developers in any field of programming, I think it’s actually interesting to point out that many other fields of programming solve this problem by… not solving it. Even for products which are internet-connected by design and thus potentially exploitable remotely if/when the right vulnerability shows up. So while web folks may not be up to your standards, I’d argue that by even being expected to try to solve this problem in the first place, we’re probably ahead of a lot of other groups.

                                                                                                  1. 1

                                                                                                    Yeah, that’s exactly why I was looking for the right explanation :-). There’s a lot of smugness going around that ascribes any bad practice in a given field to “yeah that’s just how people in that field are”, when the actual explanation is simply a problem that’s not obvious to people outside that field. Best practices guides are particularly susceptible to this because they’re often taken for granted. I apologise if I gave the wrong impression here, web folks are very much up to my standards.

                                                                                                1. 6

                                                                                                  I find it slightly amusing that you cannot implement this program in pure ANSI C, because it has no concept of folders, let alone of recursing into them, so you need a platform library like POSIX or WinAPI.
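
                                                                                                  Indeed, even a flat directory listing needs something like POSIX <dirent.h>. A sketch (note that d_type is a common extension, not guaranteed by POSIX):

                                                                                                      #include <dirent.h>   /* POSIX, not part of ISO C */
                                                                                                      #include <stdio.h>
                                                                                                      #include <string.h>

                                                                                                      /* Recursively print a directory tree. ISO C alone offers no
                                                                                                         way to enumerate a directory at all. */
                                                                                                      static void walk(const char *path)
                                                                                                      {
                                                                                                          DIR *d = opendir(path);
                                                                                                          if (d == NULL)
                                                                                                              return;
                                                                                                          struct dirent *e;
                                                                                                          while ((e = readdir(d)) != NULL) {
                                                                                                              if (strcmp(e->d_name, ".") == 0 || strcmp(e->d_name, "..") == 0)
                                                                                                                  continue;
                                                                                                              char child[4096];
                                                                                                              snprintf(child, sizeof child, "%s/%s", path, e->d_name);
                                                                                                              puts(child);
                                                                                                              if (e->d_type == DT_DIR)   /* extension; portable code would stat() */
                                                                                                                  walk(child);
                                                                                                          }
                                                                                                          closedir(d);
                                                                                                      }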

                                                                                                  1. 6

                                                                                                    That’s either really bad or really good, depending on how you look at it…

                                                                                                    On the one hand, it means you have to bring your own abstraction, which sucks.

                                                                                                    On the other hand, it means you don’t have to bolt new platforms on top of existing abstractions, which also sucks. For example, Common Lisp had an extraordinarily powerful and flexible pathname system that consistently blew anything from the ’90s and ’00s out of the water – which, of course, also meant that there was a great deal of impedance mismatch between that and whatever filesystem abstraction the underlying operating system had. Thinking in terms of the CL abstraction layer was great, but ultimately difficult, because application users thought in terms of their platforms, not in terms of whatever the standards committee had in mind back in the eighties. Also, an embarrassing amount of CL code ended up calling native file manipulation functions via a FFI because reliably mapping each platform’s abstractions to CL’s was a somewhat unpleasant exercise.

                                                                                                    I suspect C’s longevity is partly due to the fact that it’s small enough that it did not pose significant obstacles to getting it to run on platforms developed long after the PDP-11, with (or without) all sorts of peculiar extensions. I’m not saying it’s a good thing, it’s just a thing.

                                                                                                  1. 7

                                                                                                    The main points are stability, portability, and obsolescence, and how they are a struggle.

                                                                                                    But then the author moves to the latest macOS? Where is the stability? Apple is famous for breaking compatibility and biting the bullet whenever they can push a new proprietary API to ensnare devs (Metal?). Where is the portability? (Apple only cares about the hardware they sell, of course.) And where is the (lack of) planned obsolescence? This is the whole long-term strategy of Apple: tech as fashion and short hardware update cycles.

                                                                                                    So this is why the author leaves the Linux desktop? He could run a recent-ish notebook, with ARM or x86 cores, and Linux would be perfectly fine. None of those issues would be valid then.

                                                                                                    This is a weird take.

                                                                                                    1. 12

                                                                                                      But then the author moves to the latest MacOS? Where is the stability?

                                                                                                      On the user side of things :). A few months ago I got one of them fancy M1 MBPs, too, after not having used a Mac since back when OS X was on Tiger. Everything that I used back in 2007 still worked without major gripes or bugs. With a few exceptions (e.g. mutt), the only Linux programs that I used back in 2005 that still worked fine were the ones that were effectively abandoned at some point during this period.

                                                                                                      Finder, for example, is still more or less of a dumpster fire with various quirks but they’re the same quirks. Nautilus, uh, I mean, Files, and Konque… uh, Dolphin, have a new set of quirks every six months. At some point you eventually want to get off the designer hobby project train.

                                                                                                      In this sense, a lot of Linux software isn’t really being developed, as in, it doesn’t acquire new capabilities. It doesn’t solve new problems, it just solves the old problems again (supposedly in a more “usable” way, yeah right). It’s cool, I don’t want to shit on someone’s hobby project, but let’s not hold that against the people who don’t want to partake.

                                                                                                      (Edit: to be clear, Big Sur’s design is hot garbage and macOS is all kinds of annoying and I generally hate it, but I wouldn’t go back to dealing with Gnome and GTK and Wayland and D-Bus and all that stuff for the life of me, I’ve wasted enough time fiddling with all that.)

                                                                                                      1. 10

                                                                                                        At some point you eventually want to get off the designer hobby project train.

                                                                                                        THIS, so much.

                                                                                                        1. 1

                                                                                                          Well, just step off then?

                                                                                                          Unlike Apple, you have some options with open source. Don’t like the latest Gnome craze? Get MATE, which is basically Gnome 2. There are lots of people who keep old window managers and desktop environments alive and working. The Ubuntu download page lists a couple, but many more can be installed with a few commands.

                                                                                                          I think I have been running the same setup for six or seven years now, no problem at all.

                                                                                                          1. 3

                                                                                                            If you compare it with an actual Gnome 2 box, you’ll find that Mate is pretty different even if the default screen looks about the same. Not because of Mate but because of GTK3’s general craziness. Sure, the panels look about the same, but as soon as you open an application you hit the same huge widgets, the same dysfunctional open file dialog and so on. It’s “basically the same” in screenshots but once you start clicking around it feels pretty different.

                                                                                                            If all you want is a bunch of xterms and a browser, you got a lot of options, but a bunch of xterms and a browser is what I used back in 2001, too, and they were already obsolete back then. The world of computing has long moved on. A bunch of xterms and a browser is what many, if not most, experienced Linux users still use simply because it’s either that or the perpetual usability circlejerk of the Linux desktop. I enjoy the smug feeling of green text on black background as much as anyone but at some point I kinda wanted to stop living in the computing world of the early 00s.

                                                                                                            I’ve used the same WindowMaker-based setup for more than 10 years, until 2014 or so, I think. After that I could technically keep using it, but it was mostly an exercise in avoiding things. I don’t find that either fun or productive. I kept at it for 6+ years (basically until last year) but I hated it.

                                                                                                            (Edit: imho, the options are really still the same that they were 15 years ago: Gnome apps, KDE apps, or console apps and an assortment of xthis and xthat from the early/mid-90s – which lately mostly boils down to “apps built for phones” and “apps built for the computers of the Hackers age”. Whether you run them under Gnome, KDE, or whatever everyone’s favourite TWM replacement is this year doesn’t make much of a difference. Lots of options, but not much of a choice.)

                                                                                                    1. 12

                                                                                                      Oh God, I haven’t finished so many things that I’ve long lost count of them. I have a long series of projects in various states of unfinished. I don’t really regret it, most of them are unfinished because there was something I really wanted to do, and I did it, and the rest of the project was just an excuse to do that particular thing. Others, especially those that I did in a professional context, were cut short by budget and/or time constraints. But it’s fun to reminisce about them. In no particular order, some of the things I started and never finished in the last 15 years or so include:

                                                                                                      • Single-board computers, based on various processors (6809, Z80, 8086, we-don’t-need-no-stinking-microprocessor-I’ll-just-wire-my-own – the Z80-based one is the one that sorta made it to the breadboard stage). But I did build/play with various parts that I didn’t understand well enough – like clock generators or the most expensive programmable interrupt controller in history, an 8259-ish clone that “ran” on a Zynq-7000 development board because that’s what I had lying around at work. Honestly, the biggest reason why none of these got finished is that I didn’t really want to build a whole computer, I just wanted something with front-panel switches. I have some killer front panel designs, I just don’t have a computer to plug them into :-D.
                                                                                                      • Sort of in the same vein, an emulator. I must have started dozens of them but never finished one. It’s one of those goals I never accomplished but one day it’s gonna happen.
                                                                                                      • A debugger/monitor for small (e.g. MSP430) systems (context: I was working on an operating system for that kind of device at the time – that one was actually finished, put in boxes and sold and all – and I wanted to test/prototype various pieces of peripheral code/drivers without the whole OS behind me, but I also wanted to be able to poke at things in memory in an interactive manner and so on, and debugger support at the time was really bad on some of the platforms we needed, including the MSP430). It sort of happened but I never used it enough to polish the rough edges. It was actually useful and interesting – at the expense of a little flash space, you got an interactive debugger of sorts over a serial port that allowed you to “load” (eh), run, and edit small programs off of a primitive filesystem. Realistically, it was mostly a waste of time: this wasn’t a microcomputer, it “ran” on MCUs inside various gadgets. The time it took to “port” it to a new one vs. what you got in return just wasn’t worth it.
                                                                                                      • A SDR-based radiotelescope. I had a pair of Ettus Research SDR boxes more or less all to myself for a few months and I could play with them more or less at will as long as I didn’t break them, but the company I was at went under before I got to try anything (my knowledge of antennae was, uh, I’d say rudimentary but that would probably be overselling it). I did get to write some antenna positioning code that I later integrated into some real-life firmware at $work so it wasn’t all wasted.
                                                                                                      • A Star Trek meets Rogue, uh, I’d say rogue-like? Unfortunately implementing all the cool things (random story generators! random races with political intrigue and all! Gandalf-like figures roaming the galaxy!) was way more fun than implementing the actual game so I ended up with 40,000 lines of Java that spit galaxy news in a log file and nothing else. I learned a lot about sparse matrices though – that was actually the whole reason why I wanted to get into it in the first place (tl;dr I wanted to model something that supported tens of thousands of star systems with millions of ships and so on) – and of all the projects in this list, it’s the one that would’ve probably been easier to make into something cool. I tried to restart it at some point, then I learned about Dwarf Fortress and I honestly couldn’t see the point anymore :-).
                                                                                                      • A software synthesiser that tried to use some pretty advanced physical models to generate wind instrument sounds. Unfortunately I got so bogged down into the modelling side of things that by the time I had some basic prototypes, integrating them into a program worth using wasn’t really fun anymore, and I also couldn’t (still can’t…) really play any wind instrument so my understanding of these things was limited. I later tried to do a more ambitious synthesiser for a harp (tl;dr also software-driven but it used lasers instead of strings) for my final year university project but that never happened, and while I have a crude hardware prototype tucked in a closet somewhere, I never got around to writing any of the hard parts of the software. The biggest problem I had, and the main reason why this didn’t get anywhere, is that I just didn’t understand enough about real-time audio processing to get something useful. I still don’t.
                                                                                                      • An Amiga Workbench clone for Wayland. By the time enough of the cool features got implemented (e.g. multiple screens) I got so fed up with Wayland and Linux in general that I never wanted to finish it. Various bits and pieces, like an Amidock clone, got to a usable(-ish) state. This is the only project in this list that I didn’t really enjoy. I was already fed up with these things when I started it, I just didn’t really want to admit it. I don’t want to say anything about how this one could be improved and why I failed at it because I’m quite bitter over these things, but tl;dr I’d rather have all my teeth pulled out and swallow them than touch any of that stuff again.

                                                                                                      There were others, much smaller, these are the cool ones.

                                                                                                      All in all I think I finished very few of the side projects I started but I learned a lot out of all of them and many of them came in handy when doing stuff I actually got paid for. I have zero regrets for not finishing them. It’s important to finish some things but not all of them.

                                                                                                      Reading a long list of projects that failed sounds a bit like a long list of failures but really, they weren’t. I achieved most of my goals. If I had an infinite supply of free time I could probably finish the ones that were never finished because they were too ambitious for my level of knowledge at the time (e.g. the wind instruments thingie) but there are so many cool things that I don’t know how to make that it kinda feels pointless to use my free time doing the ones that I now know how to make.

                                                                                                      (Edit: I guess the point I’m trying to make is that no time spent hacking on something cool is truly lost, no matter what comes out of it in the end, and no matter how modest or grand the ambitions behind them. There’s a whole side project hustle mill going on these days and this whole “don’t send us a resume show us your Github profile” thing and I think it’s a con, and all it’s doing is making people afraid of doing things in their spare time, because they treat these things the way they treat projects they do at work. Computing was my hobby long before it became my profession, and it still is; finishing something “successfully” is beside the point when it comes to these things – their function is fulfilled as soon as a line of code is written or a piece of schematic is drawn, that brings me joy all by itself. Don’t fall into the trap of taking these things more seriously than you ought to. Most of us spend at least 8 hours/day agonising over whether something will be finished successfully or not – unless you enjoy that part of the job, there’s no reason to take it home with you.)

                                                                                                      1. 3

                                                                                                        I, too, have looked at Dwarf Fortress and concluded I couldn’t possibly top it. Perhaps the way to approach something like that is to bite off a chunk of DF and try to make it better, more realistic, more complex, or more fun. Of course, a lot of the magic of DF is the interconnectedness of the complex systems. But I can imagine one person making a very complex 2-person battle/dueling system, or a complex home decorator, or a terrain generator that goes higher into the sky or deeper into the earth, or a DF with birds instead of dwarves.

                                                                                                        1. 2

                                                                                                          An Amiga Workbench clone for Wayland. By the time enough of the cool features got implemented (e.g. multiple screens) I got so fed up with Wayland and Linux in general that I never wanted to finish it. Various bits and pieces, like an Amidock clone, got to a usable(-ish) state. This is the only project in this list that I didn’t really enjoy. I was already fed up with these things when I started it, I just didn’t really want to admit it. I don’t want to say anything about how this one could be improved and why I failed at it because I’m quite bitter over these things, but tl;dr I’d rather have all my teeth pulled out and swallow them than touch any of that stuff again.

                                                                                                          Holy cow this sounds cool. I’m trying to envision what this even would look like.

                                                                                                          Specifically because as you’re well aware I’m sure an Amiga “screen” was kind of a different animal from anything that exists in a modern desktop context, and I don’t know how you’d enforce that kind of sliding behavior with modern windowing systems.

                                                                                                          I just recently saw this project which bundles a fully emulated Amiga system into a Visual Studio Code package so you can compile, debug and run your Amiga code from a modern environment.

                                                                                                          1. 1

                                                                                                            Specifically because as you’re well aware I’m sure an Amiga “screen” was kind of a different animal from anything that exists in a modern desktop context, and I don’t know how you’d enforce that kind of sliding behavior with modern windowing systems.

                                                                                                            It’s been done before (to some degree) on X11 as well, see e.g. AmiWM (I think? I might be misremembering it, but I think AmiWM supported sliding screens. e16 had support for something like this a long time ago but I don’t recall if it could do split-screen). I only implemented a very rudimentary prototype which worked sort of like the famous spinning cube thing, except instead of mapping each desktop surface on a spinning cube, I just mapped it on different screen sections. I wasn’t really planning on adding it, so it was more of a hack I cobbled together; it only worked under some basic scenarios, but I’m sure it can be done with a little patience.

                                                                                                        1. 32

                                                                                                          I totally get what the OP is saying. I want technology that is stable, sustainable, liberating, and which respects my autonomy. But the only way we get there is free software. Not corporate-driven open source, and certainly not proprietary software. I wonder if Unix-likes are really the best foundation for building that future; I suspect they might not be.

                                                                                                          If you care about old hardware and alternative architectures, Apple isn’t your best bet. They only target whatever Apple is selling. They’re a hardware company; software is their side-hustle. Well, hardware and now rent-seeking in their gated community, by means of the app store. They’re also a company built on planned obsolescence and “technology as fashion statement”.

                                                                                                          As for stability, most of the big players don’t care, because the incentives aren’t present. The tech field has inadvertently discovered the secret of zero-point energy: how to power the unchecked growth of an industry worth trillions of dollars on little more than bullshit, a commodity whose infinite supply is guaranteed. Surely, this discovery warrants a Nobel Prize in physics. As long as the alchemists of Silicon Valley can turn bullshit into investments from venture capitalists, there will be no stability.

                                                                                                          For what it’s worth, I use an iPhone. It’s a hand-me-down; I wouldn’t have bought it new. It’s an excellent appliance, but I know that when I use it, Apple is in the driver’s seat, not me. And I resent them for it.

                                                                                                          1. 10

                                                                                                            But the only way we get there is free software. Not corporate-driven open source, and certainly not proprietary software.

                                                                                                            I was a big believer in that for a very long time, too, but I’m not too convinced that software being free is the secret sauce here. Plenty of free software projects treat users, at best, like a nuisance, and are actively or derisively hostile to other projects, sometimes in plain sight (eh, Gnome?). There’s a lot of GPL code that only has major commercial backers behind it, working on major commercial schedules and producing major commercial codebases, to the point where even technically-inclined users are, in practice, largely unable to make contributions, or meaningfully maintain community forks, even if the licensing allows it (see e.g. Chrome).

                                                                                                            I’m starting to believe that free licensing is mostly an enabler, not a guarantee of any kind. No amount of licensing will fix problems that arise due to ego, or irresponsibility, or unkindness. Commercial pragmatism sometimes manages to keep some of these things in check at the product development level (which, presumably, is one of the reasons why the quality of Linux desktop development has steadily declined as less and less money got poured into it, but I’m open to the possibility that I’m just being bitter about these things now…)

                                                                                                            1. 3

                                                                                                              Plenty of free software projects treat users, at best, like a nuisance, and are actively or derisively hostile to other projects, sometimes in plain sight

                                                                                                              I can personally attest to this. I know it all too well.

                                                                                                              largely unable to make contributions, or meaningfully maintain community forks, even if the licensing allows it (see e.g. Chrome).

                                                                                                              I feel this is an argument for having slower, surer process as well. Write a spec before writing and landing something. Have test plans. Implement human usability studies for new UI features. A tree that moves slower is inherently more stable (because new bugs can’t find their way in, and old bugs have longer to be fixed before it’s rewritten) and gives more opportunity for community involvement. But I know this is a controversial opinion.

                                                                                                            2. 7

                                                                                                              I want technology that is stable, sustainable, liberating, and which respects my autonomy. But the only way we get there is free software.

                                                                                                              The only way we get there is with a society that is stable, sustainable, and respects your autonomy. And we haven’t had that since at least the Industrial Revolution (for certain, specific quantities of stability, sustainability, and respect; those have never been absolute). The switch from craftsmanship to mass production made it kinda a lost cause.

                                                                                                              1. 12

                                                                                                                But the only way we get there is free software.

                                                                                                                I strongly suspect that’s not really true. We have to admit that the vast majority of open-source contributors do not have the work ethic (and why would they, they’re working on open source in their spare time because it’s fun) to push a project from “it fits my needs” to “everyone else can use it”. The sad situation is that most projects are eternally stuck in the minimal viable product stage, and nobody is willing to put in the extra “unfun” 80% of work to polish them and make them easy to use and stable.

                                                                                                                I know this is a problem I’m having, and I doubt I’m the only one.

                                                                                                                1. 7

                                                                                                                  This really is the problem.

                                                                                                                  Nobody wants to do usability studies, even informal ones. Hell, I just gather a group of friends + my Mum and have them sit at my laptop and tell me what they like and don’t like about the FOSS I’m writing, and from what I’ve gathered my UIs are miles better than most others’.

                                                                                                                  Nobody wants to write a spec or a roadmap. Personally, I grew up loving discussing things and learning, and I view requirements gathering as an extension of both of those activities, so it’s really enjoyable for me.

                                                                                                                  I run my FOSS projects kind of like how most corps used to run corp dev (somewhere between waterfall and agile). I feel that the quality is higher than most others, though I admit it’s subjective. And, you always like what you make because you made it so it works the way you expect.

                                                                                                                  But in my opinion, process really is the problem: if process were required, most FOSS wouldn’t exist, because nobody would really want to follow that sort of process in their free time. (Present company excluded.)

                                                                                                                  1. 7

                                                                                                                    I really want to see more UX designers get interested and involved in FOSS. It’s very clear that FOSS is driven primarily by sysadmins and secondarily by programmers. If we want a usable system then UX designers need to be involved in the conversation early and often.

                                                                                                                    1. 2

                                                                                                                      … and, at the risk of being catty, when there are UX designers, they are content to copy mainstream interfaces instead of innovating and trying to produce something original.

                                                                                                                      1. 3

                                                                                                                        In the case of Linux, they’re often content copying the obviously bad examples, too.

                                                                                                                    2. 4

                                                                                                                      Every once in a while I’ll get up the gumption to get organized, or maybe clean the house. Then I’ll use that energy, and I’ll try to put a system in place to raise that baseline for good. Later on, when I don’t have the same zeal, whatever system I invented almost certainly fails.

                                                                                                                      The only systems that seem to survive are those that are dead simple. For instance, when I must remember to bring something with me, I leave it leaning against the front door. That habit stuck.

                                                                                                                      So when it comes to development process, do you think there’s some absolute minimum process that could be advocated and popularized to help FOSS projects select the tasks that matter and make progress? …Lazy agile?

                                                                                                                      1. 4

                                                                                                                        I leave it leaning against the front door. That habit stuck.

                                                                                                                        That was a particularly vivid flashback of my teenage years you just gave me. Wow.

                                                                                                                        do you think there’s some absolute minimum process that could be advocated and popularized to help FOSS projects select the tasks that matter and make progress? …Lazy agile?

                                                                                                                        I’m going to have to think very hard on this one. I’m not sure what that would look like. Definitely a thought train worth boarding.

                                                                                                                        1. 4

                                                                                                                          So when it comes to development process, do you think there’s some absolute minimum process that could be advocated and popularized to help FOSS projects select the tasks that matter and make progress? …Lazy agile?

This is why, if I intend to work on FOSS that I feel might be “large” (e.g. something I can see myself working on for many months or years), I set up an issue tracker very early. For me, dumping requirements, ideas, and code shortcuts I’m taking into an issue tracker means that if I’m feeling sufficiently motivated I can power through big refactors or features that take a lot of work to add, but if I’m feeling less ambitious, I have some silly bug, like one where I decremented the same count twice, that takes only an hour or so of my time to fix and results in an actual, tangible win. That helps me keep forward momentum going. It’s what works for me, at least.

                                                                                                                          1. 3

                                                                                                                            Interesting perspective. Thanks for chiming in!

                                                                                                                      2. 6

What are the problem areas for you? I’m genuinely curious, as I have been using free and open-source software as a daily driver for a few years now, and for a while the worst part was the lack of gaming support. There are enough Linux-native titles on Steam right now to occupy my time that I haven’t even had to faff about with Proton yet.

I’ve been in teiresias’ camp for a while now: free software is the future.

                                                                                                                        1. 3

I am fine. I love to tinker with things even if it prevents me from “doing the work”. As a very basic example: if my network stops working after a package update, I lose 30 minutes to an hour finding what’s wrong and fixing it. However, there are people in the world for whom this kind of productivity loss is unacceptable.

                                                                                                                          I’m afraid that the open source community mostly develops for people like me (and you, from what you’re saying).

                                                                                                                      3. 11

The only way we get there is with a society that is stable, sustainable, and respects your autonomy. And we haven’t had that since at least the Industrial Revolution.

                                                                                                                        Wait, what?

                                                                                                                        Since the Industrial Revolution we’ve had pretty much constantly increasing wealth, human rights, health, autonomy, throughout almost all the world:

                                                                                                                        https://ourworldindata.org/uploads/2019/11/Extreme-Poverty-projection-by-the-World-Bank-to-2030-786x550.png

Sub-Saharan Africa was and continues to be a basket-case, but I don’t think anyone is blaming that on the Industrial Revolution. (Actually, leaving aside the racists … what are people blaming it on? Why is it that the rest of the world is dragging itself out of poverty, but Sub-Saharan Africa isn’t?)

                                                                                                                        1. 4

“Sub-Saharan Africa was and continues to be a basket-case, but I don’t think anyone is blaming that on the Industrial Revolution.”

You’re not accounting for the possibility of a linkage between the shutdown of the annual monsoon of sub-Saharan Africa in the 1960s and coal-burning in Europe releasing sulphur dust, which rose during the Industrial Revolution. Obviously there were also politically driven things going on in the area back then too. See one argument here: https://extranewsfeed.com/the-climate-doomsday-is-already-here-556a0763c11d , referencing http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.652.3232&rep=rep1&type=pdf and http://centaur.reading.ac.uk/37053/1/Dong_etal_revised2.pdf .

                                                                                                                          “Since the Industrial Revolution we’ve had pretty much constantly increasing wealth, human rights, health, autonomy, throughout almost all the world”

Be really careful about interpreting data-driven arguments (Pinker, Rosling, ourworldindata, etc.) as truth – they’re popular in the tech world, rationalist/progress circles, 80,000 Hours, etc. I find myself unlearning parts of these narratives and trying to be more open-minded to less rigorous arguments these days. The argument I’ve heard is that someone can be earning more money (which gets recorded on paper as progress) yet actually be undernourished compared to smallholder subsistence living (which the data may record as living in poverty). You also have to factor in the changes in ecological function and land use that come with more people living in an urban niche.

                                                                                                                          As far as how all this relates back to free software, I don’t know enough - I see problems and interesting ideas both in free software movements and non-free software 🤷‍♂️

                                                                                                                          1. 5

                                                                                                                            As far as how all this relates back to free software, I don’t know enough - I see problems and interesting ideas both in free software movements and non-free software 🤷‍♂️

                                                                                                                            It was just me chiming in as usual whenever anyone blames capitalism, or industry, or so-on for all the world’s ills[1].

                                                                                                                            Capitalism and industry have been responsible for lifting billions out of the default state of humanity: miserable poverty, disease, and tribal warfare.

And in first-world countries, we’ve gone in one generation from “one expensive, fragile, and basically toy-like 8-bit microcomputer” to “everyone in the family owns multiple computers, including hand-held, battery-powered supercomputers with always-on high-speed Internet connections”. 90% of Australians, for example, are regular Internet users. 90%!

                                                                                                                            Meanwhile the proposed alternatives have been responsible for millions of deaths in the last century alone.

                                                                                                                            [1] Hyperbole, but not far off the mark.

                                                                                                                            1. 2

                                                                                                                              There needs to be a middle-ground between “pure” capitalism and “pure” socialism.

                                                                                                                              Both of them scare the living crap out of me.

                                                                                                                              But both of them also have very good, very useful ideas that the world needs to utilise.

                                                                                                                              1. 3

                                                                                                                                The good ideas present in socialism (like caring for those who, through no fault of their own, are incapable of caring for themselves) are in no way incompatible with pure[1] capitalism, and are also far from unique to socialism.

All that socialism implies is that people are forced to fund that care, as opposed to doing so voluntarily (through charity, mutual societies, unions, religions, etc.).

                                                                                                                                To put it in hacking terms: socialism is a brute-force kluge ;)

                                                                                                                                [1] By which I assume you mean laissez-faire.

                                                                                                                                1. 1

That type of socialism is like the GPL: it enforces behavior that one fears might not happen voluntarily, even though, under capitalism, the resources to do it voluntarily would be available.

A country with mandatory private health insurance uses exactly the same maths to figure out costs, yet people see that as a huge problem because the USA fscked things up. I can attest that universal health care isn’t systematically universal either: the lines are long, with a heavy emphasis on anything that’s not preventive, even denying treatment due to cost.

That worries me, since competing companies are incentivized to keep their customers. Is that analogous to closed-source software? Maybe.

But often FOSS seems to behave like a monopoly superorganism that can do whatever it wants, like the new Gnome UI stuff. Good thing there’s at least some competition.

                                                                                                                                  1. 1

                                                                                                                                    That type of socialism is like the GPL, in enforcing behavior that one is afraid might not happen voluntarily

                                                                                                                                    Except that it’s unlike the GPL in that if you don’t want to use GPL software, you’re free to choose something else. If you don’t want to license your software under the GPL, you’re free to choose a different license.

                                                                                                                                    Socialism doesn’t give those subject to it any choice in the matter.

                                                                                                                                    (Edited to clarify: as currently implemented by mainstream politics. Voluntary communes and the like are just fine by me. Not how I’d choose to live personally, but a perfectly valid choice. And, note, completely compatible with laissez-faire capitalism.)

                                                                                                                                    1. 1

                                                                                                                                      That’s a fair extension to my analogy, sure. This does certainly start to break down if people compare BSD-licensed contributions and voluntary societal ones. Sadly that often degrades quite quickly into rich people buying a clean conscience without actually giving a crap, which is a nice parallel for Google’s FOSS effort.

                                                                                                                                      I do agree with you and personally don’t really care if good charity came from a bad person/party, unless there are nasty strings attached.

Edit: “bad” was maybe the wrong word. Nasty strings include terms and conditions, but also: you can’t buy yourself clean with money from child trafficking. These terms are too vague and subjective.

                                                                                                                          2. 1

The common explanation I’ve heard in left-leaning circles is that because the countries are dirt poor, they have to take loans from institutions like the IMF, and those loans have incredibly shitty agreements which basically guarantee that the country remains poor, because all of the value generated in that country is shipped over to the rich parts of the world. Many of them, for example, have enough fertile land and water to keep the population fed, but that land and water is instead being used to grow cash crops for the richer countries, which is part of the reason we enjoy cheap T-shirts and coffee. There are also a lot of other ways the current economic world order kind of screws over the poorer countries; a lot of it is described in the Wikipedia article on neocolonialism.

                                                                                                                            Some people go as far as to claim that capitalism requires an underclass, so in social democracies which try to achieve some degree of equality within the nation, the underclass has to be out-sourced to places like Africa or China. (That certainly seems to be what’s happening, but whether it’s required by the economic system or just a flaw in the current implementation of it is up for debate.)

                                                                                                                            Personally, I find those explanations fairly convincing, and I haven’t heard any good refutations. I’m far from an expert on the topic though, so there may be other, good explanations. My personal guess would be that the reason this topic isn’t discussed that much (at least in non-racist circles) is that we basically have to conclude that the rich parts of the world are responsible for perpetuating the problem, and that acknowledging this and fixing it would be really fucking expensive.

                                                                                                                            1. 1

The book The Dictator’s Handbook (summarized in Rules for Rulers) offers another explanation. Foreign aid is a quid pro quo for policy changes. Aid recipients accept the loans and use them to enrich their government’s supporters.

                                                                                                                        2. 1

                                                                                                                          I use an iPhone. … I know that when I use it, Apple is in the driver’s seat, not me. And I resent them for it.

                                                                                                                          Can you say more about the origin of that resentment? I’ve seen versions of this perspective often and I’d like to understand where it comes from.

                                                                                                                          1. 3

                                                                                                                            For me, it’s a feeling of learned helplessness.

                                                                                                                            If I’m using my PinePhone and there’s a problem, it’s usually something I can fix. Even if it means running the onboard diagnostics and ordering a new motherboard (yeah, my WiFi just failed), that’s an intended use case. Sure there’s a binary blob or two involved, and I can’t personally repair surface-mount boards … but to a far greater extent than either an iPhone or an Android phone, it’s my device.

                                                                                                                            Contrast that with, say, an old Samsung phone. Want to upgrade the OS? You’re SOL if Samsung and/or your carrier has stopped shipping updates. Want to root the device, or swap OS? Expect a bunch of software to stop working (think Google Play, and games with overzealous anti-cheat for starters). Want to repair the device? Go buy some specialist tools and cross your fingers … but probably don’t bother, because OS updates aren’t a thing any more anyhow.

                                                                                                                            1. 5

                                                                                                                              but to a far greater extent than either an iPhone or an Android phone, it’s my device.

                                                                                                                              It is your device if you understand and enjoy technology to that extent, and I think this is an important point to drive home. Imagine you have a friend Foo. Foo uses a Mac, but is getting real tired of their Mac constantly telling them they can’t install a piece of software or that some application of theirs can’t read from a directory. Foo hears that all their cool tech friends are on Linux, so maybe Foo should be too. Foo installs a distro, and then tries to plug in two monitors with different DPIs. Big mistake; nothing is scaled properly. Foo searches online and sees references to font scaling, HiDPI support, this thing called Gnome, and other stuff. Foo hops into an online chatroom to ask a question then gets asked what their current Window Manager is. What?? Someone in the chat tells Foo that this is why they never use HiDPI displays, because it’s too much work to configure. What in the world, they just don’t use something because Linux doesn’t support it??

Half of my own knowledge of Linux comes from having gotten things to work on Linux. I remember in the mid-2000s when I had to run wpa_supplicant by hand on my wireless adapter and then add in some custom IP routes to make it play well with my router. I learned about ALSA by trying to figure out why my audio didn’t work on startup (turns out the device changed device IDs on boot, and configs are based on the device ID – how fun). I learned about X11 and Xorg when troubleshooting issues with resolution, compiling display drivers, setting refresh rates, HiDPI, you name it. I learned LPR and CUPS by trying to get my printers to work. For me, this stuff is fun (to an extent – I don’t exactly enjoy having to whip out xrandr when trying to get my laptop to display slides for a presentation). But to the average user who is somewhat interested in freedom or configurability, “owning your device” shouldn’t mean having deep expertise in computing to troubleshoot an issue.

                                                                                                                              1. 2

                                                                                                                                It is your device if you understand and enjoy technology to that extent, and I think this is an important point to drive home.

                                                                                                                                Sure, absolutely. I was merely answering the original question from my own perspective, as requested by @kevinc. (Well, to be fair, he didn’t request it from me, but I’m presumptuous like that ;-P ).

                                                                                                                                What in the world, they just don’t use something because Linux doesn’t support it??

                                                                                                                                The irony! I’m posting this from a 1080p external monitor that I bought, at the time, because setting up display scaling on FreeBSD was on my TODO list.

                                                                                                                                1. 1

                                                                                                                                  I did appreciate the bonus perspective. :)

                                                                                                                              2. 3

Samsung phones are a really bad example to use, since whatever-replaced-Cyanogen is still supporting the S3 last I checked (and the S3 is 11 years old at this point). Since the thing has a replaceable battery, you could reasonably expect to use it as a basic phone for years to come (even if the memory is anaemic by modern Android standards).

You might have slightly better luck using Apple in your example, but they’re on a 7–8 year support cycle for OS updates too. Wait 8 years and see if you can still replace your PinePhone’s motherboard. I’d be moderately surprised if Pine64 were still making the board by then. (I know they have an LTS A64, but I don’t know what commitments, if any, they’ve made regarding the phone.)

                                                                                                                                1. 2

                                                                                                                                  Samsung phones are a really bad example to use, since whatever-replaced-Cyanogen is still supporting the S3 last I checked (which is 11 years old at this point)

Yeah, but Samsung isn’t. And a number of vendors whose software “supports Android” flat-out refuse to run it on phones with ROMs other than those approved by the manufacturer and carrier.

                                                                                                                                  That some enterprising open-source developers have managed to hack part of the way around the problems posed by this awful ecosystem is great, but it doesn’t diminish the problems, or most of the feelings of helplessness.

                                                                                                                              3. 2

Sure. Sacrificing autonomy begets dependence. Dependence begets learned helplessness, which in turn begets dependence, in a vicious cycle. Sometimes there are perfectly good reasons to sacrifice personal autonomy, such as when the needs of the many are in conflict with the needs of the one. A perfect example of that situation is COVID-19 and lockdowns + mask mandates, but that discussion isn’t relevant here. Needless to say, when I feel that some company is constraining my power to make decisions, I turn resentful. When I use an iProduct from Apple, terms and conditions apply. Terms and conditions are those things that a conquering army dictates to a surrendering foe.

                                                                                                                                1. 2

Thanks for elaborating! If I understand correctly, part of the problem is the popular norm of accepting the terms and conditions rather than thinking critically about them. That would leave those who do think critically and opt out relatively isolated, fighting an uphill battle. I for one am unhappy with the QWERTY keyboard standard, not that there’s anything nefarious about it; it’s just something people don’t think critically about or consider alternatives to. We could have better, but we let inertia win. I don’t really have an entity to be resentful of, but I might if a corporation were behind it.

                                                                                                                            1. 17
                                                                                                                              Joy

                                                                                                                              (…) computers are more than just a means to an end.

The problem is that these “modern” computers are also inferior to the older ones in the “getting things done” aspect (and I don’t mean the GTD methodology). Nowadays, you can rarely just sit down at the desktop and do your thing the way you wanted to. Even the tools and applications designed to do the thing you wanted often produce different, inexact results, or have many artificial limitations and annoyances that were missing back then.

                                                                                                                              1. 15

There’s something that a friend of mine and I grumpily called the WEBA point (from Why Even Bother Anymore) way back. I promise it’s somewhat neat and relevant.

Every product line in every industry comes to a point where it’s so good that, if you only take into account what you see in the first five minutes of usage, it might as well be finished. I mean, yes, there are always bugs to fix and new features to add, from ASLR to sandboxing engines and from new 3D graphics APIs to low-latency audio features, but the “common” parts that everyone uses, the ones that virtually all users care about, might as well be done. At that point, it feels like there’s no point in “upgrading” anymore – it’s already good.

                                                                                                                                As a product starts approaching that point, various teams start being pressured into justifying their existence. That’s e.g. how Windows has twenty years’ worth of useless Find File… dialogs in spite of having had a functional, reliable one at one point. Because you can’t just go to your boss and say you know what, boss, the thing we had last time was really good, why ruin a good thing? How about we just fix the bugs we found and add these neat features? How’s your boss going to get a bonus for that?

As it goes past that point, the price of coming up with an even better version becomes too high to justify keeping the whole machine around, and companies start looking at alternative revenue streams and packaging solutions. Visible changes are prioritised because they make it easy to sell new versions as fresh, even when they’re half-baked – that’s how we ended up with Windows 10’s design, which is so dysfunctional that it has probably caused more of a resurgence of 1990s nostalgia than the latest Star Trek TV shows.

Now there is some inherent conflict to this – I wish my Windows 10 machine had Windows 2000’s functional UI instead of this nondescript thing that mostly consists of whitespace, but I can’t say I miss plugging in the network cable, going to the kitchen to microwave a burrito, and coming back to a freshly wormed computer that’s going to automatically shut down in 90 seconds.

                                                                                                                                Overall I guess we’re in a better place. But based on how 2000 looked vs. 1979 back in 2000, I for one hoped we’d have been in a far better place in 2021 :).

                                                                                                                                1. 0

                                                                                                                                  Not gonna go into the whole debate but Windows 10 + ClassicShell is really good in my opinion.

                                                                                                                                  I have used Win since 95 and this is pretty darn good. Stability, speed and performance are off the charts in comparison.

                                                                                                                                  1. 3

                                                                                                                                    Yeah, I mean, technologically, Windows 10 is by far the superior version of Windows. But I personally just can’t stand the user interface… which makes me sad, because Windows 7 is probably my favorite modern operating system.

                                                                                                                                    1. 2

I have a Windows 10 machine I use for work with that setup. It’s definitely good and, as I mentioned above, definitely better than Windows 95 ever was. I don’t wanna go back to that. But as far as the UI goes, not even ClassicShell can save you from a lot of things (flat, thick titlebars need registry hacks to get them to a usable size, and they’re still flat; the Settings dialogs are messy and lack all sorts of things; the Search feature – between marketing-driven crippling and legit difficulty – couldn’t find a document if it were the only thing on the hard drive; you can’t use wallpapers with both light and dark colours because the icon labels have no background; and so on and so forth).

                                                                                                                                  2. 8

                                                                                                                                    Open Source software mitigates or removes that: GNU Emacs is GNU Emacs in all important respects, not “Visual GNU Emacs .Net++ 2k21.8 Now With Ribbon Bar” or “GNU POWER iMacs Butterfly Keyboard” or whatever it would have turned into by now under the stewardship of some proprietary company obsessed with chasing trends off a cliff.

                                                                                                                                    1. 22

                                                                                                                                      I think this is valid for some very conservative projects like Emacs, but if you look at things like GNOME or Firefox, I think it’s clear that open source rarely succeeds in protecting the user from these things in practice.

                                                                                                                                      1. 4

                                                                                                                                        I’m not sure about “rarely” but I do take your point; I will say, however, that Open Source gives me the option to not use stuff much more effectively than closed source does, like how I can stick with the very stable Window Maker and not have to switch to Gnome and its UI treadmill regardless of which OS and distro I use.

                                                                                                                                        1. 15

                                                                                                                                          It really varies a lot. Like when GNOME changed things in unpopular ways, forks like Cinnamon and Mate happened, because the codebase was comprehensible and maintainable. But when Firefox made unpopular changes, you ended up with Palemoon, which has pretty severe security issues due to the complexity and incomprehensibility of the codebase.

                                                                                                                                          1. 1

                                                                                                                                            Yeah, some changes are inevitable. I also stuck to the very stable WindowMaker for years (some of my patches ended up in the master branch, too) but sooner or later you have to use modern applications – like Firefox – and sooner or later you stumble into GTK3 land and it’s just not worth it anymore.

                                                                                                                                          2. 1

                                                                                                                                            I think this uncovers the important distinction: that the changes occur in free software because they are changes that a community around free software wants, not because they are changes that an executive committee trying to drive services revenue wants. Sometimes those are motivated by commercial reasons, as with “open source version of X” projects where X changes its feature set or interface, but oftentimes they are not, as with emacs not having its butterfly ribbons.

                                                                                                                                            And I get the same from some retrocomputing (or at least old computing) projects. In my Amiga life, I can use a real, old Amiga and know that C= aren’t going to make me subscribe to Commodore One fitness tv music plus just to get new features, because there aren’t new features! But I can also use AROS and know that it isn’t going to pull in a weird direction, and is going to work on my newer computers (obviously Amigas are old enough that I can emulate them on newer computers at better than full speed anyway).

                                                                                                                                          3. 2

                                                                                                                                            Yeah. I don’t see how this has anything at all to do with general development models and/or software licenses.

                                                                                                                                          4. 9

                                                                                                                                            I strongly disagree - it’s just that FLOSS is subject to the same problem via a different set of incentives. jwz famously described it as The CADT Model:

                                                                                                                                            This is, I think, the most common way for my bug reports to open source software projects to ever become closed. I report bugs; they go unread for a year, sometimes two; and then (surprise!) that module is rewritten from scratch – and the new maintainer can’t be bothered to check whether his new version has actually solved any of the known problems that existed in the previous version.

                                                                                                                                            1. 3

From the outside that’s not the impression one gets. For the longest time there were “GNU Emacs” and “Lucid Emacs” competing for mindshare; now it’s “Doom Emacs” versus “Quake Emacs” or something similar. Vanilla (GNU) Emacs doesn’t project an impression of “finished and ready”: more that of a toolbox and an enormous time sink to configure just right.

                                                                                                                                          1. 17

                                                                                                                                            I disagree on one point:

                                                                                                                                            Quite literally, the only way to use HyperCard is to get a hold of an old Mac – or emulate it, but emulation always falls short of the real deal

Emulation is often better than the original. I ran OPENSTEP 4.2 for i486 in a VM for a while. Most 486 systems were a bit underpowered for what OPENSTEP really wanted, but on a 1GHz machine it was amazingly responsive. The emulated display was also higher resolution and had better colour depth than most contemporary hardware.

Somewhat more obscurely, the best spreadsheet that I’ve used for keyboard navigation is the one that came with the Psion Series 3a (I had a Series 3 with the spreadsheet on a ROM cartridge). There’s a Series 3x emulator for DOS and you can run it in DOSBox. Most of the Psion applications were intended to be portable across the entire range and so didn’t hard-code anything about the screen size. The emulator lets you run at 640x480, whereas the 3a had a 480x160 screen, so you get a huge amount more screen real estate. And, yes, I do find it a bit depressing how much more usable a spreadsheet in a late-’90s emulator for a mid-’90s platform, running on an early-2000s emulator for a late-’80s OS, is than anything more recent.

                                                                                                                                            1. 8

                                                                                                                                              If you like that spreadsheet interface, you might enjoy visidata. I recently picked it up because I needed a fast way to deal with very large CSVs, and it reminded me in a very good way of keyboard-driven TUI spreadsheets from that era.

                                                                                                                                              1. 4

                                                                                                                                                I never used TUI spreadsheets but I love Visidata. I like doing data analysis, and Visidata is a fantastic way of doing some easy exploratory analysis at a glance before hammering at the data in my actual environment of choice.

                                                                                                                                                1. 4

                                                                                                                                                  For the record, the Psion UI wasn’t a TUI; it was a proper GUI, albeit keyboard-driven.

                                                                                                                                                  1. 1

                                                                                                                                                    I think I’m the one who made it sound as though it was. I only meant that it felt like a TUI (again, in a very nice way) to me.

                                                                                                                                              2. 3

                                                                                                                                                Hm, in terms of performance and, to a limited degree, usability, I think you’re right. But there’s more to a computer than that.

                                                                                                                                                Most obviously, old user interfaces were designed for CRT monitors, and you just don’t get the same experience out of a flat-screen monitor. Take the screenshot of HyperCard that I included in my blog post, for example. It really doesn’t convey how HyperCard looks on my iMac G3. It looks way too sharp, and the colors are a bit off.

                                                                                                                                                I think you can get a lot of mileage out of emulation, but it completely misses the hardware, which is half of the picture. It can never truly convey what it was like to use the computer.

                                                                                                                                                1. 2

                                                                                                                                                  I think you can get a lot of mileage out of emulation, but it completely misses the hardware

I’m not sure that’s fair. Emulating hardware perfectly is very hard, but most emulators I’ve experienced at least try. On the other hand, some people /want/ crisp and sharp, even if it’s not period-accurate. I’ve also seen more than my fair share of poor “scan line” implementations that don’t resemble anything I saw in my time on CRTs.

                                                                                                                                                  1. 1

                                                                                                                                                    No, that’s what I mean. Emulators emulate processors, and they succeed reasonably well at that, but they rarely emulate other aspects of the hardware, such as the mouse, keyboard and monitor. And when they try, they fail, just as you said, because software just can’t emulate certain aspects of the physical world.

                                                                                                                                                  2. 2

                                                                                                                                                    Most obviously, old user interfaces were designed for CRT monitors, and you just don’t get the same experience out of a flat-screen monitor.

                                                                                                                                                    Other than nostalgia, though, why would someone want this? CRTs were terrible. I understand nostalgia as a side gig, I actually own a decent collection of vintage computers (late 70s to early aughts), but staring at (and having to fiddle with) a CRT all day would drive me nuts.

                                                                                                                                                    1. 1

                                                                                                                                                      Well, I’d say it’s not as simple as that. Modern monitors are great for modern operating systems, but I wouldn’t want to use one with OS 9, because the text will look too sharp. Sharper isn’t better – especially if the user interface in question has been designed with lower sharpness in mind. It will arguably be displayed incorrectly on a flat-screen monitor.

                                                                                                                                                      Also, I don’t have the same experience with CRT monitors that you have. As a specific example, the one built into my iMac works fine out of the box, no fiddling required, even twenty years later.

                                                                                                                                                    2. 1

                                                                                                                                                      On the other hand, an old 21” CRT is way cheaper, easier to get your hands on, and generally easier to repair, than an old Mac. The iMac G3 is a bit more recent so it’s not as obvious, I guess, but when it comes to ‘90s-era software and earlier, there are parts of the experience – like 30 seconds’ worth of disk thrashing – that you only miss for a few minutes.

                                                                                                                                                      (Edit: albeit, I’ll give you that, 30 seconds’ worth of disk thrashing brings me great joy on a Saturday evening :-) )

                                                                                                                                                      1. 5

                                                                                                                                                        Yes, I generally agree. A good thing about buying 80s/90s computers today, though, is that you can get a hold of things that would have been far out of your price range at the time. You don’t have to settle for the average of whichever time period you’re interested in.

                                                                                                                                                        1. 2

Based on your blog I think you already know this, but others may be disappointed to learn that it depends a little on what your hobby is, now that retrogaming is big business. A while back – I don’t know if it’s still the case – C64s sold for outrageous prices; hell, C64 parts sold for outrageous prices, despite not really being collectors’ items.

                                                                                                                                                          Things are a little better for x86 beige boxes though, yeah, the kind of systems I was drooling over in magazine ads can now be acquired for slightly pricier than average peanuts :-).

                                                                                                                                                          1. 2

                                                                                                                                                            Yes, that is definitely the case. I’m also lucky to be interested in the more boring 90s computers! :-)

                                                                                                                                                            1. 2

It’s not just C64s anymore. Apple IIs are regularly going for multiple hundreds of dollars. You can’t get a VIC-20 for less than $100! The only cheap things from the 80s are relatively unloved boxes, like the TI-99/4A and Timex Sinclairs (the American ZX-81). Even those are rising faster than inflation.

’90s stuff is all expensive now too, because of the capacitor plague, and because the CMOS batteries are all leaking and destroying parts and PCBs.

                                                                                                                                                            2. 2

                                                                                                                                                              Just note the inevitable tension between retrocomputing for an authentic period experience, and retrocomputing as a romantic exercise in self delusion about what the past might have been.

                                                                                                                                                              I have a Pentium 3 for reminiscing about a late 90s PC experience. The device is a few years newer than truly late 90s, and it has a 2005 GPU which means it can run NT 4 in 1080p on a flat panel, which people didn’t do in the 90s.

                                                                                                                                                              Recently I thought it died, and looking on eBay, $100 now gets a drop-in replacement board with TWO processors. It may be cheap and available, but it just increases the gap between my fantasy 90s and the real 90s.

                                                                                                                                                        2. 1

                                                                                                                                                          If you have a link on how to run Psion stuff under emulation I think my dad (hard core Psion fan) would be very interested!

He most likes the “freeform” database application, which doesn’t seem to have been ported to or replicated in later software.

                                                                                                                                                          1. 4

I think this is the emulator I’m using. It works great under DOSBox (just unzip it into the DOSBox shared directory and run it). There’s a .ini file that tells it what screen resolution to use. I think it will go up to whatever DOSBox is using, but I’ve not tried it above 640x480. You can also use DOSBox’s scaling to get a bigger window on a more modern monitor.

                                                                                                                                                        1. 12

                                                                                                                                                          This is morally equivalent to invoking gdb against yourself. All bets are off. Cute! :)

                                                                                                                                                          1. 6

                                                                                                                                                            Opening files should be considered unsafe.

                                                                                                                                                            1. 16

                                                                                                                                                              This was actually half-seriously proposed, and the conclusion of the conversation was basically what’s described in the blog post: some things are outside of our capacity to reasonably model, and making something like opening files unsafe would be so unwieldy as to get in the way of people trying to use Rust productively.

                                                                                                                                                              Any formal model proves things only under the assumptions of the system, and usually one of those assumptions is “memory doesn’t just change from external forces.” If you violate that (this trick / cosmic rays or hardware failure flipping a bit / Rowhammer) then you can do things the model claims to prevent. However, these events are unlikely, so you still get strong, high-assurance conclusions from models like Rust’s various safety mechanisms.

                                                                                                                                                              1. 5

                                                                                                                                                                Yes. I just opened an issue on the Monte reference interpreter to discuss it. Because Monte is a capability-aware language, we should expect the runtime to help us here.

                                                                                                                                                                In Monte, opening files is somewhat unsafe. Specifically, we only allow opening files through an “entrypoint capability”; these are special objects that are passed to the entrypoint where a program begins execution. The capability to open files is hopefully distinct from the capability to walk the current process’s heap, and currently they have two different names (makeFileResource and currentProcess).

                                                                                                                                                                1. 2

                                                                                                                                                                  A program might have been invoked with a copy of gdb attached to its stdout. A program might print “please attach gdb to this process and type in the following commands…” on a terminal. ;)

                                                                                                                                                                2. 5

                                                                                                                                                                  That was once very useful to get a backtrace on crash on an unusual system.

                                                                                                                                                                  Starting gdb at process start and writing the relevant commands to its stdin in the SEGV handler worked a treat.
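
                                                                                                                                                    For the curious, the mechanism is roughly this (a minimal sketch, not the original code; Linux-only, error handling omitted): launch an idle gdb with a pipe on its stdin when the process starts, then have the SIGSEGV handler tell it to attach to us and print a backtrace.

                                                                                                                                                        #include <csignal>
                                                                                                                                                        #include <cstdio>
                                                                                                                                                        #include <sys/prctl.h>
                                                                                                                                                        #include <unistd.h>

                                                                                                                                                        static int    gdb_stdin = -1;  // write end of the pipe feeding gdb
                                                                                                                                                        static char   gdb_cmds[64];    // prebuilt: snprintf isn't signal-safe
                                                                                                                                                        static size_t gdb_cmds_len;

                                                                                                                                                        static void on_segv(int) {
                                                                                                                                                            // Only async-signal-safe calls in here: write(), not printf().
                                                                                                                                                            write(gdb_stdin, gdb_cmds, gdb_cmds_len);
                                                                                                                                                            sleep(10);  // crude: give gdb time to attach and print before we die
                                                                                                                                                            _exit(1);
                                                                                                                                                        }

                                                                                                                                                        int main() {
                                                                                                                                                            int fds[2];
                                                                                                                                                            pipe(fds);
                                                                                                                                                            pid_t child = fork();
                                                                                                                                                            if (child == 0) {            // child: an idle gdb awaiting commands
                                                                                                                                                                dup2(fds[0], STDIN_FILENO);
                                                                                                                                                                close(fds[1]);
                                                                                                                                                                execlp("gdb", "gdb", "-q", (char *)nullptr);
                                                                                                                                                                _exit(127);
                                                                                                                                                            }
                                                                                                                                                            close(fds[0]);
                                                                                                                                                            gdb_stdin = fds[1];
                                                                                                                                                            gdb_cmds_len = (size_t)snprintf(gdb_cmds, sizeof gdb_cmds,
                                                                                                                                                                                            "attach %d\nbt\ndetach\nquit\n",
                                                                                                                                                                                            (int)getpid());
                                                                                                                                                        #ifdef PR_SET_PTRACER            // kernels with Yama need opting in
                                                                                                                                                            prctl(PR_SET_PTRACER, (unsigned long)child, 0, 0, 0);
                                                                                                                                                        #endif
                                                                                                                                                            signal(SIGSEGV, on_segv);

                                                                                                                                                            volatile int *p = nullptr;
                                                                                                                                                            *p = 42;                     // boom: the handler asks gdb for "bt"
                                                                                                                                                        }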

                                                                                                                                                                  1. 3

                                                                                                                                                    You jest, but wait a few more years and you’ll find this in commercial codebases around the world. Sooner or later someone will just need to do a damn typecast and be done with it, because clean code is important but we have to provide value for our customers first and foremost, or something like that.

                                                                                                                                                                    1. 5

                                                                                                                                                      Did you read the code? Someone lazy would just use actual unsafe code. Or, if worst comes to worst, they might manually muck with pointers to change an enum discriminant in some strange way. But opening their own process memory map as a file and fiddling with that directly? That’s just bananas.

                                                                                                                                                      Files aren’t unsafe in Rust; that’s the author’s whole reason for taking this approach. So I highly doubt opening /proc/self/mem will become common in commercial codebases, given how trivially you can execute the same approach within an unsafe block.

                                                                                                                                                                      I get the cynicism about software quality, especially in commercial software. But I just think this one is a stretch.

                                                                                                                                                                      1. 1

                                                                                                                                                        Someone lazy would indeed just use actual unsafe code, which is why Rustverity (when such a thing exists, expensive consultants and all) is going to pop up a big red warning about unsafe code, and eventually the people who write it will have uncomfortable meetings with their manager (or worse, their manager will have uncomfortable meetings with their manager about how unsafe the codebase is). So if you’re lazy, that’s a no-go. Importing the totally_safe_transmute crate and using it, on the other hand, won’t be a problem.

                                                                                                                                                                        There’s an old saying that determined FORTRAN programmers can write FORTRAN in any language, but it holds true for a lot of languages, including C++ :-).

                                                                                                                                                                        Of course this is mostly cynicism but hey, that way, I’m rarely disappointed!

                                                                                                                                                                        1. 4

                                                                                                                                                          Don’t worry: the totally-safe-transmute crate will get a permanently unfixed advisory in the RustSec database, and cargo-audit will pop up a big red warning. Rust people do care about these issues.

                                                                                                                                                                  1. 8

                                                                                                                                                                    In the Arduino world, everything is done in C++, a language which is almost never used on 8-bit microcontrollers outside of this setting because it adds significant complexity to the toolchain and overhead to the compiled code.

                                                                                                                                                                    I don’t buy this. C++ is C with extra features available on the principle that you only pay for what you use. (The exception [sic] being exceptions, which you pay for unless you disable them, which a lot of projects do.)

                                                                                                                                                                    The main feature is classes, and those are pretty damn useful; they’re about the only C++ feature Arduino exposes. There is zero overhead to using classes unless you start also using virtual methods.
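
                                                                                                                                                    A quick way to see that (hypothetical Led classes, invented for this comment): the non-virtual version is the same size as the bare struct and every call is direct; adding virtual is what buys each object a vtable pointer and each call an indirection.

                                                                                                                                                        #include <cstdio>

                                                                                                                                                        class Led {
                                                                                                                                                            int pin_;
                                                                                                                                                        public:
                                                                                                                                                            explicit Led(int pin) : pin_(pin) {}
                                                                                                                                                            void toggle() {}          // non-virtual: direct call, no vtable
                                                                                                                                                        };

                                                                                                                                                        class PolymorphicLed {
                                                                                                                                                            int pin_ = 0;
                                                                                                                                                        public:
                                                                                                                                                            virtual void toggle() {}  // virtual: vtable pointer + indirect call
                                                                                                                                                            virtual ~PolymorphicLed() = default;
                                                                                                                                                        };

                                                                                                                                                        int main() {
                                                                                                                                                            // Typically prints "4 16" on a 64-bit host: the vtable pointer
                                                                                                                                                            // (plus padding) is the per-object price of virtual.
                                                                                                                                                            std::printf("%zu %zu\n", sizeof(Led), sizeof(PolymorphicLed));
                                                                                                                                                        }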

                                                                                                                                                                    The C++ library classes will most definitely bloat your code — templates are known for that — but again, you don’t have to use any of them.

                                                                                                                                                    (Aside: can someone explain why anyone’s still using 8-bit MCUs? There are so many dirt-cheap and low-power 32-bit SoCs now; what advantage do the old 8-bit ones still have?)

                                                                                                                                                                    1. 9

                                                                                                                                                      (Aside: can someone explain why anyone’s still using 8-bit MCUs? There are so many dirt-cheap and low-power 32-bit SoCs now; what advantage do the old 8-bit ones still have?)

                                                                                                                                                      They’re significantly cheaper and easier to design with (and thus less demanding in terms of layout, power supply requirements, fabrication and so on). All of these are extremely significant factors for consumer products, where margins are extremely small and fabrication batches are large.

                                                                                                                                                      Edit: as for C++, I’m with the post’s author here – I’ve seen it used on 8-bit MCUs maybe two or three times in the last 15 years, and I could never understand why. If you’re going to use C++ without any of the ++ features except classes, and even then you have to be careful not to do whatever it is you shouldn’t do with classes in C++ this year, you might as well use C.

                                                                                                                                                                      1. 3
                                                                                                                                                                        • RAII is a huge help in ensuring cleanup of resources, like freeing memory.
                                                                                                                                                                        • Utilities like unique_ptr help prevent memory errors.
                                                                                                                                                                        • References (&) aren’t a cure-all for null-pointer bugs, but they do help.
                                                                                                                                                        • The organizational and naming benefits of classes, function overloading, and default parameters are significant IMO: stream->close() vs having to remember IOWriteStreamClose(stream, true, kDefaultIOWriteStreamCloseMode).
                                                                                                                                                                        • As @david_chisnall says, templates can be used (carefully!) to produce super optimized type-safe abstractions, and to move some work to compile-time.
                                                                                                                                                        • Something I only recently learned is that for (auto x : collection) even works with plain C arrays, saving you from having to figure out the size of the array in more-or-less fragile ways (see the sketch below).
                                                                                                                                                                        • Forward references to functions work inside class declarations.

                                                                                                                                                                        I could probably keep coming up with benefits for another hour if I tried. Any time I’m forced to write in C it’s like being given those blunt scissors they use in kindergarten.
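
                                                                                                                                                        A tiny sketch of two of those bullets (everything here is invented for illustration): RAII through unique_ptr with a custom deleter, and range-for over a plain C array.

                                                                                                                                                            #include <cstdio>
                                                                                                                                                            #include <memory>

                                                                                                                                                            // RAII: the FILE* is closed on every path out of the scope.
                                                                                                                                                            struct FileCloser { void operator()(FILE *f) const { if (f) std::fclose(f); } };
                                                                                                                                                            using FilePtr = std::unique_ptr<FILE, FileCloser>;

                                                                                                                                                            // Range-for works on plain C arrays; the reference-to-array parameter
                                                                                                                                                            // keeps the size, so no sizeof tricks or sentinels are needed.
                                                                                                                                                            int sum(const int (&xs)[4]) {
                                                                                                                                                                int total = 0;
                                                                                                                                                                for (int x : xs) total += x;
                                                                                                                                                                return total;
                                                                                                                                                            }

                                                                                                                                                            int main() {
                                                                                                                                                                FilePtr f(std::fopen("/etc/hostname", "r"));  // any handy file
                                                                                                                                                                int xs[4] = {1, 2, 3, 4};
                                                                                                                                                                std::printf("%d\n", sum(xs));                 // prints 10
                                                                                                                                                            }                                                 // f closed here, automatically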

                                                                                                                                                                        1. 2

                                                                                                                                                          The memory safety/RAII arguments are excellent in general, but there are extremely few scenarios in which embedded firmware running on an 8-bit MCU would be allocating memory in the first place, let alone freeing it! At this level RAII usually means allocating everything statically and releasing resources by catching fire, and not for performance reasons (edit: to be clear, I’ve worked on several projects where no code that malloc-ed memory would pass the linter, let alone get to a code review – where it definitely wouldn’t have passed). Consequently, you also rarely have to figure out the size of an array in “more-or-less fragile ways”, and it’s pretty hard to pass null pointers, too.

                                                                                                                                                                          The organisational and naming benefits of classes & co. are definitely a good non-generic argument and I’ve definitely seen a lot of embedded code that could benefit from that. However, they also hinge primarily on programmer discipline. Someone who ends up with IOWriteStreamClose(stream, true, kDefaultIOWriteStreamCloseMode) rather than stream_close(stream) is unlikely to end up with stream->close(), either. Also, code that generic is pretty uncommon per se. The kind of code that runs in 8-16 KB of ROM and 1-2 KB of RAM is rarely so general-purpose as to need an abstraction like an IOWriteStream.

                                                                                                                                                                          1. 2

                                                                                                                                                                            I agree that you don’t often allocate memory in a low-end MCU, but RAII is about resources, not just memory. For example, I wrote some C++ code for controlling an LED strip from a Cortex M0 and used RAII to send the start and stop messages, so by construction there was no way for me to send a start message and not send an end message in the same scope.
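
                                                                                                                                                            Something roughly like this, I imagine (the Spi type and every name here are stand-ins, not the actual code): the constructor sends the start frame and the destructor sends the end frame, so no path out of the scope can forget it.

                                                                                                                                                                #include <cstdint>
                                                                                                                                                                #include <cstdio>

                                                                                                                                                                // Stand-in for a real SPI driver.
                                                                                                                                                                struct Spi {
                                                                                                                                                                    void send_start()              { std::puts("start frame"); }
                                                                                                                                                                    void send_pixel(std::uint32_t) { std::puts("pixel"); }
                                                                                                                                                                    void send_end()                { std::puts("end frame"); }
                                                                                                                                                                };

                                                                                                                                                                class LedFrame {
                                                                                                                                                                    Spi &spi_;
                                                                                                                                                                public:
                                                                                                                                                                    explicit LedFrame(Spi &s) : spi_(s) { spi_.send_start(); }
                                                                                                                                                                    ~LedFrame() { spi_.send_end(); }
                                                                                                                                                                    LedFrame(const LedFrame &) = delete;
                                                                                                                                                                    LedFrame &operator=(const LedFrame &) = delete;
                                                                                                                                                                };

                                                                                                                                                                int main() {
                                                                                                                                                                    Spi spi;
                                                                                                                                                                    LedFrame frame(spi);       // "start frame"
                                                                                                                                                                    spi.send_pixel(0xFF0000);
                                                                                                                                                                }                              // "end frame", even on an early return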

                                                                                                                                                                            1. 1

                                                                                                                                                                              That’s one of the neater things that C++ allows for and I liked it a lot back in my C++ fanboy days (and it’s one of the reasons why I didn’t get why C++ wasn’t more popular for these things 15+ years ago, too). I realise this is more in “personal preferences” land so I hope this doesn’t come across as obtuse (I’ve redrafted this comment 3 times to make sure it doesn’t but you never know…)

                                                                                                                                                              In my experience, and speaking many years after C++11 happened, when I’m no longer as enthusiastic about it, using language features to manage hardware contexts is awesome right up until it’s not. For example, enforcing things like timing constraints in your destructors, so that they do the right thing when they’re automatically called at the end of the current scope no matter what happens inside the scope, is pretty hairy (e.g. some ADC needs to get the “sleep” command at least 50 µs after the last command, unless that command was a one-shot conversion, because it ignores commands while it converts, in which case you have to wait for a successful conversion, or a conversion timeout (in which case you have to clear the conversion flag manually), before sending a new command). This is just one example, but there are many other pitfalls (communication over bus multiplexers, finalisation that has to be coordinated across several hardware peripherals, etc.)

                                                                                                                                                                              As soon as you meet hardware that wasn’t designed so that it’s easy to code against in this particular fashion, there’s often a bigger chance that you’ll screw up code that’s supposed to implicitly do the right thing in case you forget to “release” resources correctly than that you’ll forget to release the resources in the first place. Your destructors end up being 10% releasing resources and 90% examining internal state to figure out how to release them – even though you already “know” everything about that in the scope at the end of which the destructor is implicitly called. It’s bug-prone code that’s difficult to review and test, which is supposed to protect you against things that are quite easily caught both at review and during testing.

                                                                                                                                                                              Also, even when it’s well-intentioned, “implicit behaviour” (as in code that does more things than the statements in the scope you’re examining tell you it does) of any kind is really unpleasant to deal with. It’s hard to review and compare against data sheets/application notes/reference manuals, logic analyser outputs and so on.

                                                                                                                                                              FWIW, I don’t think this is a language failure as in “C++ sucks”. (I’ve long come to my senses and I do think it sucks, but I don’t know of any language that easily gets these things right.) General-purpose programming languages are built to coordinate instruction execution on a CPU; I don’t know of any language that lets you say “call the code in this destructor 50 µs after the scope is destroyed”.

                                                                                                                                                                      2. 7

                                                                                                                                                        While you can of course put a 32-bit SoC on everything, in many cases 8-bitters are simpler to integrate into a hardware design. A very practical point is that many 8-bitters are still available in DIP packages, which makes assembling smaller runs easier.

                                                                                                                                                                        1. 5

                                                                                                                                                          Aside: can someone explain why anyone’s still using 8-bit MCUs? There are so many dirt-cheap and low-power 32-bit SoCs now; what advantage do the old 8-bit ones still have?

                                                                                                                                                          They’re dirt cheap and lower-power. 30 cents each isn’t an unreasonable price.

                                                                                                                                                                          1. 3

                                                                                                                                                                            You can get Cortex M0 MCUs for about a dollar, so the price difference isn’t huge. Depending on how many units you’re going to produce, it might be insignificant.

                                                                                                                                                            It’s probably a question of what you’re used to, but at least for me, working with a 32-bit device is a lot easier and quicker. Those development hours saved pay for the fancier MCUs, at least until the number of produced units gets large. Fortunately most of our products are in the thousands of units…

                                                                                                                                                                            1. 9

                                                                                                                                                              A 3x increase in price is huge if you’re buying lots of them for some product you’re making.

                                                                                                                                                                              1. 4

                                                                                                                                                                                Sure, but how many people buying in bulk are using an Arduino (the original point of comparison)?

                                                                                                                                                                                1. 2

                                                                                                                                                                  I mean, the example they gave was prototyping for a product…

                                                                                                                                                                              2. 6

                                                                                                                                                                If you’re making a million devices (imagine a phone charger sold at every gas station, corner store, and pharmacy in the civilized world), the roughly 70¢ gap between a 30¢ part and a $1 part comes to $700k, which could’ve bought a lot of engineer hours, and the extra power consumption adds up with that many devices too.

                                                                                                                                                                              3. 2

                                                                                                                                                                The license fee for a Cortex M0 is 1¢ per device. The core’s area is about the size of a pad on a cheap process, so the combined cost of licensing and fabrication is about as close as you can get to the minimum cost of producing any IC.

                                                                                                                                                                                1. 1

                                                                                                                                                                                  The license fee for a Cortex M0 is 1¢ per device.

                                                                                                                                                                                  This (ARM licensing cost) is an interesting datapoint I have been trying to get for a while. What’s your source?

                                                                                                                                                                                  1. 2

                                                                                                                                                                    A quick look at the Arm web site tells me my information is out of date. This was from Arm’s press release at the launch of the Cortex M0.

                                                                                                                                                                                    1. 1

                                                                                                                                                                                      Damn. Figures.

                                                                                                                                                                                2. 1

                                                                                                                                                                                  Could you name a couple of “good” 8-bit MCUs? I realized it’s been a while since I looked at them, and it would be interesting to compare my preferred choices to what the 8-bit world has to offer.

                                                                                                                                                                                3. 2

                                                                                                                                                                                  you only pay for what you use

                                                                                                                                                                  Unfortunately, many Arduino libraries do use these features, often at significant cost.

                                                                                                                                                                                  1. 2

                                                                                                                                                                                    I’ve not used Arduino, but I’ve played with C++ for embedded development on a Cortex M0 board with 16 KiB of RAM and had no problem producing binaries that used less than half of this. If you’re writing C++ for an embedded system, the biggest benefits are being able to use templates that provide type-safe abstractions but are all inlined at compile time and end up giving tiny amounts of code. Even outside of the embedded space, we use C++ templates extensively in snmalloc, yet in spite of being highly generic code and using multiple classes to provide the malloc implementation, the fast path compiles down to around 15 x86 instructions.
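
                                                                                                                                                                    For a concrete flavour of the kind of template I mean (the register address is hypothetical; check your part’s reference manual): a wrapper over a memory-mapped register that type-checks every access yet compiles down to a single volatile load or store.

                                                                                                                                                                        #include <cstdint>

                                                                                                                                                                        // Everything inlines away: no objects, no vtables, no runtime cost.
                                                                                                                                                                        template <std::uintptr_t Addr>
                                                                                                                                                                        struct Reg {
                                                                                                                                                                            static void write(std::uint32_t v) {
                                                                                                                                                                                *reinterpret_cast<volatile std::uint32_t *>(Addr) = v;
                                                                                                                                                                            }
                                                                                                                                                                            static std::uint32_t read() {
                                                                                                                                                                                return *reinterpret_cast<volatile std::uint32_t *>(Addr);
                                                                                                                                                                            }
                                                                                                                                                                        };

                                                                                                                                                                        // Hypothetical GPIO output data register on some Cortex-M part.
                                                                                                                                                                        using GpioOut = Reg<0x48000014>;

                                                                                                                                                                        void led_on() { GpioOut::write(GpioOut::read() | (1u << 5)); }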

                                                                                                                                                                                  1. 3

                                                                                                                                                                      I wonder how big the error would be if we assumed pi to be exactly 3.

                                                                                                                                                                                    1. 1

                                                                                                                                                                        Okay, so on DuckDuckGo, “pi * 25000000000” results in 78539816339.7, and when setting pi to 3 we get 75000000000.

                                                                                                                                                                        So this is a pretty big error: about 3539816339.7.

                                                                                                                                                                                      Did I calculate this correctly?

                                                                                                                                                                                      1. 4

                                                                                                                                                                          You calculated correctly, but the answer to “how big the error would be” is: it depends :). There are two ways to look at the error of a measurement or a computation. The first one is the one you tried above – the absolute magnitude – which is only half the picture.

                                                                                                                                                                          The other half is the relative error. The difference between 3 and 3.141592 is 0.141592, which is about 4.5% of the true value. A 4.5% error is good enough for some things (e.g. weighing scales at the farmers market) and really bad for others (e.g. high-precision scales for the chemical industry).

                                                                                                                                                                                        Both of these tell you useful things – you generally want to keep both in mind when making any kind of assessment.

                                                                                                                                                                          (Edit: oh yeah, one other interesting thing. Relative error is dimensionless, it’s just a percentage. Absolute error has whatever unit you’re measuring, and sometimes that tells you a bit about how big an error is in physical terms. In your case, if we’re talking, say, 3 539 816 339 molecules of water, that’s a tiny fraction of a water drop. If we’re talking 3 539 816 339 grains of rice, that’s about 410,000 cups of rice, which is, like, a lot of rice.)
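
                                                                                                                                                                          If you want to double-check the numbers in code (nothing assumed beyond the values already in this thread):

                                                                                                                                                                              #include <cstdio>

                                                                                                                                                                              int main() {
                                                                                                                                                                                  const double pi     = 3.141592653589793;
                                                                                                                                                                                  const double exact  = pi  * 25000000000.0;  // ~78539816339.7
                                                                                                                                                                                  const double approx = 3.0 * 25000000000.0;  //  75000000000
                                                                                                                                                                                  std::printf("absolute error: %.1f\n", exact - approx);  // ~3539816339.7
                                                                                                                                                                                  std::printf("relative error: %.2f%%\n",
                                                                                                                                                                                              100.0 * (exact - approx) / exact);          // ~4.51%
                                                                                                                                                                              }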

                                                                                                                                                                                        1. 2

                                                                                                                                                                            Ah! That’s good to know, thanks :) So generally I’d want to look at the relative error in addition to the absolute error when comparing two values.