Threads for toromtomtom

    1. 2

      I think one important point is that a debugger will only ever help you to find symptoms of a bug, not the bug itself. To find the bug and ultimately fix it, thinking through the code is always necessary. I believe that this is the meaning behind that Kernighan quote in the article.

    2. 5

      I haven’t used Haskell but it’s baffling to me that A) there are multiple proposals for breaking changes after all these years and B) they’re spread out across multiple releases! At least get all your breaking changes done at once so people can fix things and move on instead of this constant drip of breakage. Just looks really weird from the outside. Maybe it’s not so bad if you have more context.

      1. 6

        As a Haskell user, none of these changes are a big deal.


        Ongoing: Word8#

        Code will use less memory. This is probably a game changer for folks with a lot of FFI code to C, but otherwise this won’t affect most people.

        Upcoming: Remove Control.Monad.Trans.List

        I didn’t even know this existed. List is already a monad, so why would anybody ever import this?
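
        For what it’s worth, the plain list monad already covers the usual nondeterminism use case, and as I understand it the transformers ListT was infamously not even a law-abiding transformer. A minimal sketch of the plain version:

          -- The plain list monad expresses nondeterministic choice directly:
          pairs :: [(Int, Int)]
          pairs = do
            x <- [1, 2, 3]
            y <- [10, 20]
            return (x, y)
          -- pairs == [(1,10),(1,20),(2,10),(2,20),(3,10),(3,20)]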

        Upcoming: Remove Control.Monad.Trans.Error

        This module has had a big deprecation warning on it for like 3 years telling you to use ExceptT.
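
        For anyone still migrating, the switch is mostly mechanical; a hedged sketch (safeDiv is a made-up example, not from the proposal):

          import Control.Monad.Trans.Except (ExceptT, runExceptT, throwE)

          -- What used to be ErrorT String IO a is now ExceptT String IO a.
          safeDiv :: Int -> Int -> ExceptT String IO Int
          safeDiv _ 0 = throwE "division by zero"
          safeDiv x y = pure (x `div` y)

          main :: IO ()
          main = runExceptT (safeDiv 10 2) >>= print  -- prints Right 5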

        Upcoming: Monomorphise Data.List

        Yeah, I thought it was weird these functions were polymorphic. This is a good change. Probably won’t break my code because I imported Foldable/Traversable when I wanted the polymorphic variants, but some people might have to change imports after this.
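
        Assuming the proposal lands as described, keeping the polymorphic behaviour should be just an import swap; a sketch (function names are illustrative):

          import Prelude hiding (length)
          import Data.Foldable (length)  -- polymorphic: works on any Foldable
          import qualified Data.List     -- list-only after the change

          lenList :: [Int] -> Int
          lenList = Data.List.length

          lenAny :: Foldable t => t a -> Int
          lenAny = length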

        Planned: forall becomes a keyword

        If you named your variables “forall” then you are in a tiny group of people affected by this.

        Planned: remove return from Monad

        Presumably the top level definition is still in the Prelude module, so no big deal. This only affects people who override it incorrectly and break the laws it obeys.
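
        A sketch of my understanding of the plan: once return stops being a class method, it can live on as an ordinary top-level alias, with nothing left to override:

          import Prelude hiding (return)

          -- No longer a class method anyone can define incorrectly;
          -- just a synonym for pure.
          return :: Applicative f => a -> f a
          return = pure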

        Planned: remove mappend from Monoid

        Ditto. This only affects people who wanted to overload an optimized version, which probably isn’t common.
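
        If I understand the current class correctly, mappend already defaults to (<>), so a type defined the modern way never has to mention it; a sketch (MaxInt is made up):

          newtype MaxInt = MaxInt Int

          instance Semigroup MaxInt where
            MaxInt a <> MaxInt b = MaxInt (max a b)

          instance Monoid MaxInt where
            mempty = MaxInt minBound
            -- no mappend needed: the default is mappend = (<>)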

        Planned: remove (/=) from the Eq class

        Ditto. This only affects people who override it incorrectly.

        Planned: disable StarIsType by default

        This disables a rarely used kind-level identifier which looks like an operator, but isn’t. Instead you use a readable identifier. That’s a good thing. It probably won’t affect much of my code.
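
        A before/after sketch of what that looks like (Proxy is just an illustrative type):

          {-# LANGUAGE NoStarIsType #-}
          {-# LANGUAGE KindSignatures #-}

          import Data.Kind (Type)

          -- With StarIsType one would write:  data Proxy (a :: *) = Proxy
          -- where '*' looks like an operator but is the kind of ordinary types.
          data Proxy (a :: Type) = Proxy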

        1. 4

          This.

          I get more breaking changes from template-haskell than from anywhere else in the language or base.

      2. 3

        I feel like it’s nicer when you just handle breaking changes one by one over time. Python 3 was the “break everything at once” approach, where you get hit with a deluge of changes all at once.

      3. 2

        It kind of fits Haskell’s slogan of “avoid success at all costs”. It seems that the people behind Haskell prioritize the inherent purity of the language over stability. I actually sympathize with this way of thinking, but that is easy for me to say, as I am not using Haskell in any serious way.

      4. 1

        One of Haskell’s claims to fame is its ease of refactoring so this is really just showboating.

        “Hey, look it’s so easy to refactor in Haskell, we are going to break the language all the time.”

    3. 2

      If you know German, Schönfinkel’s 1924 paper is a fascinating read.

      1. 2

        Nice tip, thanks.

        I googled it and it is available in English under the title “On the Building Blocks of Mathematical Logic”. Quite an interesting read indeed.

        I love when I can get a historic perspective on mathematics.

        I bumped into Phil Wadler at a talk and asked him where type theory came from. He was genuinely interested when I asked if it came from Russell. And that’s Wadler - he’s forgotten more about type theory than I can ever hope to learn.

    4. 1

      How is FreeBSD’s Linux emulation these days? Is anyone running Linux-based Docker/OCI containers on FreeBSD in production (without running Linux itself through virtualization)?

      1. 4

        It’s improved a lot in 13. It doesn’t support seccomp-bpf though, so you can’t run the Linux container management programs. It’s probably good enough to run a lot of Linux containers in jails, but the orchestration code isn’t there yet.

      2. 3

        Stupid not a FreeBSD user question: Why would you do that? Aren’t jails the moral equivalent? Or would one want to run Docker for simple software distribution convenience purposes?

        1. 4

          Docker / OCI containers in common use conflate a bunch of things:

          • A way of distributing a self-contained userspace thing.
          • A way of building a self-contained userspace thing using layers of filesystem overlay.
          • A way of orchestrating local deployment of self-contained userspace things.
          • A way of isolating self-contained userspace things.

          Jails provide the fourth of these (in a significantly lower-overhead way than the horrible mess of cgroups and seccomp-bpf on Linux), but they don’t provide any of the other bits. Between ZFS and jails, FreeBSD has great mechanisms for running container-like things, but doesn’t yet have good tooling for building and deploying them.

          The containerd port and runj program linked by @kwait are likely to end up with the right things here. That should make it possible to build layers, package them up and deploy them on FreeBSD. The bit that isn’t currently getting any investment is making runj able to deploy containers that are packaged as Linux binaries running on the FreeBSD Linux ABI layer.

          1. 2

            There is also Bastille, which looks pretty nice. IIUC it builds on FreeBSD jails and takes care of the distribution and deployment aspect (your first point).

          2. 2

            in a significantly lower-overhead way than the horrible mess of cgroups and seccomp-bpf on Linux

            As I understand it, the main Linux facility for this kind of isolation is namespaces. I’m not sure how seccomp-bpf found its way into container tools, but presumably “for extra security”.

            Namespaces should have the same kind of overhead (basically none) as jails. The main difference is that namespace API is additive (you tell it “isolate PIDs” then “isolate the networking” and so on, building the sandbox piece by piece) while the jails API is subtractive (you kinda just start with a package deal of full isolation, but you can opt out of some parts – set the FS root to /, or the networking to the host stack). Namespaces are more flexible, but much harder to use securely.

        2. 3

          It would be nice to be able to run the FreeBSD kernel, to have ZFS without entering a licensing gray area if nothing else, while being able to take advantage of both all the software available for Linux and the accumulated tooling and practices around Docker (especially when it comes to building container images). Since a Docker image is basically a JSON manifest and a bunch of tarballs, maybe it wouldn’t be too hard to write a tool that could fetch and unpack a Docker image and run it in a FreeBSD jail.

            1. 2

              That is really, really rad. I need this in my life.

              1. 4

                runj is my project! It’s nice to see other folks excited about it. There’s some previous discussion here. No Linux support yet; I’m focusing on a FreeBSD userland first.

          1. 3

            Given that ZFS is now deployed on millions of Ubuntu installs around the world, I’m not sure how much weight I’d place on said gray area.

            YMMV however.

            1. 2

              I just never understood this. The CDDL is not compatible with the GPL, and this prevents ZFS from being part of the same codebase, though it can be installed as an external module, and from what I understand ZFS on Linux works fine.

              What’s the legal grey area here? How is this different from installing an MIT (or any other GPL-incompatible) licensed project on your Ubuntu machine?

              1. 5

                As I understand it (I am not a lawyer, this is not legal advice), the issue comes from a bunch of different things:

                First, the GPL says that any software derived from GPL’d software must impose the conditions of the GPL on the combined work and may not impose any additional conditions. This is usually paraphrased as saying that it must be GPL’d, but that’s not actually the case. It’s fine to ship a BSD-licensed file as part of a GPL’d project. The GPL also talks about ‘mere aggregation’. Just distributing two programs on the same medium is explicitly excluded from the GPL, but linking them may trigger the license terms.

                Second, there’s a bit of a grey area about exactly what the GPL applies to in Linux. Linux is distributed under GPLv2, but the source tree includes a note written by Linus (which is not part of the license and not written as a legal document) that the GPL obviously doesn’t apply across the system call boundary. Some internal kernel symbols are also exposed as public non-GPL-propagating symbols, but that is not actually part of the license. To make this more fun, some bits of code in the kernel were released as GPL’d code elsewhere and then added to the Linux kernel, so it’s possible for the copyright holders of this code to assert that they don’t believe these exemptions apply to their code. This is somewhat moot for ZFS because it uses GPL-tainted kernel symbols.

                Third, the GPL is a distribution license. This means that there are only two things that it can prevent you from doing:

                • Distributing a GPL’d project
                • Distributing something that is a derived work of a GPL’d project.

                Typically, the work-around that companies such as nVidia use is to write a driver that is developed completely independently of the Linux kernel and is therefore not a derived work of the Linux kernel, then write a shim layer that is a derived work of the Linux kernel and is able to load their non-GPL’d driver. They cannot distribute the two together (because the GPL would kick in and prevent distribution of a thing where the combined work does not grant all of the permissions found in the GPL), but they can distribute their own code (they own it) and the shim (by itself, it is GPL compliant). A customer can then acquire both and link them together (the GPL is explicitly not a user license: once you have received the code, you are free to use it in any way, including linking it with things where you are not permitted to release the result).

                So using ZFS on Linux is fine; the tricky bit is how you distribute the CDDL’d component and the Linux kernel together.

                My general view of Linux legal questions is that the vast majority of users are doing something that could be regarded as a violation of the license but no one with standing to sue has any incentive to torpedo the ecosystem.

                1. 1

                  Thanks for the detailed answer. This clears it up. 🙏

    5. 4

      So? One bad coder committed some bad code for money to FreeBSD HEAD. Other people saw how bad that code really was and tried to fix it, though there wasn’t much time to fix it before the next release. Doesn’t it look like a good case of code review? At least we know there are people looking at the committed code.

      It looks like bad practice for a for-profit company to contract a third party to port some code without informing any of the original developers. In this case the company picked the wrong person.

      1. 20

        The point of code review is to prevent bad code from ending up in the tree in the first place. If you have bad code in your master branch and just barely avoided shipping a release with the bad code, that’s not a successful application of code review.

        I agree that a lot of the blame ends up on Netgate here. Calling the discussion of the bad code a “zero-day disclosure” is especially egregious.

        1. 5

          a lot of the blame ends up on Netgear here

          The company in question is “Netgate”. Unless there’s been a very stealthy acquisition, the two are not related at all.

          1. 3

            Sorry, I misremembered from the article. Fixed.

      2. 14

        Doesn’t it look like a good case of code review?

        Not even a tiny bit. It was in a release candidate, and when the code review occurred and found problems, the sponsor of the code accused the developers who reviewed and tried to fix the code of releasing 0-days and worse, while at the same time claiming out of the other side of his mouth that the problems they found weren’t real anyway.

        It is an example of code review narrowly preventing something that was grossly unfit for purpose from landing in a final release as opposed to only a release candidate.

        I can’t find a way to stretch this so that it looks like a “good case” of anything at all.

        1. 1

          I wonder how much such a release candidate is actually tested. The article mentions that there are bug reports regarding the if_wg code in the pfSense project. The same bugs should have occurred in the FreeBSD release candidate.

      3. 6

      One of the points made by the author (and also reiterated on a podcast he co-hosts) is that there is no formal code review process to prevent something like this from happening again. The other co-host on the podcast made the point that FreeBSD developers are for the most part unpaid volunteers who have no way of forcing anyone to do the reviews. I think both are valid points.

    6. 3

      The article has a section titled “Why are parser combinators useful?”, but I still don’t get why, compared to the alternative of parser generators. Is the advantage that they are compositional?

      1. 3

        Another advantage of parser combinators is that you can use them without leaving your programming language. You can also add new combinators on the fly.

        One advantage of parser generators over parser combinators is their ability to analyze the input grammar. Parser generators will stop you from building a parser that might recurse infinitely (e.g., when using left-recursive rules in a top-down parser), whereas parser combinators won’t.
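
        To make the infinite-recursion point concrete, a sketch using parsec-style combinators (the grammar is a made-up example): the rule refers to itself before consuming any input, so running it never terminates, whereas a parser generator would reject the grammar while building its tables.

          import Text.Parsec
          import Text.Parsec.String (Parser)

          -- Grammar:  expr ::= expr '+' digit | digit
          -- Transcribed naively; expr recurses before consuming anything,
          -- so parse expr "" "1+2" loops (in practice, a stack overflow).
          expr :: Parser String
          expr = try ((\e _ d -> e ++ [d]) <$> expr <*> char '+' <*> digit)
                 <|> (: []) <$> digit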

      2. 4

        Yeah, the article is a big mess. The advantage of parser combinators is their compositionality, as you said, which in turn makes them easy to build and reason about. It’s also very elegant that they can return lists of interpretations (and an empty list if the parser failed to parse the data), but that is pretty advanced.
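
        To make the “lists of interpretations” idea concrete, here is a minimal hand-rolled sketch of that style (not any particular library):

          import Control.Applicative (Alternative (..))

          -- Each parse returns every possible interpretation;
          -- the empty list means failure.
          newtype Parser a = Parser { runParser :: String -> [(a, String)] }

          instance Functor Parser where
            fmap f (Parser p) = Parser $ \s -> [ (f a, rest) | (a, rest) <- p s ]

          instance Applicative Parser where
            pure a = Parser $ \s -> [(a, s)]
            Parser pf <*> Parser pa =
              Parser $ \s -> [ (f a, s2) | (f, s1) <- pf s, (a, s2) <- pa s1 ]

          instance Alternative Parser where
            empty = Parser (const [])
            Parser p <|> Parser q = Parser $ \s -> p s ++ q s

          item :: Parser Char
          item = Parser $ \s -> case s of { (c : rest) -> [(c, rest)]; [] -> [] }

          -- Small parsers compose into bigger ones:
          two :: Parser (Char, Char)
          two = (,) <$> item <*> item
          -- runParser two "abc" == [(('a','b'),"c")]
          -- runParser two "a"   == []   (failure is just the empty list)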

        As so often with functional approaches, though, the downside is that you quickly end up with huge call-stacks.

    7. 2

      Regarding static site generators: Any recommendations for themes or plugins for Jekyll that are compatible with minimalism?

      1. 2

        I’ve used Hyde before – it’s reasonably lightweight (“About” page transfers ~30KB), and looks good on desktop and mobile (it’s “responsive”).

    8. 2

      For characterizing what a compiler actually is, it might be helpful to remember that compilers themselves have to be written (and, potentially, compiled) in some language. Here is a blog post on that topic.

    9. 18

      What this rant does not focus on: It’s a good thing that these use cases are broken. Wayland prohibits your desktop applications from capturing keystrokes or recording other apps’ screens by default. X’s security model and low-level graphics APIs are severely outdated, and Wayland promises not only to be more secure but also to expose cleaner APIs at the lower level (rendering, etc.).

      These use cases are or will be supported, though, this time via standardized interfaces, many of which already exist and are implemented in today’s clients.

      X is based on a 30-year-old code base and an outdated model (who runs server-side display servers these days?). Of course switching from X to Wayland will break applications, and until they are rewritten with proper Wayland support they will stay that way. For most X11 apps there is even Xwayland, which allows you to run X11 apps under Wayland if you must.

      1. 28

        What this rant does not focus on: It’s a good thing that these use cases are broken

        You should have more compassion for users and developers who have applications that have worked for decades, are fully featured, and are being asked to throw all of that away. For replacements that are generally very subpar. With no roadmap for when parity will be reached. For a system that does not offer any improvements they care about (you may care about this form of security; not everyone does).

        I couldn’t care less whether I see Xorg or wayland when I run ps. And I doubt that most of the people who are complaining really care about X vs Wayland. They just don’t want their entire world broken for what looks to them like no reason at all.

        1. 5

          I’m not saying that those apps should be thrown away immediately. Some of these work under XWayland (I sometimes stream using OBS and it records games just fine).

          If your application really does not run under XWayland, then run an X server! X is not going to go away tomorrow, rather it is being gradually replaced.

        I’m simply explaining that there are good reasons some applications don’t work on Wayland. I’m a bit tired of hearing “I switched to Wayland and everything broke” posts: look behind the curtain and understand why things broke.

      2. 17

        I’m kind of torn on the issue.

        On the one hand, the X security model is clearly broken. Like the UNIX security model, it assumes that every single application the user wants to run is 100% trusted. It’s good that Wayland allows for sandboxing, and “supporting the use cases, but this time via standardized interfaces” which allow for a permission system sounds good.

        On the other hand, there’s clearly no fucking collaboration between GNOME and the rest of the Wayland ecosystem. There’s a very clear rift between the GNOME approach which uses dbus for everything and the everything-else approach which builds wayland protocol extensions for everything. There doesn’t seem to be any collaboration, and as a result, application authors have to choose between supporting only GNOME, supporting everything other than GNOME, or doing twice the work.

        GNOME also has no intention of ever supporting applications which can’t draw their own decorations. I’m not opposed to the idea of client-side decorations, they’re nice enough in GTK applications, but it’s ridiculous to force all the smaller graphics libraries which just exist to get a window on the screen with a GL context - like SDL, GLFW, GLUT, Allegro, SFML, etc - to basically reimplement GTK just to show decorations on GNOME on Wayland. The proposed solution is libdecorations, but that seems to be at least a decade away from providing a good, native-feeling experience.

        This isn’t a hate post. I like Wayland and use Sway every day on my laptop. I like GNOME and use it every day on my desktop (though with X because nvidia). I have written a lot of wayland-specific software for wlroots-based compositors. But there’s a very clear rift in the wayland ecosystem which I’m not sure if we’ll ever solve. Just in my own projects, I use the layer-shell protocol, which is a use-case GNOME probably won’t ever support, and the screencopy protocol, which GNOME doesn’t support but provides an incompatible dbus-based alternative to. I’m also working on a game which uses SDL, which won’t properly support GNOME on Wayland due to the decorations situation.

        1. 13

          the X security model is clearly broken

          To be honest I feel the “brokenness” of the security model is vastly overstated. How many actual exploits have been found with this?

          Keyloggers are a thing, but it’s not like Wayland really prevents that. If I have a malicious application then I can probably override firefox to launch something that you didn’t intend (via shell alias, desktop files) or use some other side-channel like installing an extension in ~/.mozilla/firefox, malicious code in ~/.bashrc to capture ssh passwords, etc. Only if you sandbox the entire application is it useful, and almost no one does that.

          1. 10

            This isn’t a security vulnerability which can be “exploited”, it’s just a weird threat model. Every single time a user runs a program and it does something to their system which they didn’t want, that’s the security model being “exploited”.

            You might argue that users should never run untrusted programs, but I think that’s unfair. I run untrusted programs; I play games, those games exist in the shape of closed-source programs from corporations I have no reason to trust. Ideally, I should be able to know that due to the technical design of the system, those closed source programs can’t listen to me through my microphone, can’t see me through my webcam, can’t read my keyboard inputs to other windows, and can’t see the content in other windows, and can’t rummage through my filesystem, without my expressed permission. That simply requires a different security model than what X and the traditional UNIX model does.

            Obviously Wayland isn’t enough on its own, for the reasons you cite. A complete solution does require sandboxing the entire application, including limiting what parts of the filesystem it can access, which daemons it can talk to, and what hardware it can access. But that’s exactly what Flatpak and Snaps attempt to do, and we can imagine sandboxing programs like Steam as well to sandbox all the closed-source games. However, all those efforts are impossible as long as we stick with X11.

            1. 3

              Every single time a user runs a program and it does something to their system which they didn’t want, that’s the security model being “exploited”.

              If you think a permission system is going to solve that, I’m going to wish you good luck with that.

              Ideally, I should be able to know that due to the technical design of the system, those closed source programs can’t listen to me through my microphone, can’t see me through my webcam, can’t read my keyboard inputs to other windows, and can’t see the content in other windows, and can’t rummage through my filesystem, without my expressed permission.

              Ah yes, and those closed-source companies will care about this … why exactly?

              They will just ask for every permission and won’t run otherwise, leaving you just as insecure as before.

              But hey, at least you made the life of “trustworthy” applications worse. Good job!

              But that’s exactly what Flatpak and Snaps attempt to do […]

              Yes, letting software vendors circumvent whatever little amount of scrutiny software packagers add, that will surely improve security!

              1. 7

                If you think a permission system is going to solve that, I’m going to wish you good luck with that.

                It… will though. It’s not perfect, but it will prevent software from doing things without the consent of the user. That’s the goal, right?

                You may be right that some proprietary software vendors will just ask for every permission and refuse to launch unless given those permissions. Good. That lets me decide between using a piece of software with the knowledge that it’ll basically be malware, or not using that piece of software.

                In reality though, we don’t see a lot of software which takes this route from other platforms which already have permission systems. I’m not sure I have ever encountered a website, Android app or iOS app which A) asked for permissions to do stuff it obviously didn’t need, B) refused to run unless given those permissions, and C) wasn’t obviously garbage.

                What we do see though is that most apps on the iOS App Store and websites on the web, include analytics packages which will gather as much info on you as possible and send it back home as telemetry data. When Apple, for example, put the contacts database behind a permission wall, the effect wasn’t that every app suddenly started asking to see your contacts. The effect was that apps stopped snooping on users’ contacts.

                I won’t pretend that a capability/permission system is perfect, because it isn’t. But in the cases where it has already been implemented, the result clearly seems to be improved privacy. I would personally love to be asked for permission if a game tried to read through my ~/.ssh, access my webcam or record my screen, even if just to uninstall the game and get a refund.

                Yes, letting software vendors circumvent whatever little amount of scrutiny software packagers add, that will surely improve security!

                I mean, if you wanna complain about distros which use snaps and flatpaks for FOSS software, go right ahead. I’m not a huge fan of that myself. I’m talking about this from the perspective of running closed source software or software otherwise not in the repos, where there’s already no scrutiny from software packagers.

              2. 3

                There’s probably evidence from existing app stores on whether users prefer to use software that asks for fewer permissions. There certainly seems to be a market for that (witness all the people moving to Signal).

              3. 3

                But hey, at least you made the life of “trustworthy” applications worse. Good job!

                “Trustworthy software” is mostly a lie. Every application is untrustworthy after it gets remotely exploited via a security bug, and they all have security bugs. If we lived in a world without so much memory-unsafe C, then maybe that wouldn’t be true. But we don’t live in that world so it’s moot.

                Mozilla has its faults, but I trust them enough to trust that Firefox won’t turn on my webcam and start phoning home with the images. I could even look at the source code if I wanted. But I’d still like Firefox sandboxed away from my webcam because Firefox has memory bugs all the time, and they’re probably exploitable. (As does every other browser, of course, but I trust those even less.)

            2. 1

              A complete solution does require sandboxing the entire application, including limiting what parts of the filesystem it can access, which daemons it can talk to, and what hardware it can access. But that’s exactly what Flatpak and Snaps attempt to do

              But that’s quite limited sandboxing, I think? To be honest I’m not fully up-to-speed with what they’re doing exactly, but there’s a big UX conundrum here because write access to $HOME allows side-channels, but you also really want your applications to do $useful_stuff, which almost always means accessing much (or all of) $HOME.

              Attempts to limit this go back a long way (e.g. SELinux), and while this works fairly well for server applications, for desktop applications it’s a lot harder. I don’t really fancy frobbing with my config just to save/access a file to a non-standard directory, and for non-technical users this is even more of an issue.

              So essentially I don’t really disagree with:

              I should be able to know that due to the technical design of the system, those closed source programs can’t listen to me through my microphone, can’t see me through my webcam, can’t read my keyboard inputs to other windows, and can’t see the content in other windows, and can’t rummage through my filesystem, without my expressed permission. That simply requires a different security model than what X and the traditional UNIX model does.

              and I’m not saying that the Wayland model isn’t better in theory (aside from some pragmatic implementation problems, which should not be so casually dismissed as some do, IMHO), but the actual practical security benefit it gives you right now is quite limited, and I think that will remain the case for the foreseeable future, as it really needs quite a paradigm shift in various areas, and I don’t really see that happening on Linux any time soon.

              1. 2

                there’s a big UX conundrum here because write access to $HOME allows side-channels, but you also really want your applications to do $useful_stuff, which almost always means accessing much (or all of) $HOME.

                This is solved on macOS with powerboxes. The Open and Save file dialogs actually run as a separate process and update the application’s security policy dynamically to allow it to access files that the user has selected, but nothing else. Capsicum was designed explicitly to support this kind of use case; it’s a shame that NIH prevented Linux from adopting it.

                1. 1

                  This sounds like a good idea! I’d love to see that in the X11/Wayland/Unix ecosystem, even just because I hate that awful GTK file dialog for so many reasons and swapping it out with something better would make my life better.

                  Still; the practical security benefit I – and most users – would get from Wayland today would be very little.

              2. 2

                I don’t really fancy frobbing with my config just to save/access a file to a non-standard directory

                If a standard file-picker dialog were used, it could be granted elevated access & automatically grant the calling application access to the selected path(s).

          2. 5

            I think “broken” is too loaded; “no longer fit for purpose” might be better.

          3. 2

            Well, the security model is simply broken.

            I agree that a lot of focus is put on security improvements compared to Wayland’s other advantages (tear-free rendering being the one most important to me). But it’s still an advantage over X, and I like software which is secure-by-default.

          4. 1

            How many actual exploits have been found with this?

            They were very common in the ‘90s, when folks ran xhost +. Even now, it’s impossible to write a secure password entry box in X11, so remember that any time you type your password into a graphical sudo equivalent, anything currently connected to your X server could capture it. The reason it’s not exploited in the wild is more down to the fact that *NIX distros don’t really do much application sandboxing, so an application that has convinced a user to run it already has pretty much all of the access it needs for anything malicious it wants to do. It’s also helped by the fact that most *NIX users only install things from trusted repositories, where it’s less likely that you’ll find malware, but expect that to change if installing random snap packages from web sites becomes common.

        2. 4

          It’s good that Wayland allows for sandboxing

          If I wanted to sandbox an X application, I’d run it on a separate X server. Maybe even an Xnest kind of thing.

          I’ve never cared to do this (if I run xnest it is to test network transparency or new window managers or something, not security), so I haven’t tried, but it seems to me it could be done fairly easily if someone really wanted to.

        3. 2

          Whoa, I’ve never heard about the GNOME issues (mostly because I’m in a bubble including sway and emersion, and what they do looks sensible to me). That sucks though, I hope they somehow reconcile.

          Regarding Nvidia I think Simon mentioned something that hinted at them supporting something that has to do with Wayland, but I could just as easily have misunderstood.

      3. 9

        Wayland prohibits your desktop applications from capturing keystrokes or recording other apps’ screens by default

        No, it doesn’t. Theoretically it might enable doing this by modifying the rest of the system too, but in practice (and certainly the default environment) it is still trivial for malware to keylog and record screen on current Wayland desktop *nix installs.

        1. 4

          it is still trivial for malware to keylog and record screen on current Wayland desktop *nix installs.

          I don’t think that’s true. The linked article says recording screens and global hotkeys is “broken” by Wayland. How can it be so trivial for “malware” to do something, and absolutely impossible for anyone else?

          Or is this malware that requires I run it under sudo?

          1. 10

            It’s the difference between doing something properly and just doing it. Malware is happy with the latter, while most non-malware users are only happy with the former.

            There are numerous tricks you can use if you are malware, from using LD_PRELOAD to inject code and read events first (since everyone uses libwayland, this is really easy), to directing clients to connect to your MITM Wayland server, to just using a debugger, and so on and so forth. None of these are really Wayland’s fault, but their existence means there is no meaningful security difference on current desktops.

            1. 2

              I don’t know if I agree that the ability to insert LD_PRELOAD in front of another application is equivalent to sending a bytestring to a socket that is already open, but at least I understand what you meant now.

        2. 5

          I’m sick of this keylogger nonsense.

          X11 has a feature which allows you to use the X11 protocol to snoop on keys being sent to other applications. Wayland does not have an equivalent feature.

          Using LD_PRELOAD requires being on the other side of an airtight hatch. It straight-up requires having arbitrary code execution, which you can use to compromise literally anything. This is not Wayland’s fault. Wayland is a better lock for your front door. If you leave your window open, it’s not Wayland’s fault when you get robbed.

          1. 7

            Indeed, it’s not Wayland’s fault, and I said as much in response to the only reply above yours, an hour and 20 minutes before you posted this reply. You’re arguing against a straw man.

            What is the case is that the “airtight hatch” between things that can interact with Wayland and things that can do a “giant set of evil activities” has been propped wide open pretty much everywhere on desktop Linux, and isn’t reasonably easy to close given the rest of desktop software.

            If you were pushing “here’s this new desktop environment that runs everything in secure sandboxes” and it happened to use Wayland, there would be the possibility of a compelling security argument here. Instead what I see is people making this security argument in a way that could give people the impression it secures things when it doesn’t actually close the barn doors, which is outright dangerous.

            In fact, as far as I know the only desktop *nix OS that does sandbox everything is QubesOS, and it looks like they currently run a custom protocol on top of an X server…

            1. 3

              Quoting you:

              Wayland prohibits your desktop applications from capturing keystrokes or recording other apps’ screens by default

              No, it doesn’t.

              Yes, it does. Wayland prohibits Wayland clients from using Wayland to snoop on other Wayland clients. X11 does allow X11 clients to use X11 to snoop on other X11 clients.

              Other features of Linux allow you to circumvent this within the typical use case, but that’s a criticism of those features more so than of Wayland, and I’m really tired of it being trotted out in Wayland discussions. Wayland has addressed its part of the problem. Now it’s on the rest of the ecosystem to address their parts. Why do you keep dragging it into the Wayland discussion when we’ve already addressed it?

              1. 7

                This

                Wayland prohibits your desktop applications from capturing keystrokes or recording other apps’ screens by default

                And this

                Wayland prohibits Wayland clients from using Wayland to snoop on other Wayland clients.

                Are two very different statements. The latter partially specifies the method of snooping, the former does not.

                Why do you keep dragging it into the Wayland discussion when we’ve already addressed it?

                I do not; I merely reply to incorrect claims, brought up in support of Wayland, that it solves a problem it does not. It might one day become part of a solution to that problem. It might not. It certainly doesn’t solve it by itself, and it isn’t even part of a solution to that problem today.

      4. 4

        X’s design has many flaws, but those flaws are well known and documented, and workarounds and extensions exist to cover a wide range of use cases. Wayland may have a better design regarding modern requirements, but has a hard time catching up with all the work that was invested into making X11 work for everyone over the last decades.

        1. 3

          X’s design has many flaws, but those flaws are well known and documented, and workarounds and extensions exist to cover a wide range of use cases.

          Once mere flaws become security issues it’s a different matter though.

          [Wayland] has a hard time catching up with all the work that was invested into making X11 work for everyone over the last decades.

            This may be true now, but Wayland is maturing as we speak. New tools are being developed, and there isn’t much missing in the realm of protocol extensions to cover the most-wanted existing X features. I see Wayland surpassing X in the next two or three years.

          1. 2

            Yeah, I started to use sway on my private laptop and am really happy with it. Everything works flawlessly, in particular connecting an external HiDPI display and setting different scaling factors (which does not work in X). However, for work I need to be able to share my screen in video calls occasionally and record screencasts with OBS, so I’m still using X there.

      5. 4

        I wonder if X’s security model being “outdated” is partly due to the inexorable slide away from user control. If all your programs are downloaded from a free repo that you trust, you don’t need to isolate every application as if it’s out to get you. Spotify and Zoom on the other hand are out to get you, so a higher level of isolation makes sense, but I would still prefer this to be the exception rather than the rule.

        In practice 99.9% of malicious code that is run on our systems is done via the web browser, which has already solved this problem, albeit imperfectly, and only after causing it in the first place.

        1. 4

          If all your programs are downloaded from a free repo that you trust, you don’t need to isolate every application as if it’s out to get you

          I completely agree, as long as all of my programs are completely isolated from the network and any other source of untrusted data, or are formally verified. Otherwise, I have to assume that they contain bugs that an attacker could exploit and I want to limit the damage that they can do. There is no difference between a malicious application and a benign application that is exploited by a malicious actor.

          1. 1

            all of your programs are completely isolated from the network?

            how are you posting here?

            1. 2

              They’re not, that’s my point and that’s why I’m happy that my browser runs sandboxed. Just because I trust my browser doesn’t mean that I trust everyone who might be able to compromise it.

              1. 1

                that makes sense for a browser, which is both designed to run malicious code and too complex to have any confidence in its security. but like i said i would prefer cases like this to be the exception. if the rest of your programs are relatively simple and well-tested, isolation may not be worth the complexity and risk of vulnerabilities it introduces. especially if the idea that your programs are securely sandboxed leads you to install less trustworthy programs (as appears to be the trend with desktop linux).

                1. 2

                  Okay, what applications do you run that never consume input from untrusted sources (i.e. do not connect to the network or open files that might come from another application)?

                  1. 1

                    I don’t think you are looking at this right. The isolation mechanism can’t be 100% guaranteed free of bugs any more than an application can. Your rhetorical question is pretty far from what I thought we were discussing so maybe you could rephrase your argument.

      6. 1

        This argument seems similar to what happened with cinnamon-screensaver a few weeks ago:

        https://github.com/linuxmint/cinnamon-screensaver/issues/354#issuecomment-762261555 (responding to https://www.jwz.org/blog/2021/01/i-told-you-so-2021-edition/)

        It’s a good thing for security (and maybe for users in the long term, once they work again) that these use cases are broken, but it is not a good thing for users in the short term that these use cases don’t work on Wayland.

    10. 2

      Just as an FYI, that is a pretty dated document. I want to say I first came across it well over a decade ago. I am not sure whether John has kept it current.

      Then again, one of the charms of the base R system is that what was said then will “almost surely” still be valid today, as compatibility is a very important component of the base R system.

      1. 1

        Thanks for the comment. Do you happen to know a more up-to-date document with a similar premise?

        1. 2

          Check out https://learnxinyminutes.com/docs/r/ (not sure if it is any better, I just know about the learnxinyminutes site)

        2. 2

          Fair riposte. These days it is a little complicated because the R world is being altered and extended by what is being called the Tidyverse. Which is many things, among them many good ones, e.g. a focus on (first-time) users, on consistency, and some other things. At the same time, a few of us who had known R and S from way before this came along are a little less enthralled by its focus on “users as opposed to programmers” (per the “do not use tidyverse in packages” recommendation) and the somewhat different point it takes on the “stability versus innovation” continuum.

          To me some of the standard texts and dictums still rule. One of which is (quoting John Chambers here) the focus on turning “(data analytics) users into programmers”. A good tradition to uphold, and a split between “users” and “programmers” seems ill-advised to me.

          If you can find it in a local library, the Venables and Ripley book “S Programming” is pretty good (even if old). As are the 2008 book by Chambers, “Software for Data Analysis”, and his 2016 book “Extending R”.

          Hope this helps.

        3. 2

          FWIW I write R occasionally and the doc still looks good, except that I use tidyverse now [1]. But in general you don’t have to know “modern” stuff to get things done.

          I think of the language mostly as JavaScript with vectorized operations and data types. In fact I think there is a lot of code that is both valid JS and R, like

           f = function(x, y) { return(x + y) }; f(3, 5)
          

          I just tried that in both R and JS and it works!

          The <- for assignment and the $ for member are the things that stand out to R newbies. But I usually use = for assignment.

          Two big gotchas are options(stringsAsFactors=FALSE), which should be the default, and anything with multi-dimensional arrays, which can be pretty broken because R doesn’t distinguish between scalars and vectors.

          [1] tidyverse vs. base R, Python, SQL: http://www.oilshell.org/blog/2018/11/30.html


          But probably the ONLY other book I know of with R from a programming language POV rather than a stats POV is:

          https://adv-r.hadley.nz/

          (And see the pictures of all the R books I have in my blog post). As a PL person, I wondered about the material in Advanced R for a long time, and then Hadley finally went and wrote a book on it… I think it was only in the R reference manual otherwise, although even that is incomplete.

          1. 2

            Two big gotchas are options(stringsAsFactors=FALSE), which should be the default,

            It has been, since last April and R 4.0.0.

            anything with multi-dimensional arrays, which can be pretty broken because R doesn’t distinguish between scalars and vectors

            Please be specific about “broken”. R has no “scalar”; everything is a vector, sometimes of length 1. And any vector of length N can have a dimension attribute: a matrix is simply a vector with a two-d one (plus a few support operations, as matrices are useful). You can create a 3-d, 4-d, … array the same way. But statisticians rarely use those, so there is little machinery for it. Still, it works in the base language, which the post was about.

            1. 2

              Oh yes I remember seeing that change, finally!

              With regard to matrices, the fact that R has no scalars is exactly the problem. That’s another way of saying it doesn’t distinguish between 0 and 1 dimensions.

              The consequence of that is that operations that increase the dimension result in confusion between N and N+1 dimensions in general! Several years ago I fixed like 10 bugs in a single piece of code related to that issue. It would work for dimension > 1, but fail for dimension == 1 because a 3x1 matrix was fundamentally confused with a vector of length 3 by the language and standard library.

              I dug up the thread on that here:

              https://old.reddit.com/r/oilshell/comments/a2atkg/what_is_a_data_frame_in_python_r_and_sql/eazlrl2/

              I can probably come up with a concrete example, but bottom line is that I use Python for linear algebra (which is rarely in any case), and R for data manipulation.


              edit: From reading over that thread again, it is the simplify= issue. This gotcha doesn’t appear in the original post, but I believe it will confuse anyone who has ever used Matlab, NumPy, or Julia. I can’t say for sure but I’m pretty sure none of them have that issue. But I don’t think it is limited to simplify as far as I remember – that’s just a symptom of the problem with the data model itself.

              1. 1

                R has no scalars is exactly the problem.

                It’s a feature. The language, written to analyse data, is naturally vectorised. You won’t find too many actual R users who dislike that.

    11. 2

      One “feature” of HN that I really dislike is the no-context submission of random Wikipedia links, so I’m flagging this as off-topic.

      On-topic would be either submitting the actual article itself, or a post or article discussing it.

      1. 1

        Fair point - thanks for the explanation.

        I thought about posting the actual article but then went for the Wikipedia article for two reasons: 1) it is easier to read on many devices and 2) the article summary on the Wikipedia page is quite good (the actual article is only about 2 pages long).

    12. 3

      Probably not a secret, but the book Practical Vim is a good read for “intermediate users”.

    13. 2

      Don’t really agree with the title. The core logic of a program can (optionally) interact with humans and/or other systems. With neither, one is left with a “run to completion” program that typically transforms its OS-supplied inputs into outputs.

      A “user interface” is associated with human interaction, whereas an API is associated with interaction with other systems. That human interaction is in terms of the user of the program, not the producer.

      That programmers are human might mean that they need to program against an API, but surely that doesn’t mean that the term “user interface” should be extended to it?

      1. 4

        That programmers are human might mean that they need to program against an API, but surely that doesn’t mean that the term “user interface” should be extended to it?

        I suppose you could argue if it’s the best term, but to me it seems that there are two layers: implementation and exposed API. The implementation is all about doing “stuff” and interacting with the computer, whereas the exposed API is about giving the user access to this in a reasonably convenient way.

        The API part very much seems like a “user interface” to me, with some good and bad ways of doing things, starting with everyone’s favourite bikeshed of naming things, but also deciding things like whether you want one function which accepts 5 parameters, or 5 functions which accept one each. It’s really hard to say something general about this, since it depends on what those parameters are, environment conventions/possibilities, etc., but those kinds of decisions don’t really matter to “the program” or “the computer”, yet can very much matter for the humans dealing with all of it.

        1. 2

          Another way to look at it is from the perspective of internal DSLs. Martin Fowler, for example, wrote that

          an internal DSL is nothing more than a quirky API (as the old Bell Labs saying goes, “Library design is language design”).

          When viewing the API as a language it is probably more obvious that it is made for humans.

      2. 3

        APIs are how programmers (indirectly) interact with a library. After all, most code is written by humans, and is therefore interacted with by humans. APIs are the way humans interact with foreign code, albeit seemingly indirectly, because the apparent interaction is automated. But the thing about automation is that somebody had to set it up, and that somebody needs to interact with both parts to do it. That somebody is usually a programmer, and they usually interact with APIs to combine the parts.

        1. 2

          Not really: API documentation is how programmers interact (directly) to discover a service.
          They use it to hardcode an agent that will interact (directly) with the service resources.

          Unless the API is using “in-band” documentation, there is no reason for a programmer to use it except to make some manual tests.

          1. 4

            How do you instantiate objects or call methods or functions in an API using only the documentation of the API and not the API itself? When I write software, I use the APIs of all of the libraries that I use. If those APIs are well designed, then I don’t need to consult the documentation very often because they use the same abstractions everywhere, consistent naming, and so on. None of this matters to the program as it runs, all of it matters to me as I write the program.

          2. 1

            I would argue that the act of “hardcoding” is the interaction with the service. But this very much relies on how you define interaction. Is manipulating dangerous substances with a manipulator arm that you control an interaction? What about a stick? What about gloves? I’d argue that as long as you control how something interacts with something else, you interact with it as well.

            1. 1

              You are not the program you’re running.

              1. 5

                I’m also not the stick that I’m using to poke a bear. But I think most people would say that I’m interacting with the bear - even if through a proxy. In API’s case, the program is my stick, and API is the bear.

                1. 1

                  You seem to like analogies :) But I’m not sure they’re a good way to convey the subtleties of abstract concepts and vocabulary that are already conflated and misunderstood by many.
                  If the program were your stick, then you’d have to be there at all times for it to function. Is that so? Are you the actor that makes your program run? No. You’re the programmer who used the documentation of the API to create an agent of that API.

                  In bear terms, you’re the guy who created the stick and the creature that’s holding it, and this creature with its stick is the one interacting with the bear, not you.

                  But I guess we’re arguing about meaning and definitions, and I get what you’re saying, but I’m still wondering why no one wants to differentiate those 2 very important parts of API interactions: documentation and usage.
                  It seems like everyone agrees it’s the same thing?

    14. 4

      I never had much respect for alarmism. We’ve had it, in regard to climate, for decades, and I only too well remember Al Gore warning us in 2008 of an ice-free Arctic by 2013, to give just one of many examples. Greta Thunberg is the next up-and-coming generation of climate alarmists, and given we do in fact have global warming, the human factor is yet to be assessed (consider that we are easing out of a small ice age that, just by chance, had its lowest point in the mid-1800s, when humans started measuring temperatures systematically).

      However, I still wholeheartedly support renewable energy and resource savings, because we live on a finite planet with finite resources. We should do everything we can to save resources and energy, but not fall into panic over it or embrace ridiculous measures that are not sustainable in the long term. Maybe that is needed to push the majority of people, but as a rational person I feel insulted by it.

      Measuring everything in “CO2 emissions” is valid, but for a different reason, in my opinion, than to mitigate the effects on the atmosphere: the carbon we emit comes from fossil fuels, which are one finite resource I think should not be “wasted”. Given that “CO2 emissions” directly correlate with carbon-based fuel consumption, the measure may be a bit mislabeled, but it is generally valid.

      In terms of web development: stop bloating your websites with too much CSS, JavaScript and excessive markup and reduce the transferred weight, but don’t panic over it or say that a website is “killing the planet”. This is an industry-wide problem and needs to be solved at scale. Until that changes, your website won’t make much of a difference compared to the few major players.

      1. 16

        the human factor is yet to be assessed

        I thought that in 2020 it was common knowledge that humans are without a doubt responsible for the global climate crisis. And temperatures are also measured by means other than direct ones, including geological ones.

        1. 3

          Indeed only a fool would say that we humans, who affect the planet in so many profound ways, have no influence on the climate. The question is: How much? An everlasting ethos, in my opinion, is resource-saving, but it needs to be balanced so we don’t throw away what we’ve achieved as a species.

          1. 11

            What is missing in this analysis by Carbon Brief? Most of the current natural phenomena actually contribute to global cooling and work in our favour. Humanity’s carbon footprint managed to beat even that.

            1. 4

              Climate is extremely complex, and one can’t really predict most things. I may bring out a strawman here, but how can we be so certain about centennial climate predictions (2°C-goal until 2100, for instance) when our sophisticated climate models can’t even accurately predict next week’s weather?

              But as I said in my first comment, my biggest problem is the alarmism, and I’m not even denying the human influence on world climate. So I’m actually on your side and demanding the same things, only with a different viewpoint.

              1. 9

                how can we be so certain about centennial climate predictions (2°C-goal until 2100, for instance) when our sophisticated climate models can’t even accurately predict next week’s weather?

                Because weather and climate are not the same. We can’t model turbulent flow in fluid systems, but we can predict when they change from laminar to turbulent on a piece of paper. We can’t model how chemical reactions actually work at an atomic level, but whether or not they should take place is another simple calculation. We can’t model daily changes in the stock market, but long-term finance trends are at least vaguely approachable.

              2. 16

                I’m not even denying the human influence on world climate.

                you said, “the human factor is yet to be assessed,” when it has been assessed again and again by many well-funded organizations. that’s denial, bucko

                1. 1

                  No, it’s not denial, and science is not a religion. Assessment means studying an effect, and I still don’t accept the foregone conclusion of 100% human influence. I believe it’s less than that, but not 0%, which is what would make me a denier.

                  1. 1

                    Assessment means studying an effect

                    so by “the human factor is yet to be assessed,” did you mean that the effect has not been studied? are you not denying that the human factor has been studied?

                    typically the category of “denial” doesn’t mean you think a claim has a 0% chance of being correct; most people are not 100% certain of anything and the concept of denial is broader than that in common speech. organizations of scientists studying climate change are very confident that it is largely human caused; if your confidence in that claim is somewhere nominally above 0%, it would still mean you think it is most likely untrue, and you would be denying it.

                    1. 1

                      An effect can be heavily studied and still be inconclusive. From what I’ve seen and read, the human factor is obviously there and not merely marginally above 0%, most probably well beyond that, but I wouldn’t zero out the other factors either. If that means denial to you, then we obviously have different definitions of the word.

                      1. 1

                        saying the human factor hasn’t been assessed casts doubt on it. now you are saying it is “obviously there” which is quite different.

                1. 5

                  The only thing I can do, as an individual, is to adapt, prepare and overcome. In my initial comment I already mentioned an example of a wrong alarmist prediction, and such predictions date back as far as the 60’s! Moving the goalposts and saying the Arctic ice will have disappeared in the next n years won’t bring me on board. Al Gore cited “irrefutable” science back then, and I remember being shown his movie in school, but his predictions all proved wrong.

                  Still, we are on the same side, kel: our footprints are unsustainably large, and I as an individual strive to reduce mine whenever I can. The truth is, though, that even Germany, which contributes only 2% of global carbon emissions, doesn’t play much of a role here; the big players need systemic change.

                  It’s funny, actually: this pretty much echoes the individual argument for slimming down your website. If Google, YouTube, Medium, etc. don’t move along, it doesn’t make much of a difference.

                  1. 11

                    The only thing I can do, as an individual, is to adapt, prepare and overcome.

                    It is both frustrating and liberating how little influence an individual has. However, the moment you decided to post a number of comments on this site, you contributed to the public opinion-forming process. I think this gives you much more influence than is immediately obvious. Discussions on sites like lobste.rs are read by many people, and every reader is potentially influenced by the opinions you or anyone else express here. And with great power comes great responsibility ;-) With that in mind, I am glad that other commenters challenged your initial comments about climate “alarmism” and prompted you to clarify them.

                  2. 7

                    germany is the most powerful state in the european pole of the tripolar world economic system. it has much to say about how other countries it is economically tied to are allowed and enabled to industrialize and maintain their standard of living. germans own plenty of carbon-emitting capital in countries that don’t have the same level of regulation, and they need to be made accountable for the effect they have on the world.

          2. 3

            so we don’t throw away what we’ve achieved as a species

            Do you truly think silly performative ecological politics are going to “throw away” your first-world niceties, or are you talking about how ecological collapse will likely trigger progressively more massive failures in supply chains as we inevitably blow through 1.5°C?

            1. 3

              There’s more to the world than economics, e.g. achievements in human rights and freedoms. But I don’t want to go too off-topic here (we are on Lobste.rs, after all).

              1. 5

                achievements in human rights and freedoms

                None of this will matter when people living in the most affected areas – areas that are already suffering from the climate crisis (droughts, lands becoming effectively uninhabitable, etc.), not to mention what will happen in the coming years – come to our first world demanding a place to live. And we will point our guns at them. As one of the commenters said: “Desperate people will do desperate things”. And all of this will happen over years, decades. Painstakingly.

                Unfortunately some people will write it off as plain alarmism while dismissing a well-proven scientific position. And the position is: I want to have good news, but it looks really fucking bad. I’d love to ignore all those facts just to live a happier life, but I find it hard. It saddens me deeply that behind that facade of freethinking, you have pretty much made up your mind for good. I do not mean to insult you; it’s just the way you speak in all your comments that makes me think that way. I hope I am wrong. Eh, shame.

                One could consider the famous Newsroom piece about climate change alarmist, but unfortunately it seems to be very much on point.

      2. 9

        The planet will be fine. It’s the people who are fucked.

        George Carlin

        I almost want to agree with you, except that underestimating the impact of climate change has already cost society massively, and the bill is still climbing.

        Firstly, if you believe that our current rate of temperature change is historically typical, there’s an xkcd comic for you.

        I will go as far as to say that those who consider climate change an existential threat are perhaps looking at it the wrong way. But I’m not about to undermine their cause by saying so, because people tend toward apathy about long-term threats, and the cost of underestimating climate change is far greater than the risk of overestimating it. Climate change has already begun to exact direct costs, both monetary and humanitarian.

        As an example of monetary cost, in Gore’s documentary he presents a demonstration of rising sea levels around Manhattan Island and makes a point that the September 11 memorial site will be below sea level.

        This might be true, but below sea level does not mean underwater. The flooding projection makes the assumption that humans are either going to do nothing about it and drown or are going to pack up New York and leave. I think neither scenario is likely.

        What will happen is that the rising sea level will be mitigated. The city will build huge-scale water-control mechanisms (such as levees). The cost of living on the island will rise sharply. Once in a while, this system will fail, temporarily flooding the homes of millions of people. They will bail it out and go on living.

        Not so bad, right? The catch is that the cost of this, in purely financial terms, is projected to vastly outweigh the cost of reducing pollution now. And we don’t need to hit discrete targets to see a benefit – every gram of CO2 that we don’t emit today will reduce the amount of water in a nearly-certain future flooding event.

        This is beside the humanitarian cost.

        Climate change does not come without opportunities. The farming season in Canada and Russia will likely lengthen, leading to more food produced in those countries. Cool; but meanwhile, in other places, the drought season will lengthen. People won’t be magically transported from one place to another; there are logistical, political, and sociological obstacles. People stuck in those regions will become increasingly desperate, and desperate people do desperate things. With today’s weapons technology, that’s the kind of situation that really could lead to humanity’s extinction.

        So please be careful with the point-of-view that you present. You might not be wrong, but contributing to a culture that underestimates the oncoming danger is exactly what got us here in the first place.

        1. 5

          I’m not denying the danger or playing it down, and we can already see the effects of global warming. We humans must adapt to it, or we will perish. It would not be far-fetched to assume that this warming might even lead to more famines that kill millions of people.

          The problem I see is the narrow focus on CO2, when resource usage takes many forms. Many people take pleasure in buying EVs while charging them with coal power and not really reducing their footprint much (a new smartphone every year, lots of technological turnover, lots of flights, etc.). I’m sure half of the people accusing me of “playing it down” have a much larger “CO2 footprint” (I’d rather call it a resource footprint) than I do.

      3. 9

        The climate has not changed like this before in human timescales. https://xkcd.com/1732/

        Today, denying human-induced climate change requires more than disagreeing with the scientific consensus on future predictions, it requires denying current events. The climate crisis is already here, and it already has a death toll.

        The good news is that you don’t need to update your understanding and stop swallowing narratives produced by fossil fuel corporations, although we could certainly use all the help we can get. You just need to get out of the way of people like Greta who are taking meaningful action to avert the climate crisis on a systemic level. If you live in the US, Sunrise Movement are extremely effective young organizers who deserve your respect. If all you have to offer is sniping from the sidelines, maybe you should rethink your contributions. Have you actually done anything to make the world a better place, or do you just complain about people who do the work?

        1. 5

          Given the many factors influencing climate itself and the models built to predict it, studies diverge greatly from one another. Big fossil fuel corporations cite the least alarmist ones, and environmental extremists cite the most alarmist ones. As always, the truth lies in the middle.

          It’s a great shame that people die from this, given that it’s a negative effect of the fact that the entire industrial age (including urbanization and expansion) was built on the assumptions of the small ice age that lasted until roughly the 1850s–1900s. The increasingly warm global temperature takes its toll.

          My favourite example is the Norway spruce, the main tree for commercial wood production in Germany. It originally comes from the mountains, but during industrialization it was increasingly planted in the lowlands, which worked because the weather was still relatively cool. The few degrees of warming have left the trees massively weakened, and our German forests, which consist substantially of spruce monocultures, are consequently infested with numerous diseases and pests.

          Over the years I’ve read so many alarmist reports by big scientific players that proved to be completely false, which is okay: scientists can err, especially with something as multivariate as climate. My view is that we should not just repeat “CO2 emissions” as a mantra, but adapt to the changing climate (diversify forests, etc.) instead of turning this into yet another speculator’s paradise with CO2 certificates that do nothing but shift wealth around.

          The real damning truth is the following: I live in Germany, and if one flipped a switch that wiped Germany and all its inhabitants from the face of the earth, global CO2 emissions would drop by only 2%. As always, it’s the big players (USA, China, etc.) that need to change systemically.

          Have you actually done anything to make the world a better place, or do you just complain about people who do the work?

          Not to sound too harsh, but I basically don’t matter, any more than the individual Chinese or US citizen matters. Electric vehicles won’t make a difference, because the CO2 emissions are simply offloaded to the developing countries where the battery components are mined and processed. Charging an EV in Germany means coal power, no matter how much “eco” electricity you buy, as it’s all just a big shuffling on the energy market.

          I do my part by not buying a new phone or computer every year, by driving a used car (a Diesel, which is still more environmentally friendly than buying a new car that has to be produced in the first place), by buying regional products, and so on. These things, done as an individual, make much more of a difference than buying a Tesla and carrying on with the outsized lifestyle most people have gotten used to.

          1. 11

            As always, the truth lies in the middle.

            I want to call out this both-sides-ism. Basic shifting of the Overton Window can cause you to believe insane things if you assume that the truth always lies in the middle. Reasonable positions can seem extreme if you live in a society that, for example, has been shaped by fossil fuel billionaires for decades.

            It’s also wrong to ignore worst-case scenarios.

            There has been a great deal of discussion around the IPCC reports, which are very conservative (by which I mean cautious about only making predictions and proposals for which they have a great deal of evidence). Unlikely but catastrophic possibilities, such as the terrifying world without clouds scenario, also deserve attention. Beyond that are the “unknown unknowns”, the disaster scenarios that our scientists are not clever enough (or do not have the data) to anticipate.

            Global nuclear war or dinosaur killer asteroid impacts may seem unlikely today, but if we do not prepare for and take steps to avoid such cataclysms, someday we will get a very bad dice roll and reap the consequences.

            In other words, the obvious, predictable results of global heating on our current trajectory are bad enough, and I do not consider discussing them to be alarmism; the edge cases that might reasonably be seen as alarmism are, I feel, underappreciated rather than overpublicized, as you seem to believe.

            Put differently: the truth, rather than lying in the middle, might be significantly worse than any messaging from the mainstream climate movement suggests.

          2. 6

            I’ll just say that personal consumption habits are not what I’m talking about, although I can see why you would bring them up, given the article we are commenting on is about changing personal website design.

            Sustainability, and justice for those who suffer most in the climate crisis, will require changing how our society functions. It will require accounting for the true costs of our actions, and I’m not convinced that capitalism as we know it will ever hold corporations accountable for their negative externalities. It will require political change, on a local, national and global level. It will require grassroots direct action from the people. You as an individual can do little, but collectively I assure you we can change the world, for the better instead of for the worse.

            1. 6

              That the collective matters more than the individual is of course a truism. The real costs of a product are often hard to see. One good example is sustainably produced meat, which costs six times more than the “normal” meat you can buy at the supermarket. Reducing meat consumption to once a week (instead of almost every day, which is insane) would greatly reduce an individual’s footprint, yet I don’t hear Greenpeace talking about reducing meat intake, even though it makes up 28% of global greenhouse gas emissions.

              Instead, we are told to “change society” and accept new legislation that fundamentally changes not only our economies, which admittedly deserve reform in many places, but also individual freedoms, for questionable benefit to anyone other than certain profiteers in certain sectors.

              So I hope I didn’t come across as someone denying the effects of climate change. What I don’t like is the alarmism, which has often been debunked over the last decades, only to be used to sell extreme political measures. A much more effective approach, I think, would be to urge people to reduce their resource footprint and enable them to make the right choices.

              To give an example: maybe the EU could stop subsidizing mass meat production if it really cared about this topic at all. This kind of thing really undermines the credibility of the entire climate “movement”.

      4. 2

        (consider we are easing out of a small ice-age that just, out of chance, had its lowest point in the mid 1800’s when humans started measuring temperatures systematically).

        Have any sources so I can read more about this? First I’ve heard of this.

        1. 4

          Sure! There is a great paper called “Using Patterns of Recurring Climate Cycles to Predict Future Climate Changes” by Easterbrook et al. (published in Evidence-Based Climate Science (Second Edition), 2016). It is sadly paywalled and I can’t share it in full here, but it contains a great figure showing temperature readings from tree rings in China.

          Between 800 and 1200 we had the global Medieval Warm Period, which allowed people to grow wine grapes in England, for instance, and is the reason Greenland is called “green” land (it wasn’t covered in ice when the Vikings discovered it around 900–1000). Temperatures were normal between 1200 and 1600, followed by a “Little Ice Age” between 1600 and 1900. In general, one can indeed see that global temperatures are rising above the average of the last 2000 years, but it’s nothing unusual.

          To give one more example: glaciers receding in Norway due to the currently observable warming are revealing tree logs and trading paths from roughly the Roman age, used between 300 and 1500. If you look at the aforementioned figure, this pretty much coincides with the extremely warm period beginning around 300. Even though temperatures dipped around 700, they never really entered a cold range that would have let the glacier “recover”, which explains why the paths were used until 1500, when the next cold period (the Little Ice Age) started.

          I hope this was helpful to you!

          1. 6

            It sounds like you’re arguing that the current global temperature rise is not due to humans, or is just a natural temperature cycle coming to an end, which is extremely wrong. The slight cooling period you’re talking about did happen, but as of now both the speed and projected magnitude of the current temperature changes are unprecedented in human history.

            We can argue all day about specifically how bad things are going to get given the temperature rise, and how much someone’s stupid little personal website is going to contribute to it, but the fact that the temperature rise is man-made and is changing faster than any global temperature change ever in human history is supported by enough broad scientific consensus to be pretty much indisputable.

          2. 4

            This is a placeholder reply so I don’t forget (immediately quite busy), but there is no evidence the pre-industrial era “little ice age” was a global phenomenon.

            1. 4

              That could very well be! What I cited were results from Europe and Asia, and I would not be surprised if it turned out differently in other parts of the world.

    15. 7

      Some of the problems in Firefox can probably be alleviated by using one of the user.js projects on GitHub.

      1. 9

        Definitely, but I think one problem that the author is correct in pointing out is that Mozilla has made some very questionable decisions in order to find new sources of funding.

        IMO this is a problem that the open-source community can’t and shouldn’t ignore, but I also think we should all be very careful not to bite the hand that feeds us by attacking the only non-Google entity even attempting to provide a mass-market web browser.

        So yeah it’s a tough situation.

    16. 8

      Minor nitpick:

      Furthermore, there is literally no way to tell whether your program will ever actually terminate without actually executing it.

      That is wrong. It is true that it is not possible to write a program that determines, for any other program, whether it terminates. But for a specific program, and in particular for the relatively simple algorithms discussed in the post, it is often possible, and not even hard, to prove termination.
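
      To give a sketch of what such a proof looks like (a standard textbook example, not code from the post): Euclid’s algorithm terminates because its second argument strictly decreases on every recursive call while staying non-negative, so the recursion must bottom out.

      ```scala
      // Assuming non-negative inputs: for b > 0, a % b lies in [0, b),
      // so the second argument strictly decreases on every recursive
      // call and is bounded below by 0; the recursion must therefore
      // terminate. No execution is needed to establish this.
      @annotation.tailrec
      def gcd(a: Int, b: Int): Int =
        if (b == 0) a else gcd(b, a % b)
      ```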

      1. 3

        Some domain-specific languages exploit that: dtrace’s language is structured so that (without some heinous hacks) it’s impossible for a script not to terminate, and given that code written in that language is injected into other code, that’s a pretty useful property.

    17. 2

      I agree that learning the idioms of a new programming language is important. That’s one thing I liked about Dan Grossman’s MOOC on Coursera. It uses Standard ML, Racket, and Ruby to present different programming paradigms. For example, the segment about Ruby features a very strict interpretation (at least for my taste) of OOP, which was great for contrasting it to the FP approach in ML.

    18. 3

      Wasn’t this domain blocked for being part of a spam ring?

      1. 4

        Submitters were, domain was not.

      2. 1

        I realize that submissions from this domain were kind of spammy in the past. I am not affiliated with the site.

        In contrast to the earlier submissions, I found this article interesting in terms of language design and evolution, and more likely to spark discussion.

    19. 2

      Scala is a great language, but this piece focuses more on usage metrics and history than on the Scala language itself.

      I’ll tell you in one sentence a bigger draw to Scala than this whole article: using the parallel collections library together with functional programming to run concurrent operations on large collections. Parallel programming becomes trivial. There are concise examples on the overview page of the standard library documentation.

      https://docs.scala-lang.org/overviews/parallel-collections/overview.html
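
      For a rough illustration of the idea (a minimal sketch, not taken from that page; note that since Scala 2.13 the parallel collections live in the separate scala-parallel-collections module, hence the import):

      ```scala
      import scala.collection.parallel.CollectionConverters._

      // .par gives a parallel view of the collection; map and sum are
      // then split across a thread pool with no explicit concurrency code.
      val xs = (1 to 1000000).toVector
      val sumOfSquares = xs.par.map(x => x.toLong * x).sum
      ```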

      1. 1

        IME parallel collections are dangerous. In an interactive application, any operation expensive enough for them to be appealing should probably also be cancellable. They’re actually a good example of one of Scala’s ills: a random master’s thesis bolted onto the language. They don’t mix well with other Scala defaults, like the pervasive use of linked lists, which must be copied into a vector by .par.

        I’m sure they have their uses, but I think that they belong in a library rather than the core language.

        1. 2

          I’m sure they have their uses, but I think that they belong in a library rather than the core language.

          Seems like the parallel collections have indeed been factored out into a library.

    20. 1

      Nice post! Where/how do you store the passphrase for automated backups?

      1. 2

        Thanks! The systemd unit that does the backups calls pass to get the passphrase; pass in turn relies on gpg-agent so that it doesn’t have to ask me to unlock the password store. This works for me because I run backups during the day and my email client keeps the gpg-agent awake.
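
        Roughly, such a unit can look like this (a hypothetical sketch; the unit name, script path and pass entry are made up for illustration, not the exact setup):

        ```ini
        [Unit]
        Description=Daily encrypted backup

        [Service]
        Type=oneshot
        # pass prints the passphrase; this only works non-interactively if
        # gpg-agent already holds the unlocked key for the password store.
        ExecStart=/bin/sh -c 'BACKUP_PASSPHRASE="$(pass show backup/repo)" exec /usr/local/bin/run-backup'
        ```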

        1. 1

          Aren’t you stuck in a chicken-and-egg problem? You encrypt your backups using a password saved in a store. If you lose your whole $HOME, how do you recover? You need the password, which itself needs a gpg key, which is backed up, but encrypted, right?

          Or maybe you back up your gpg keys and password store by other means?

          1. 2

            Not certain if this is what they meant, but I assume the idea is that they both memorize the passphrase (in case recovery is needed), and also don’t want to keep typing it in for automated daily backups.

            1. 2

              Yes, except 1password has it memorized for me.