1. 33

  2. 8

    That article was pretty interesting, and there was one idea in it that I hadn’t previously encountered. A given software implementation balances functionality against the available resources (time, storage, etc.), but both user expectations and hardware resources change over time. The article mentions two different responses to such changes:

    • Hardware Supports Software: software is always complex and difficult, so if we can add complexity to the platform (OS, compiler, runtime) in a way that makes applications simpler, more reliable, and more maintainable at the same level of functionality, we should. If that means spending extra hardware resources on computational overhead like runtime checks and managed code (a toy sketch of such a check follows this list), that’s a trade worth making. The article claims this was Microsoft’s mindset at the beginning of Vista’s development.
    • Software Supports Hardware: people buy hardware to fulfil some particular function; some software is necessary to make it work, but too much software just slows everything down. Therefore, software should be restricted to what’s necessary to provide functionality, and extra hardware resources should mean everything just runs better. The article claims this was the mindset behind Apple’s iOS.
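
    As a toy illustration of that trade, here’s a sketch in C (invented for this comment, not taken from the article): a bounds-checked accessor pays a compare-and-branch on every call that a raw access doesn’t, and the “Hardware Supports Software” bet is that the hardware can absorb that overhead in exchange for failing loudly instead of corrupting memory.

    ```c
    #include <stdio.h>
    #include <stdlib.h>

    static int buf[20];

    /* Unchecked: as fast as possible; a bad index silently reads garbage
     * or corrupts memory. */
    static int get_raw(size_t i) { return buf[i]; }

    /* Checked: pays a compare-and-branch per access, fails loudly instead. */
    static int get_checked(size_t i) {
        if (i >= sizeof(buf) / sizeof(buf[0])) {
            fprintf(stderr, "index %zu out of range\n", i);
            abort();
        }
        return buf[i];
    }

    int main(void) {
        printf("%d\n", get_checked(5));   /* in range: fine */
        printf("%d\n", get_checked(99));  /* aborts with a clear message */
        (void) get_raw;                   /* unchecked variant, for contrast */
        return 0;
    }
    ```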

    Both viewpoints have merit, but they’re clearly incompatible. Now that they’ve been pointed out, I suspect a lot of the technical discussions I’ve participated in over the years were at least partly based on “Hardware Supports Software” people looking at a project built on “Software Supports Hardware” ideals and boggling, or vice versa. For example, I reckon the systemd and GNOME teams are squarely in the Hardware Supports Software camp, while Software Supports Hardware presumably hosts the suckless guys and anybody who owns an Arduino.

    1. 5

      GNOME, sure, but systemd is actually pretty efficient. I had to stress-test it a while back, and it really screams. Say what you will about its features and design choices, but it’s not slow.

      1. 2

        I regularly run into actual corruption/freezing issues with systemd-journald. Can I assume that you only stress-tested the init+supervisor portion of systemd, and none of the many other ancillary components (journald, consoled, logind, networkd, timedated/timesyncd, etc.)?

        1. 2

          I tested systemd and journald. A lot of the other auxiliary daemons didn’t exist in 2014 when I did this.

        2. 2

          I’m curious: what does it mean for an init system to scream? Is it just that it can run stuff in parallel? It seems like it’s bounded by how fast the kernel can start new processes, right?

          1. 3

            Yes, it is. But if your init is a bunch of shell scripts that fork 500 subprocesses to start 20 services, your init cost is hugely amplified. Even so, that’s not that big of a deal just for system startup; it probably amounts to less than a second. The parallelism matters way more.
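
            To make that amplification concrete, here’s a rough, self-contained timing sketch in C (illustrative only; /bin/true stands in for a trivial helper): it forks and execs a subprocess serially N times, the way a shell-script init pays for every helper it spawns. Try it with 20 and with 500.

            ```c
            #include <stdio.h>
            #include <stdlib.h>
            #include <sys/wait.h>
            #include <time.h>
            #include <unistd.h>

            int main(int argc, char **argv) {
                int n = argc > 1 ? atoi(argv[1]) : 500;  /* subprocess count */
                struct timespec t0, t1;

                clock_gettime(CLOCK_MONOTONIC, &t0);
                for (int i = 0; i < n; i++) {
                    pid_t pid = fork();
                    if (pid == 0) {
                        /* child: exec a do-nothing program, standing in for
                         * one helper invocation in an init script */
                        execl("/bin/true", "true", (char *) NULL);
                        _exit(127);            /* exec failed */
                    }
                    waitpid(pid, NULL, 0);     /* serial, like a shell script */
                }
                clock_gettime(CLOCK_MONOTONIC, &t1);

                double secs = (t1.tv_sec - t0.tv_sec)
                            + (t1.tv_nsec - t0.tv_nsec) / 1e9;
                printf("%d fork+exec cycles in %.3fs (%.2f ms each)\n",
                       n, secs, 1000 * secs / n);
                return 0;
            }
            ```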

            I cared because systemd isn’t just an init system, it’s a general-purpose service/cgroup manager. So if you’re using systemd to manage your cgroups as part of a container system, the way Docker does, then you want high throughput and low latency on service creation, management, and so on. Testing a heavy container-based workload (not Docker; Docker was way too slow), I determined systemd is hella fast, at least for the task it was performing. As far as C programs go it’s pretty slow: they put a lot of effort into safe programming, and especially into maintaining stable memory usage.
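
            For anyone curious what “systemd as a cgroup manager” looks like from the client side, here’s a hedged sketch using libsystemd’s public sd-bus API (unit name invented, error handling mostly trimmed): it asks PID 1 over D-Bus to move the calling process into a new transient scope unit via the Manager interface’s StartTransientUnit method, which is essentially what systemd-run --scope does. A container manager issuing thousands of these calls is exactly where creation throughput and latency start to matter.

            ```c
            /* Build with: cc scope.c $(pkg-config --cflags --libs libsystemd) */
            #include <stdint.h>
            #include <stdio.h>
            #include <string.h>
            #include <unistd.h>
            #include <systemd/sd-bus.h>

            int main(void) {
                sd_bus *bus = NULL;
                sd_bus_error err = SD_BUS_ERROR_NULL;
                sd_bus_message *m = NULL, *reply = NULL;
                int r;

                if (sd_bus_open_system(&bus) < 0)
                    return 1;

                /* StartTransientUnit(s name, s mode,
                 *                    a(sv) props, a(sa(sv)) aux) */
                r = sd_bus_message_new_method_call(bus, &m,
                        "org.freedesktop.systemd1", "/org/freedesktop/systemd1",
                        "org.freedesktop.systemd1.Manager", "StartTransientUnit");
                if (r < 0)
                    return 1;

                sd_bus_message_append(m, "ss", "demo-container.scope", "fail");
                /* one property: put our own PID into the new scope's cgroup */
                sd_bus_message_open_container(m, 'a', "(sv)");
                sd_bus_message_append(m, "(sv)", "PIDs", "au", 1,
                                      (uint32_t) getpid());
                sd_bus_message_close_container(m);
                sd_bus_message_append(m, "a(sa(sv))", 0);  /* no aux units */

                r = sd_bus_call(bus, m, 0, &err, &reply);
                if (r < 0)
                    fprintf(stderr, "StartTransientUnit: %s\n",
                            err.message ? err.message : strerror(-r));

                sd_bus_error_free(&err);
                sd_bus_message_unref(m);
                sd_bus_message_unref(reply);
                sd_bus_unref(bus);
                return r < 0;
            }
            ```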