That article was pretty interesting, and there was one idea in it that I hadn’t previously encountered. A given software implementation balances functionality against the resources (time, storage, etc.) available, but both user expectations and hardware resources change over time. The article mentions two different responses to such changes, which it calls “Hardware Supports Software” and “Software Supports Hardware.”
Both viewpoints have merit, but they’re clearly incompatible. Now that they’ve been pointed out, I suspect a lot of the technical discussions I’ve participated in over the years were at least partly based on “Hardware Supports Software” people looking at a project based on “Software Supports Hardware” ideals and boggling, or vice versa. For example, I reckon the systemd and GNOME teams are squarely in the Hardware Supports Software camp, while the Software Supports Hardware camp presumably hosts the suckless guys, and anybody who owns an Arduino.
GNOME sure, but systemd is actually pretty efficient. I had to stress test it a while back, and it really screams. Say what you will about its features and design choices, but it’s not slow.
I regularly run into actual corruption/freezing issues with systemd-journald. Can I assume that you only stress tested the init+supervisor portion of systemd, and none of the many other ancillary components (journald, consoled, logind, networkd, timedated/timesyncd, etc)?
I tested systemd and journald. A lot of the other auxiliary daemons didn’t exist in 2014 when I did this.
I’m curious: what does it mean for an init system to scream? Is it just that it can run stuff in parallel? It seems like it’s bounded by how fast the kernel can start new processes, right?
Yes, it is. But if your init is a bunch of shell scripts that fork 500 subprocesses to start 20 services, your init cost is hugely amplified. Even so, that’s not that big of a deal just for system startup; it probably amounts to less than a second. The parallelism matters way more.
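That amplification is easy to demonstrate. Here’s a rough sketch (mine, not from the thread; assumes a Linux box with GNU `date`, which supports nanosecond `%N` timestamps) that times N background no-op forks, the way a shell-script init multiplies process creation:

```shell
# time_forks N: fork N no-op background jobs and print elapsed ms.
# Illustrative only -- a real init script forks helpers like grep,
# sed, and awk for each service, which is where the count balloons.
time_forks() {
    n=$1
    start=$(date +%s%N)
    i=0
    while [ "$i" -lt "$n" ]; do
        /bin/true &             # one fork+exec per helper
        i=$((i + 1))
    done
    wait                        # reap all the background jobs
    echo $(( ($(date +%s%N) - start) / 1000000 ))   # elapsed ms
}

echo "20 forks: $(time_forks 20)ms, 500 forks: $(time_forks 500)ms"
```

On most machines both numbers are small in absolute terms, which is the point above: the fork tax is real but sub-second, so parallel startup ordering dominates.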
I cared because systemd isn’t just an init system, it’s a general-purpose service / cgroup manager. So if you’re using systemd to manage your cgroups as part of a container system, like docker does, then you want high throughput and low latency on service creation, management, etc. Testing a heavy container-based workload (not docker, docker was way too slow) I determined systemd is hella fast. At least, for the task it was performing. As far as C programs go it’s pretty slow. They put a lot of effort into safe programming and especially into maintaining stable memory usage.
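For anyone unfamiliar with that use of systemd: managing cgroups this way means putting workloads into units and attaching resource-control settings to them. A minimal sketch of what that looks like (the unit name and limit values here are made up for illustration; the directives themselves are real systemd resource-control settings):

```
# demo-containers.slice -- hypothetical slice grouping container workloads
[Slice]
MemoryMax=512M
CPUQuota=50%
TasksMax=256
```

Transient units can also be created on the fly with `systemd-run`, which is closer to what a container manager does programmatically over D-Bus.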