It’s useful to read this after his previous entry, about X, which substantially sets up the comparison he tries to make here.
At work, I often hear both sides of the story: some coworkers despise everything systemd is or tries to be, while others are convinced it is indeed a huge improvement over what Linux used to have. I myself still manage not to care very much about it. My system boots just fine and works the way I expect it to, so I am happy with it.
This could be your experience even if e.g. 5% of people found that systemd made their system unbootable after an OS update. At the same time, you’d understand why that 5% would hate it so much.
More importantly, it will be the last init system we will get, considering how deeply and how widely it has integrated itself into various systems.
A new init system would have to emulate all of it before being able to innovate on anything; at that point it would effectively already be a systemd clone.
What are embedded devices like phones/tablets/car computers using? I’m pretty sure they’re not adopting systemd since they have their own parallel universe of distros. Are they using plain init and shell scripts?
With Android and almost all other phone OSes except iOS, it seems like Linux is still getting bigger on devices. (Although it’s probably as big or bigger on the server.)
I think systemd is solving more of a desktop problem… where you are unplugging devices and where you have diverse hardware. On headless servers you might not need many of the features that systemd provides.
Ironically, desktop is where Linux is used the least, but it appears to be driving a lot of the design decisions. Upstart was also designed for the desktop as far as I can tell.
Although I guess both servers and devices want fast startup, and systemd apparently helps there.
My hope is that the embedded Linux architecture will take over the desktop and perhaps get rid of systemd along with it. There have been some experiments in using Android for desktops although I haven’t followed them closely.
I think systemd is solving more of a desktop problem
I see this brought up a lot, but at least in my hobbyist server admin opinion systemd has the most advantages on the server. It is so so so much easier to write a service configuration that starts correctly, stops correctly, and properly waits for the right things to go up/down in both cases. No more 100s of lines of bash scripts, just a few lines of config and most services are going to “just work”.
I rarely have to write service definitions on the desktop.
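To make that concrete, here is a minimal sketch of a systemd service unit (the service and binary names are made up for illustration):

```ini
# /etc/systemd/system/myapp.service -- hypothetical example
[Unit]
Description=Example application
# Wait for the network before starting
After=network-online.target
Wants=network-online.target

[Service]
ExecStart=/usr/local/bin/myapp --config /etc/myapp.conf
# Supervise the process and restart it if it crashes
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

systemd handles the PID tracking, dependency ordering, and clean shutdown that an init.d script would have to reimplement by hand.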
Yeah that’s true. init.d scripts are too often copied and pasted and buggy. There needs to be a better way to share this logic. Config files are one way, but I think a better shell language is another way, with different tradeoffs.
SailfishOS (Jolla) is using Systemd IIRC.
It does, yes.
I just came up with a list of dislikes in another social medium. Might as well post it here too!
I put on my “OpenBSD Developer” hat in the off chance that my “get off my lawn” wasn’t coming through via the above text. Maybe I can get a “Curmudgeon with a Cane and Lawn” hat?
I’ve seen a lot of people complain about binary logs. I can understand why one may dislike them, if one’s used to grep. But in this day and age, I’d argue that if you are using grep to search your logs, you are doing something wrong (and that’s with my former syslog-ng maintainer hat on, after working for years in the logging field). Plain text logs are not what most big places use. For temporary storage, until they get shipped to a central place, maybe, but most people don’t work with them directly.
BalaBit (makers of syslog-ng) have been pushing their binary format, which is used in their appliance, and in syslog-ng PE too. Splunk and Elasticsearch + Kibana don’t use text files, either. When I was at Cloudera as a support guy, we weren’t using grep, either. We had the logs in Hadoop, and used internal tools to search them, mine them, and so on. This is what I see in most places that deal with a lot of logs: they see the text files as a source to collect, and rarely, if ever, do anything with them other than ship them somewhere else, to be stored in a database they can query more efficiently.
I have previously written a post on my blog about this exact topic, in a bit more detail, along with a followup (linked from the top of that post).
(But since binary logs are a bit off-topic, feel free to hit me up on twitter, or e-mail or whatever, to avoid going even more off-topic here. And apologies in advance for derailing the discussion a bit, but years of working with logs made me a bit sensitive about the topic. :P)
I don’t feel like it’s off topic too much.
I’d argue that if you are using grep to search your logs, you are doing something wrong
I agree, definitely a better way to do things at that scale, but not everyone needs a log shipping destination for their one off VPS that is being used for IRC and maybe a blog.
In that case, the old tools aren’t in any way superior, except for familiarity. But the good stuff the journal brings is useful enough to justify learning to query logs with journalctl, in my opinion.
Yep, and I stated that as well; however, in addition to familiarity, there is also portability. The technique of looking into logs via grep (or whatever unixish tool) applies to many more OS types than just Linux running systemd.
Yeah, if you have to maintain N different OSes, grep may make sense. Though there will be enough subtle differences that grep vs journalctl would be the least of your worries. Back in Uni, when I was part of the CS department’s sysadmin team, we had HPUX, Ultrix, Debian, RedHat, AIX, IRIX, Solaris, FreeBSD, and a bunch of others I forgot. Grepping wasn’t fun, nor consistent across these OSes. Slightly different flags, logs being elsewhere, etc. In this case, collecting logs to a central place would have made much more sense.
If you maintain Linux boxes… then portability or applicability to another OS does not matter.
grep makes sense for many other reasons – needing to create ad hoc pattern matches, as part of a text manipulation pipeline with, e.g., sed or cut … the list goes on. Until the richness of the existing text manipulation tools in Unix is matched by similar binary log commands, there’s little sense in moving to binary.
And, I’ll mention, ‘most big places’ do continue to use text logs.
You can combine a binary log query with other tools, just like you combine it with grep. You replace cat logfile | grep | whatever with logquery ... | whatever, or even logquery ... | grep | whatever. There’s nothing stopping you doing so. If the log query tool supports regexps, it will always be more powerful than grep, because it also understands structure (at least to some extent, and if you process your logs - which you should), and can do optimizations grep can not. They already match grep’s abilities, and surpass it in many ways. They also compose with your standard unix tools.
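To sketch what that composition looks like with journald specifically (the service name is just an example), the query tool slots into a pipeline like any other command:

```shell
# Structured query first, ordinary unix tools on the result
journalctl -u nginx.service --since today -p err | grep -c 'timeout'

# Or export the structure itself and process it downstream
journalctl -u nginx.service -o json | head -n 5
```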
And yeah, grep is useful elsewhere too. It is worth knowing. But querying logs in a much more powerful way is also worth learning, and accepting as a valid practice.
could you provide an example of ‘a much more powerful way’ in practice? I’ve only been doing log diving for 27 years so maybe I’m missing something.
A tad late to respond, as this thread fell off my radar, but, a few examples where queries are superior to grep & friends (all of these assume that you have all your logs at a central place, otherwise it would be a horror to grep them when the data you want spans multiple machines):
Whenever it comes to dates. It is much easier to write date="[2013-12-14 TO 2015-04-11]" than to figure out which files you need to grep, and filter down to dates between these only.
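With journald, for instance, a comparable date-bounded query is a single invocation (the unit name is hypothetical), with no need to work out which rotated files cover the range:

```shell
journalctl --since "2013-12-14" --until "2015-04-11" -u postfix.service
```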
Something I do often, is follow the trail of my email: from Gnus/Emacs logs, through msmtp on localhost, postfix on my home gateway, and finally my remote VPS. If I want to see the trail of e-mail, I can just do a query with something like type=email message-id=X.
Both of these can be done with grep & friends, but they require more work on my part. I have a computer to do the hard parts. I can just shovel everything into a database, index it in various ways, and then query whatever I want. I can even adjust the indexes with reasonable ease. With text-based logs, that would be a much larger task.
Plain text logs is not what most big places use.
I’m not a big place. And the big places I’ve been at all used custom logging solutions with staged storage and replication, which systemd does not provide.
I was talking about non-textual log storage, be it systemd’s or something else’s. I’m not arguing about how useful (or useless) systemd’s journal is - I, for example, use it just like text logs: I ship the contents somewhere else to process and store, something completely independent of systemd. The journal is nice because it’s much easier to process than most text logs, but that’s a small thing.
What I’m saying is that binary storage is useful, and in many ways superior to text, as far as logs are concerned. A lot of custom, specialised solutions (Splunk, BalaBit’s SSB, Kibana, etc.) use a non-textual database. And it’s a damn good thing they do.
…but of course syslog-ng isn’t really comparable because the protocol is well-understood and externally stable, and I can drop in rsyslog or easily implement my own if my needs aren’t those of a multi-million dollar company. The same is emphatically not true of journald.
syslog-ng’s binary format is definitely not standard, and not compatible with anything else. It’s not even open source, unlike the journal (for which writing a reader took about an afternoon from scratch, just to prove that it is possible).
By “the protocol” I of course mean syslog here, not whatever storage format a given tool uses. Unless I have somehow completely misunderstood how clients communicate with syslog-ng?
How they communicate doesn’t matter. You can forward journald-collected logs over syslog too (with rsyslog, or syslog-ng, or whatever else). It’s the storage format that most people are upset about.
“At the same time, X has never been elegant. In this it partakes of the spirit of (V7) Unix; not the side that gave us the clean Unix design ideas, but the side that gave us a kernel where active processes were just put in a fixed-size array (that was walked with for loops):
struct proc proc[NPROC];"
That was a good idea before the number of processes became large. For an embedded Linux system it would still be a good idea.
The beast that was rammed into mainstream distributions by political clout.
It’s popular to say this, but I don’t think the evidence supports it for at least some large mainstream distributions. The Debian technical committee discussion of this is public and contains plenty of people clearly attempting to judge things on a technical basis (and Debian as a whole accepted the result). Ubuntu moved to systemd because Debian did, and going their own way by continuing with Upstart wasn’t tenable. As far as Fedora, Red Hat Enterprise Linux, and Red Hat go, the deliberations are less public, but there have been at least serious claims that systemd did not exactly get an easy ride (and Fedora deferred the shift to systemd at least once at the last moment, which is not quite the behavior of people being forced by politics).
To the extent that distributions feel forced to move to systemd by eg GNOME depending on it, these dependencies certainly seem to have come about because GNOME found that relying on and requiring systemd services was the best solution to their problems.
It’s certainly possible to posit that there is a widespread behind-the-scenes political effort both to drive systemd into distributions and to drive systemd dependencies into important desktop software like GNOME, to force the hand of reluctant distributions, despite many of the decisions nominally being made in the open and at least some distributions giving their collective developers veto power over such decisions (via e.g. Debian GRs). But it is interesting and even striking to me that no strong evidence of such a political effort has ever emerged, despite the large number of people who would have to be involved, how juicy such evidence would be, and how many people there are who do not like systemd and so would very much like such evidence. You would think that someone would have leaked internal GNOME discussions or written a disgruntled insider blog post or whatever by now.
(My overall view is that systemd won the init system wars for good reasons. The comments there contain a great deal of discussion and additional points, including some from a person who was at Red Hat at the time.)
Like most conspiracies, systemd’s was highly banal and fairly obvious. Red Hat and its proxies pretended that the discussion was between debian’s poor implementation of sysvinit and systemd-as-an-init, and rushed a vote. A sustained marketing campaign, combined with an obviously concerted effort by Red Hat, along with their outsized representation in the debian “developer” voting community, caused systemd to win. From there, systemd metastasized, dropped the facade of being an init system, and quickly branched out to encompass every feature that Red Hat wants to own.
All of that is super well documented, and it didn’t need secret meetings. It was just the self-interest of a corporation that essentially owns Linux now, combined with their employees' and proxies' natural and correct incentives. Was it shady? Probably – most of the core people knew that their public intent and their private intents were in no way similar. But all they needed to do was to convince the desktop development people, who had been wandering in the desert for 20 years, that their way was a better way, and then convince enough server people that the desktop people had a good thing going, and hey presto – a terrible, tasteless, non-composing, incompetently executed operating system glued to the side of the linux kernel. hooray.
Would be nice if there was a collection of links to Debian’s decision making etc, but it actually doesn’t matter. Having disparate systems has been hell to maintain, so all it takes is a market leader to show the way and chances are they have enough followers to make the change.
Maybe it’s time to go Devuan when Squeeze is released.