1. 3

    I’m currently using a Kyria as my daily driver, and I really like it. Before that, I was using a Unikeyboard Diverge 3, which is quite nice as well, although the stagger and the thumb cluster are less nice than on the Kyria.

    1. 7

      I like that this is an ocaml advocacy post in disguise :-).

      1. 13

        I read it but there is something I am still missing.

        So say I have 10,000 notes. I write a new one. How on earth could I link it with others that I’ve forgotten about? Surely I cannot re-read all of 10,000 to see which ones I should link together. I do not get this part of it.

        1. 11

          Incidentally, I just stumbled upon this Zettelkasten-like note: Notes should surprise you:

          If reading and writing notes doesn’t lead to surprises, what’s the point?

          If we just wanted to remember things, we have spaced repetition for that. If we just wanted to understand a particular idea thoroughly in some local context, we wouldn’t bother maintaining a system of notes over time.

          This is why we have dense networks of links (Evergreen notes should be densely linked): so that searches help us see unexpected connections.

          This is why we take concept-oriented notes (Evergreen notes should be concept-oriented): so that when writing about an idea that seems new, we stumble onto what we’ve already written about it (perhaps unexpectedly).

          The linked note inside starts with this:

          It’s best to factor Evergreen notes by concept (rather than by author, book, event, project, topic, etc). This way, you discover connections across books and domains as you update and link to the note over time (Evergreen notes should be densely linked).

          So one aspect of the answer suggested here: by writing notes about general concepts, you provoke revisiting them. Through heavy linking you create connections between related ideas. Through further maintenance, such as overview notes, transitive connections become direct ones and thus closer. (As a self-referential example: this reminds me of union-find and how it merges disjoint sets.)
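          The union-find analogy can be made concrete. A minimal sketch (note IDs are hypothetical) of how merging link clusters makes previously distant ideas share one cluster:

```python
class UnionFind:
    """Disjoint-set forest with path compression: merging two note
    clusters makes every member reachable via one representative."""
    def __init__(self):
        self.parent = {}

    def find(self, x):
        self.parent.setdefault(x, x)
        if self.parent[x] != x:
            self.parent[x] = self.find(self.parent[x])  # path compression
        return self.parent[x]

    def union(self, a, b):
        self.parent[self.find(a)] = self.find(b)

notes = UnionFind()
notes.union("note-12", "note-47")   # link two notes
notes.union("note-47", "note-301")  # transitively connects note-12 and note-301
print(notes.find("note-12") == notes.find("note-301"))  # True
```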

          1. 5

            How does one find “related notes”?

            I believe the answer lies here:

            Luhmann described his Zettelkasten in different ways. Sometimes he called it a conversation partner and sometimes he described it as a second memory, cybernetic system, a ruminant, or septic tank.

            You actually just keep a big chunk of the notes in your memory. You don’t memorize their literal content, but you know of their existence, so you know approximately where to look. I would say that the Zettelkasten was a tool to enhance his own memory rather than a tool to do the remembering for him.

            The secret to keeping the existence of the largest part of 90,000 notes in memory is probably revisiting them regularly, by following links and browsing at random, both of which the Zettelkasten invites you to do. Luhmann’s notes were all centered on his single general focus area, philosophy and social science, so he was thinking about the full body of work all the time.

            I think the brain is better at this sort of thing than we generally acknowledge. I actually believe that this may be an argument for using an analogue Zettelkasten. You come up with an idea and your brain hints that you have thought about something related before, you just don’t know when and what exactly. You might vaguely remember a particularity of a card though, maybe a tear, a stain or even the smell. So you go browsing, and soon enough you run into a connected idea, which leads you to the subweb of existing ideas that are relevant to the new idea. This rummaging around has the additional benefit of refreshing your memory of existing notes.

            The efficiency at such a large scale is probably largely determined by your ability to leverage the hints your unconscious gives you, which I think is why Luhmann describes his Zettelkasten in varying, not very rational or strict ways, in an effort to capture this unconscious process.

            1. 4

              My interpretation is that you would have meta-notes that serve as an index for notes about a given topic. (I do that and it’s very useful.) So you might not remember the existing 10,000 notes, but surely you can relate your new note to some topics you’ve already touched on, and then you can go look at the index for those, and follow the links that look relevant.

              Then, I guess one assumption is that, while exploring the existing web of notes to connect it to your new note, one should dedicate a bit of time and make some new connections if they feel relevant, i.e. do some general “maintenance” of the system.

              1. 2

                If you completely forgot about something, then you have no trigger to even look for something to link to. However, if you have a vague memory of something, a system of indices can be helpful to refresh your memories.

              2. 3

                It’s about destruction and creation. You deconstruct something to use its parts for creating something new.

                1. 2

                  This is exactly the question none of my reading has answered. I haven’t read “How to Take Smart Notes”, but I’ve otherwise read a lot, and nothing seems to address that question.

                  I have my own ideas, but I’d love to see how other people solve this.

                  1. 7

                    Yeah I’m starting to think that “do X the smart way” is mostly BS for most values of X.

                    The more I go on the more it seems to me that the system is not important as long as you’re consistent with its use (whether it’s note taking or organising or anything else).

                    Also, the most productive people I’ve seen… They don’t use fancy things that make them “super productive”. They just sit down and do the work, and they got good and fast at it by means of practice.

                    The more I go on with life the stronger I feel the smell of BS.

                    1. 4

                      I gave it some thought and this is what I have:

                      The linking problem is not solved. If it were solved, there would be no need for adding links in the first place. Why add links at all if you can easily find other notes related to the current one? So the fact that links exist leads me to the conclusion that linking is the actual hard part of this method.

                      Second - the whole approach is a hyperlink database structure from before databases became a thing. This system can be implemented in a single table with 3 columns: 1) ID, 2) Note, 3) References. We have software now, so maintaining a database in a piece of furniture is probably obsolete.
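                      That structure is indeed trivial to sketch with today’s tools. A minimal, hypothetical example using Python’s stdlib SQLite bindings, with the References column normalized into a second table so links can be queried in both directions:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE zettel (id INTEGER PRIMARY KEY, note TEXT)")
# The "References" column as its own table, so links are queryable both ways.
con.execute("CREATE TABLE refs (src INTEGER, dst INTEGER)")

con.execute("INSERT INTO zettel VALUES (1, 'Ideas decay without links')")
con.execute("INSERT INTO zettel VALUES (2, 'Links are the hard part')")
con.execute("INSERT INTO refs VALUES (2, 1)")  # note 2 references note 1

# Outgoing links of note 2:
rows = con.execute(
    "SELECT z.note FROM refs r JOIN zettel z ON z.id = r.dst WHERE r.src = 2"
).fetchall()
print(rows)  # [('Ideas decay without links',)]
```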

                      But most importantly, like the article said, Zettelkasten becomes better the more notes are added to it. OK. So the logical conclusion is to invite other people to add their notes to it. Then it grows faster. Everyone is adding their ideas and creating links between ideas. To me it seems such a state is the ultimate goal of this system. But we have this now; it’s called the internet.

                      So I think whatever made that one German scholar so productive probably wasn’t the system itself.

                      1. 1

                        You say that as if the internet isn’t a hugely transformative, productivity-increasing tool.

                        1. 1

                          I would go as far as to say that for many people it’s the opposite. It can be used as a productivity-increasing tool, and an amazing one at that. But many people use it for completely different things.

                      2. 1

                        I think the theory is that if you’re taking a note, you’ve probably already got a bunch of notes about the thing you’re researching handy, so you link to those. For my part that doesn’t match how I generally take notes (I often find myself making note of something I stumble on, and writing summaries/self-tutorials of stuff I’m studying to ‘learn it’ more thoroughly).

                        That said, the Zettelkasten core idea seems to me to be ‘smaller notes, more often’, and also to follow the old adage: “Graphs are a set of Edges that incidentally have some Vertices attached.” Links and granularity are the key takeaways, and there are good ways to do that that aren’t exactly Zettelkasten.

                        1. 1

                          I’m currently trying TiddlyWiki because its UI encourages smaller notes.

                          With a physical Zettelkasten you implicitly see neighboring slips as you search for a linked one. This is lost in a digital version, where searching is delegated to the computer and is effectively instant. It would be easy to track all kinds of implicit connections (backlinks, notes created before and after), but how do you present that in a helpful way?
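                          Computing such implicit connections is the easy part; backlinks, for instance, fall out of the forward links in a few lines (note names here are hypothetical):

```python
from collections import defaultdict

links = {            # forward links: note -> notes it references
    "exercise": ["routines", "sleep"],
    "routines": ["sleep"],
}

backlinks = defaultdict(list)
for src, dsts in links.items():
    for dst in dsts:
        backlinks[dst].append(src)  # invert each edge

print(backlinks["sleep"])  # ['exercise', 'routines']
```

The open question, as noted, is presentation rather than computation.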

                      3. 2

                        I don’t think the system is supposed to answer that question. Connecting ideas is the job of the human, not the system. However, if you do connect some ideas, you don’t have to remember that connection. Your web of thinking is externalized.

                        I’m sure there are strategies. Randomly showing any two cards seems as good as any strategy. This would only increase the probability (marginally) that two ideas get connected. It would still be up to you to figure out how they are connected.
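                        The random-pairing strategy, for what it’s worth, is nearly a one-liner (note names hypothetical):

```python
import random

notes = ["note-%d" % i for i in range(1, 10001)]
a, b = random.sample(notes, 2)  # two distinct cards, chosen uniformly
print("Any connection between %s and %s?" % (a, b))
```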

                        1. 4

                          But to create the connection you have to know that the connection is possible, so you have to be aware of the other cards to which you can connect this one. That’s the essence of what a zettelkasten is supposed to be, and what no one is explaining.

                          Here, I’ve read something, I’ve written a short, pithy note. Where do I file it? To what other cards is it connected? How do I find them?

                          I have some ideas, but it’s supposed to be a solved problem via this method, and I’m seeing zero discussion of it in any of my reading.

                          There’s the puzzle.

                          1. 4

                            I think of zettelkasten a bit differently. In database terminology, I see it as normal form applied to ideas. Ideas are often tightly coupled to their original context. This is why it’s natural to apply a hierarchical structure to notes. You just place the note under a folder which represents the original context. In zettelkasten you make the idea atomic and if you want to give it context then you have to reify that context with a link (a foreign key). Zettelkasten isn’t your schema, zettelkasten is relational algebra. You’re free to come up with any schema you want.

                            Where do I file it?

                            In the same folder with everything else. It’s flat. The structure is provided by the links.

                            How do I find them?

                            I use org-roam in emacs. A digital system helps with search. The original Zettelkasten had physical organization by topic, and you could traverse the links from there. I’ve only been doing this for a few weeks, so search might not scale.

                            Btw, I think your questions are great ones. All I’m saying is I don’t think the system has an answer.

                            1. 2

                              Thanks for the reply … I pick up on this:

                              How do I find them?

                              I use org-roam in emacs. A digital system helps with search.

                              I agree that search solves/avoids a lot of the problems that a purely analog system would/did have.

                              But the whole point of a zettelkasten is that it helps with the search, either to avoid it, or to guide it, or to augment it. Luhmann described “having a conversation” with his zettelkasten.

                              So I’m here, I have a new “card” … I can search the existing ZK for cards that have the same words, but that suffers the problem of combinatorial explosion, and it’s not using the zettelkasten in any kind of clever way.

                              I think there are answers to be found, and perhaps Luhmann had some, but even his enthusiasts don’t really “get it” beyond having a huge box of index cards with some sort of indexing system. It feels from my reading that Luhmann had more than that. Otherwise it’s just a wiki with search.

                              As you may be able to tell, I’ve thought about this a lot.

                              1. 1

                                I think the idea is to never make a note without linking to it from the context of an existing note. So you might have a note on the best time to exercise. That note should be referenced from an index card on exercise, or maybe a note on daily routines.

                                I think there is some hope it’ll be like a wiki, but maybe something more like tvtropes or c2 than Wikipedia, where all the ideas are enmeshed and easy to move between.

                            2. 3

                              You connect nodes you know of. Since nodes are connected to nodes, this lets you wander the node links. No node contains every link, because that’s not their job. Their job is to link to whatever you still remember so you can later link it to what you’ve long forgotten.

                              1. 2

                                I have some ideas, but it’s supposed to be a solved problem via this method, and I’m seeing zero discussion of it in any of my reading.

                                How to Take Smart Notes is worth a read, IMO, though it is still kind of nebulous on some points.

                                There’s an “Everything You Need to Do” section. It says:

                                Now add your new permanent notes to the slip-box by:

                                a) Filing each one behind one or more related notes (with a program, you can put one note “behind” multiple notes; if you use pen and paper like Luhmann, you have to decide where it fits best and add manual links to the other notes). Look to which note the new one directly relates or, if it does not relate directly to any other note yet, just file it behind the last one.

                                b) Adding links to related notes

                                c) Making sure you will be able to find this note later by either linking to it from your index or by making a link to it on a note that you use as an entry point to a discussion or topic and is itself linked to the index.

                                Presumably, in order to find things to link if you don’t have them ready to hand, you use the existing index.

                                1. 1

                                  The magic seems to lie here:

                                  a) Filing each one behind one or more related notes … b) Adding links to related notes

                                  This is the question no material or article seems to be answering: How does one find “related notes”? It refers to “the index”, but that’s rarely referred to elsewhere and seems utterly mysterious. How are things indexed?

                                  I should write up my musings as best I can to further reveal my confusion and incomprehension. I’ll try to do that.

                                  1. 1

                                    So, full disclosure, I’ve taken notes on cards before, but at the time I gave them either meaningful names or datestamps rather than Luhmann-style sequential identifiers or what-have-you, so I just kept them in a card box sorted by alpha / date. The collection never grew substantial enough for me to think about other forms of indexing. The cards I’ve held onto are in a single cardboard file box with little alphabet dividers I bought at an office supply store.

                                    That said, indexes for paper information storage are a pretty well-established technology.

                                    If I were going to take a crack at it for a paper Zettelkasten, I would:

                                    • Set up a large card box with alphabet dividers.
                                    • When adding a note to my permanent collection:
                                      • Search the index for keywords pertaining to the new note
                                      • If no card exists in the index for the keyword (name, phrase, concept, etc.) I want to be able to track down again:
                                        • Print the keyword on top of a fresh card
                                      • Write down the ID of the related note on any relevant cards (either existing ones or the ones I’ve just created)
                                      • File the cards with keywords alphabetically in my index
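                                    The card-box procedure above translates almost line for line into a toy keyword index; a hypothetical digital sketch:

```python
index = {}  # keyword -> list of note IDs; the digital analogue of the card box

def file_note(note_id, keywords):
    """Register a new permanent note under each of its keywords."""
    for kw in keywords:
        index.setdefault(kw.lower(), []).append(note_id)

def look_up(keyword):
    """Search the index before filing, as the procedure suggests."""
    return index.get(keyword.lower(), [])

file_note("3a1", ["memory", "spaced repetition"])
file_note("3a2", ["memory"])
print(look_up("Memory"))  # ['3a1', '3a2']
```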

                                    Here’s an abbreviated example from a random page of the index in the first reference volume I could find on my office bookshelf, The Chicago Manual of Style:

                                    ornaments for text break, 1.56 
                                    orphans (lines), 2.113, p.899
                                    o.s. (old series), 14.132
                                        basic principles, 6.121
                                        parts of a book, 1.4
                                        publishing process, 2.2, fig. 2.1, fig. 2.2
                                        punctuation and format, 6.94, 6.126
                                        See also: lists
                                    Oxford comma, 6.18-21. See also commas.

                                    This is a ~900 page reference work with fairly dense text, and about a hundred pages of index, so that ought to give you some very rough idea what ratio is useful for a working reference system, and how far an alphabetized index can scale. The references are mostly to section numbers or figures rather than pages, which seems like a pretty useful parallel to how things could work with numbered cards.

                                    The other thing you might want to research is library card catalog approaches. The paper card catalog systems I was taught at length in elementary school have all pretty well been obliterated by electronic databases by now, but there was once a range of well-developed techniques there for indexing into very large collections by author and subject matter.

                                    There’s nothing to stop anyone from translating these techniques directly to software, though there are probably more automated ways to get most of the same benefits in any given system (e.g., tagging systems, automatic keyword indexing, and good old grep).

                            3. 1

                              I’m thinking of trying this out and my idea was to try some sort of transitive closure view and “show me five random notes” thingymajiggy.
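                              Both the transitive-closure view and the “five random notes” thingymajiggy are easy to prototype; a hypothetical sketch with made-up note names:

```python
import random
from collections import deque

# Hypothetical mini-graph of note links.
links = {"a": ["b"], "b": ["c"], "c": [], "d": ["a"], "e": []}

def reachable(start):
    """All notes reachable from `start` by following links (BFS)."""
    seen, queue = {start}, deque([start])
    while queue:
        for nxt in links.get(queue.popleft(), []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

def random_notes(k=5):
    """'Show me five random notes': sample without replacement."""
    return random.sample(sorted(links), min(k, len(links)))

print(sorted(reachable("d")))  # ['a', 'b', 'c', 'd']
```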

                              1. 1

                                How on earth could I link it with others that I’ve forgotten about? Surely I cannot re-read all of 10,000 to see which ones I should link together. I do not get this part of it.

                                Practically speaking:

                                • in a paper system, the answer is probably an index and a sorted reference collection
                                • in an electronic system, the answer is some combination of search and tagging

                                Relatedly, the cards (or other unit of note-taking) aren’t intended to be an append-only log. You’re supposed to interact with the system and refine the web of connections as you go, so it may not matter if something is initially orphaned.

                              1. 5

                                Wayland has a protocol? I thought Wayland was supposed to not be like X windows … or am I missing something here?

                                1. 7

                                  Wayland is a protocol (and a C implementation of that protocol).

                                  But Wayland does not have network transparency - the rendering is done directly by the client.

                                  1. 6

                                    Wayland IS a protocol. The protocol defines how clients (typically applications that have a buffer of pixels they want to display, and that want to listen for input events) communicate with the server, aka the compositor, which manages the hardware (e.g. output and input devices), displays applications’ buffers where appropriate on the screen, and delivers events to the appropriate application.

                                    One difference from the X architecture is that there is no separate universal X server; instead, the compositor combines the roles of the X server and the window manager. Reusable components for handling the low-level details of the wire protocol or hardware devices exist as separate libraries (for example, libinput, xkbcommon, libwayland), on top of which different compositors can be built.

                                    The Wikipedia page looks like a good read for more general information of that kind.

                                    1. 4

                                      To be a little more specific than just “Wayland is a protocol”, let’s quote https://git.sr.ht/~sircmpwn/wayland-book/tree/master/src/protocol-design/wire-protocol.md#transports:

                                      To date all known Wayland implementations work over a Unix domain socket. This is used for one reason in particular: file descriptor messages. Unix sockets are the most practical transport capable of transferring file descriptors between processes, and this is necessary for large data transfers (keymaps, pixel buffers, and clipboard contents being the main use-cases). In theory, a different transport (e.g. TCP) is possible, but someone would have to figure out an alternative way of transferring bulk data.
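                                      As a concrete illustration of how small the wire format is: to my understanding of the protocol spec, each message carries an 8-byte header, the sender’s object ID followed by a word packing the message size (high 16 bits) and opcode (low 16 bits), in host byte order. A sketch of packing and unpacking such a header:

```python
import struct

def pack_header(object_id, opcode, size):
    """Wayland message header: object ID word, then (size << 16) | opcode."""
    return struct.pack("=II", object_id, (size << 16) | opcode)

def unpack_header(data):
    object_id, word = struct.unpack("=II", data[:8])
    return object_id, word & 0xFFFF, word >> 16  # id, opcode, size

# wl_display (object 1) . get_registry(new_id): opcode 1, 12 bytes total
hdr = pack_header(1, 1, 12)
print(unpack_header(hdr))  # (1, 1, 12)
```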

                                      1. 3

                                        “Protocol” as in “interface”, a set of operations, resources and rules. Not like a network protocol where the interface is designed to map to packets on the wire.

                                        1. 2

                                          It is pretty much a network protocol. For communication between a Wayland client and server, messages are sent over a UNIX domain socket (for reasons that @notriddle cited). The C API just abstracts over that.

                                          1. 2

                                              …I actually just learned this due to /u/notriddle’s post. Dang.

                                      1. 6

                                        Playing with Wayland and wlroots more, recreating the tinywl example in Rust. Slowly feeling like I’m getting the hang of it, I think. Wayland is intensely C-ish code though, so I occasionally feel like rustc gives me vaguely incredulous looks every time I use an intrusive linked list or conjure a pointer from a struct field into one pointing at the struct containing it. Just giving me big sad puppy-dog eyes and asking “but… but don’t you WANT me to pass this struct in registers?”

                                        Also trying to crank up that whole self-care thing a bit more, since I’ve been feeling kinda crap the last week. More running, more sleep, more playing video games with boyfriend.

                                        1. 1

                                                  That’s pretty cool! Incidentally, I’m in the process of doing the same, but in OCaml. I think it’s really interesting to explore how the C-isms can be expressed differently, in a more idiomatic way, in higher-level languages. For instance, so far it seems I can replace the use of container_of in event handlers by keeping a pointer to the data in the handler’s function closure instead, which feels a lot more natural.

                                                  I still have to finish translating tinywl, and I’m still following the general structure of the C code quite closely, but later on I hope that this will give me ideas for coming up with more “functional” ways of writing these kinds of compositors.

                                          1. 2

                                            That IS pretty cool, I wouldn’t have thought of doing it in OCaml. I think I would spend a lot of time worrying about whether the GC will suddenly move something that Wayland or wlroots has stored a pointer to.

                                            That’s a very nice solution to container_of, I have to admit; in Rust I am not sure you can make a closure that’s callable from C, so I don’t think that would work for me. What I can do though is make a type that bundles together the wl_listener and some arbitrary data in a type-safe way. Whether this will turn into a morass of parent pointers and Rc‘s though, I don’t know yet. Like you, I’m just doing a literal translation for now and will try to smooth things out once it works.

                                        1. 50

                                          Honestly I think that suckless page is a terrible criticism of systemd. It’s the kind of rantings that are easy to dismiss.

                                          A much better – and shorter – criticism of systemd is that for most people, it does a lot of stuff they just don’t need, adding to a lot of complexity. As a user, I use systemd, runit, and OpenRC, and I barely notice the difference: they all work well. Except when something goes wrong, in which case it’s so much harder to figure out systemd than runit or OpenRC.

                                          Things like “systemd does UNIX nice” are rather unimportant details.

                                          I’m a big suckless fan, but this is not suckless at their best.

                                          1. 11

                                            A much better – and shorter – criticism of systemd is that for most people, it does a lot of stuff they just don’t need, adding to a lot of complexity.

                                            How many things does the Linux kernel support that you don’t use or need, and how many lines of code in the kernel exist to support those things?

                                            1. 3

                                                          If we’re going there, we might as well mention that Linux supports a whole freaking lot of hardware I don’t need. The drivers for all that hardware are most probably the biggest source of complexity in the kernel. Solving that alone would unlock many other things, but unfortunately, with the exception of CPUs, the interface to hardware isn’t an ISA, it’s an API.

                                              1. 5

                                                If we’re going there, we might as well mention that Linux supports a whole freaking lot of hardware I don’t need.

                                                While, simultaneously, not supporting all the hardware that you want.

                                                            I think it’s a good example of how the Linux model is culturally inclined to build monolithic software blocks.

                                                1. 7

                                                  While, simultaneously, not supporting all the hardware that you want.

                                                  Ah, that old bias:

                                                  • Hardware does not work on Windows? It’s the hardware vendor’s fault.
                                                  • Hardware does not work on Linux? It’s Linux’s fault.

                                                              We could say the problem is Linux having a small market share. I think the deeper problem is the unbelievable, and now more and more unjustified, diversity in hardware interfaces. We should by now be able to specify sane, unified, yet efficient hardware interfaces for pretty much anything. We’ve done it for mice and keyboards; we can generalise. Even graphics cards, which are likely the hardest to deal with because of their unique performance constraints, are becoming uniform enough that standardising a hardware interface makes sense now.

                                                              Imagine the result: one CPU ISA (x86-64, though far from ideal, is currently it), one graphics card ISA, one sound card ISA, one hard drive ISA, one webcam ISA… You get the idea. Do that, and suddenly writing an OS from scratch is easy instead of utterly intractable. Games could take over the hardware. Hypervisors would no longer be limited to bog-standard server racks. Performance wouldn’t be wasted on a humongous pile of subsystems most single applications don’t need. Programs could sit on reliable bedrock again (expensive recalls made hardware vendors better at testing their stuff).

                                                              But first, we need hardware vendors to actually come up with a reasonable and open hardware interface. Just give us buffers to write to, and a specification of the data format for those buffers. It should be no harder than writing an OpenGL driver with God knows how many game-specific fixes.

                                                  1. 8

                                                                Nah, that’s not what I’m implying. It’s not Linux’s fault, but it’s still a major practical sore point from a user’s perspective. I’m well aware that this is mainly the hardware vendors’ fault in all cases.

                                                                Also, it should be noted that Linux kernel development is in huge part driven by exactly those vendors, so even if it were Linux’s fault, there’s a substantial overlap.

                                                                It’s still amazing how much hardware is supported in the kernel, at very varying quality, with a commitment to maintain it.

                                                    1. 3

                                                                  It’s still amazing how much hardware is supported in the kernel, at very varying quality, with a commitment to maintain it.

                                                                  One thing that amused me recently was the addition of SGI Octane support in Linux 5.5, hardware that has been basically extinct for two decades and was never particularly popular to begin with. But the quixotism of this is oddly endearing.

                                                      1. 2

                                                                    was never particularly popular to begin with

                                                                    Hey, popular isn’t always the best metric. SGI’s systems were used by smart folks to produce a lot of interesting stuff. Their graphics and NUMA architecture were forward-thinking, and I still want their NUMAlink on the cheap. The Octane has been behind a lot of movies. I think the plane scene in Fight Club was SGI, too. My favorite was SGI’s Onyx2 being used for Final Fantasy, given how visually groundbreaking it was at the time. First time I saw someone mistake a CG guy for a real person.

                                                2. 2

                                                                Device drivers are the most modular part of the kernel. Don’t compile them if you don’t want them.

                                                  1. 1

                                                    True, but (i) pick & choose isn’t really the default, and (ii) implementing all those drivers is a mandatory, unavoidable part of writing an OS.

                                                    I don’t really care that the drivers are there, actually. My real problem is the fact that they need to be there. There’s no way to trim that fat without collaboration from hardware vendors.

                                                    1. 1

                                                      Well, mainstream distribution kernels are still built modularly, and the device driver modules are only loaded if you actually have hardware that needs them, at least as far as I understand it.

                                                      I don’t really care that the drivers are there, actually. My real problem is the fact that they need to be there. There’s no way to trim that fat without collaboration from hardware vendors.

                                                      Yeah that is a big PITA. It’s getting worse, too. It used to be that every mouse would work with basically one mouse driver. Now you need special drivers for every mouse because they all have a pile of proprietary interfaces for specifying LED colours and colour patterns, different special keys, etc.

                                                3. 3

                                                  a lot, and it’s also a criticism of linux. but sometimes people must use linux, and now sometimes people must use systemd.

                                                  linux’s extra features are also much better modularized and can be left out, unlike systemd’s.

                                                  1. 3

                                                    linux’s extra features are also much better modularized and can be left out, unlike systemd’s.

                                                    But they can. The linked article describes that many of the features that people wrongly claim PID1 now does are just modules. For example you don’t have to use systemd-timesyncd, but you can and it works way better on the desktop than the regular server-grade NTP implementations.

                                                    1. 2

                                                      I’m sorry but how does syncing time every once in a while get much improved by systemd-timesyncd? NTP is like the least of my worries.

                                                      1. 2

                                                        Somehow my computer was insisting on being 2 minutes off, and even if I synced manually and wrote to my BIOS RTC clock, ntpd and chrony were insisting on messing it up (and then possibly giving up, since the jump was 2 minutes). Both these daemons feel like they aren’t a good match for a system that’s not on 24/7.

                                                        1. 2

                                                          sounds like a configuration issue and nothing to do with the program itself. what distro did you use ntpd and chrony with? what distro are you using systemd-timesyncd with?

                                                          by default, void linux starts ntpd with the -g option which allows the first time adjustment to be big.

                                                4. 7

                                                  And there are no real alternatives to a full system layer. I like runit and openrc and I use them both (on my Void laptop and Gentoo desktop). When I use Debian or Ubuntu at work, for the most part I don’t have to worry about systemd, until I try to remember how to pull up a startup log.

                                                  systemctl/journalctl are poorly designed and I often feel like I’m fighting them to get the information I really need. I really just prefer a regular syslog + logrotate.

                                                  It’d be different if D-Bus had distinct role endpoints, so you could assign a daemon to fulfill all network role messages and people could use NetworkManager or systemd-networkd or …; same with systemd being just another xinetd-type provider, with everything getting funneled through a communication layer.

                                                  Systemd is everything, and when you start going down that route, it’s like AWS really. You get locked in and you can’t easily get out.

                                                  1. 5

                                                    As a note to those reading: there are murmurs of creating a slimmed-down systemd standard. I think it’d satisfy everyone. Look around and you’ll find the discussions.

                                                    1. 4

                                                      I can’t really find anything about that at a moment’s notice other than this Rust rewrite; is that what you mean?

                                                      Personally, I think a lot of fundamental design decisions of systemd make it complex (e.g. unit files are fundamentally a lot more complex than the shell script approach in runit), and I’m not sure how much a “systemd-light” would be an improvement.

                                                      1. 17

                                                        As someone who just writes very basic unit files (for running IRC bots, etc.), I find them a lot simpler than shell scripts. Everything is handled for me, including automatically restarting the thing after a timeout, logging, etc., without having to write shell scripts with all the associated bugs and shortcomings.
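
                                                        For what it’s worth, such a unit is about ten declarative lines (a sketch with made-up names and paths, not my actual unit):

                                                        ```ini
                                                        [Unit]
                                                        Description=IRC bot
                                                        After=network-online.target

                                                        [Service]
                                                        ExecStart=/usr/local/bin/ircbot
                                                        User=ircbot
                                                        # the restart-after-a-timeout part
                                                        Restart=on-failure
                                                        RestartSec=10

                                                        [Install]
                                                        WantedBy=multi-user.target
                                                        ```

                                                        stdout/stderr of the bot land in the journal automatically, which covers the logging part.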

                                                        1. 12

                                                          Have you used runit? That does all of that as well. Don’t mistake “shell script approach” with “the SysV init system approach”. They both use shell scripts, but are fundamentally different in almost every other respect (in quite a few ways, runit is more similar to systemd than it is to SysV init).

                                                          As a simple example, here is the entire sshd script:

                                                          ssh-keygen -A >/dev/null 2>&1 # Will generate host keys if they don't already exist
                                                          [ -r conf ] && . ./conf
                                                          exec /usr/bin/sshd -D $OPTS

                                                          For your IRC bot, it would just be something like exec chpst -u user:group ircbot. Personally, I think it’s a lot easier than parsing and interpreting unit files (and more importantly, a lot easier to debug once things go wrong).

                                                          My aim here isn’t necessarily to convince anyone to use runit btw, just want to explain there are alternative approaches that bring many of the advantages that systemd gives, without all the complexity.

                                                          1. 2

                                                            I have never tried it. But then, if it’s a top-level command, not even in functions, how can you specify dependencies, restart after timeout, etc.? It seems suspiciously too simple :-)

                                                            1. 3

                                                              Most of the time I don’t bother with specifying dependencies, because if it fails then it will just try again and modern systems are so fast that it rarely fails in the first place.

                                                              But you can just wait for a service:

                                                              sv check dhcpcd || (sleep 5; exit 1)
                                                              sv check wpa_supplicant || (sleep 5; exit 1)
                                                              exec what_i_want_to_run

                                                              It also exposes some interfaces via a supervise directory, where you can read the status, write to change the status, roughly similar to /proc. This provides a convenient platform-agnostic API in case you need to do advanced stuff or want to write your own tooling.

                                                              1. 25

                                                                No offense, but this snippet alone convinces me that I’m better off using systemd’s declarative unit files (as I am doing currently, for similar uses to @c-cube’s). I’ve never been comfortable with the shell semantics generally speaking, and overall this feels rather fiddly and hackish. I’d rather just not have to think about it, and have systemd (or anything similar) do it for me.

                                                                1. 5

                                                                  Well, the problem with unit files is that you have to rely on a huge parser and interpreter from systemd to do what you want, which is hugely opaque, has a unique syntax, etc. The documentation for just systemd.unit(5) is almost 7,000 words. I don’t see how you can “not have to think” about it?

                                                                  Whereas composition from small tools in a shell script is very transparent, easy to debug, and in essence much easier to use. I don’t know what’s “fiddly and hackish” about it? What does “hackish” even mean in this context? What exactly is “fiddly”?

                                                                  Like I said before, systemd works great when it works; the problem is when it doesn’t. I’ve never been able to debug systemd issues without the help of The Internet, because it requires quite specific and deep knowledge, and you can never really be certain if the behaviour is a bug or an error on your part.

                                                                  1. 13

                                                                    Well, the problem with unit files is that you have to rely on a huge parser and interpreter from systemd to do what you want, which is hugely opaque, has a unique syntax, etc. The documentation for just systemd.unit(5) is almost 7,000 words. I don’t see how you can “not have to think” about it?

                                                                    To me it seems a bit weird to complain about the unit file parser but then just let the oddly unique and terrible Unix shell syntax just get a free pass. If I were to pick which is easier to parse, my money would be on unit files.

                                                                    Plus, each shell command has its own flags and some have rather intricate internal DSLs (find, dd or jq come to mind).

                                                                    1. 2

                                                                      The thing with shell scripts is that they’re a “universal” tool. I’d much rather learn one universal tool well instead of many superficially.

                                                                      I agree shell scripts aren’t perfect; I’m not sure what (mature) alternatives there are? People have talked about Oil here a few times, so perhaps that’s an option.

                                                                      1. 10

                                                                        The thing with shell scripts is that they’re a “universal” tool.

                                                                        But why do you even want that in an init system? The task is to launch processes, which is fairly mundane apart from having lots of rough edges. With shell scripts you end up reinventing half of it badly and hand-waving away the issues that remain, because a nicer solution in shell would be thousands of lines and not readable at all.

                                                                        I would actually like my tools to be less Turing-complete and give me more things I can reason about. With unit files it is easier to reason about them and see that they are correct (since the majority of the functionality is implemented in the process launcher, and if bugs there are fixed, that fixes them for all unit files).

                                                                        I actually don’t get the sudden hate for configuration files, since sendmail, postfix, Apache, etc all have their configuration formats instead of launching scripts to handle HTTP, SMTP and whatnot. The only software I have in recent memory that you configure with code is xmonad.

                                                                        1. 1

                                                                          I wrote a somewhat lengthy reply to this this morning, but then my laptop ran out of battery (I’m stupid and forgot to plug it in) so I lost it :-(

                                                                          Briefly: to be honest, I think you’re thinking too much about SysV init-style shell scripts. In systems like runit/daemontools, you rarely implement logic in shell scripts. In practice the shell scripts tend to be just a one-liner which runs a program. Almost all of the details are handled by runit, not the shell script, just like with systemd.

                                                                          In runit, launching an external process – which doesn’t even need to be a shell script per se, but can be anything – is just a way to have some barrier/separation of concerns. It’s interesting you mention postfix, because that’s actually quite similar in how it calls a lot of external programs which you can replace with $anything (and in some complex setups, I have actually replaced this with some simple shell scripts!)

                                                                          I agree the SysV init system sucked for pretty much the same reasons as you said, and would generally prefer systemd over that, but runit is fundamentally different in almost every conceivable way.

                                                                          1. 1

                                                                            Is runit supported by any mainstream distro?

                                                                            1. 1

                                                                              Void uses it by default; I’ve also used it on Alpine Linux and Arch Linux where I just have OpenRC or systemd start runit and then use that for most things.

                                                                        2. 6

                                                                          This is hardly unique to systemd unit files, though.

                                                                          /etc/fstab is a good example of something old. There’s nothing stopping it from being a shell script with a bunch of mount commands. Instead, it has its own file format that’s been ad-hoc extended multiple times, its own weird way of escaping spaces and tabs in filenames (I had to open the manpage to find this; it’s \040 for space and \011 for tab), and a bunch of things don’t wind up using it for various good reasons (you can’t use /etc/fstab to mount /etc, obviously).

                                                                          But the advantage? Since it doesn’t have things like variables and control flow, it’s easy to modify automatically, and basic parsing gives you plenty of useful information. You want to mount a bunch of different filesystems concurrently? Go ahead; there’s nothing stopping you (which is, of course, why systemd replaced all those shell scripts while leaving fstab as-is).

                                                                          In other words: banal argument in favour of declarative file formats instead of scripts.
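
                                                                          To illustrate the “basic parsing gives you plenty of useful information” point (the entries below are made up), one awk invocation is enough to recover structure from fstab, precisely because nothing needs to be executed:

                                                                          ```shell
                                                                          # Two hypothetical fstab entries.
                                                                          cat > /tmp/fstab.sample <<'EOF'
                                                                          # <device>     <mountpoint> <type> <options> <dump> <pass>
                                                                          UUID=abcd-1234 /            ext4   defaults  0      1
                                                                          /dev/sdb1      /data        xfs    noatime   0      2
                                                                          EOF
                                                                          # Skip comments and blank lines; print mount point and filesystem type.
                                                                          awk '!/^#/ && NF { print $2, $3 }' /tmp/fstab.sample
                                                                          # prints "/ ext4" then "/data xfs"
                                                                          ```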

                                                                      2. 5

                                                                        I don’t know what’s “fiddly and hackish” about it?

                                                                        It’s fiddly because you can’t use any automatic tool to parse the list of dependencies, and it’s hackish because the init system doesn’t know what it’s doing: it just retries starting random services until it matches the proper order. It’s nondeterministic, so it’s impossible to debug in case of any problems.

                                                                        1. 3

                                                                          You can pipe it through grep, c’mon dude. And framing it as “starting random services” is just wrong, that’s the opposite of what’s happening.

                                                                          1. 6

                                                                            And framing it as “starting random services” is just wrong, that’s the opposite of what’s happening.

                                                                            This doesn’t look very convincing ;)

                                                                            Well, you can cat the startup script and see the list of dependencies if you’re only worried about one machine. But from the point of view of a developer, supporting automatic parsing of such startup scripts is impossible, because they’re defined in a Turing-complete language.

                                                                            Again, it’s still fine if you’re an administrator of just one machine (i.e. you’re the only user). But it’s not an optimized method when you have farms of servers (physical or VMs), and that’s the majority of cases where UNIX systems are used.

                                                                            Also it’s easier to install some rootkit inside shell scripts, because it’s impossible to reliably scan a bash script for undesirable command injections.

                                                                        2. 3

                                                                          While I agree that the systemd unit file syntax is sometimes weird, and I would much prefer for it to use, for example, TOML instead, I do not think that shell syntax is any better. TBH it is even more confusing sometimes (as @Leonidas said).

                                                                        3. 6

                                                                          I’d rather just not have to think about it, and have systemd (or anything similar) do it for me.

                                                                          Don’t be surprised when you pay the price this thread speaks of for the privilege of thinking slightly less ;)

                                                                          1. 10

                                                                            Sometimes abstractions are, in fact, good. I am glad I don’t have to think about how my CPU actually works. And starting services is such a run-of-the-mill job that I don’t want to write a program that will start my service; I just want to configure how to do it.

                                                                          2. 3

                                                                            Dependencies in general are a mistake in init systems: Restarting services means that your code needs to handle unavailability anyways – so use that to simplify the init system. As a bonus, you ensure that the code paths to deal with dependencies restarting gets exercised.

                                                                2. 2

                                                                  I really like systemd’s service files for the simple stuff I need to do with them (basically: execute daemon command, set user/group permission, working dir, dependencies, PID file location, that’s it). But there are other aspects of systemd I dislike. I wish someone would implement a service file parser for something like OpenRC that supports at least those basic systemd service files. It would ease cooperation among init systems quite a bit I think and make switching easier. It would also ease the life of alternative init system makers, because many upstream projects provide systemd service files already.
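
                                                                  The parsing half of that idea is not much code for the basic cases; this sketch (unit name and contents made up) pulls ExecStart out of an INI-style unit file with awk, which a wrapper for another init system could then act on. Real unit files have quoting, specifiers, and drop-in rules that this happily ignores:

                                                                  ```shell
                                                                  # A made-up unit file to parse.
                                                                  cat > /tmp/demo.service <<'EOF'
                                                                  [Unit]
                                                                  Description=Demo daemon

                                                                  [Service]
                                                                  ExecStart=/usr/bin/demo --flag
                                                                  User=demo
                                                                  EOF
                                                                  # Track the current [section]; when inside [Service], strip the key
                                                                  # from the ExecStart line and print the command that should be run.
                                                                  awk -F= '/^\[/ { sec = $0 }
                                                                           sec == "[Service]" && $1 == "ExecStart" { sub(/^[^=]*=/, ""); print }' \
                                                                      /tmp/demo.service
                                                                  # prints: /usr/bin/demo --flag
                                                                  ```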

                                                                3. 4

                                                                  A much better – and shorter – criticism of systemd is that for most people, it does a lot of stuff they just don’t need, adding to a lot of complexity.

                                                                  This sort of computing minimalism confuses me. Should we say the exact same about the computing platforms themselves? x86 has a lot of things we don’t need, so we should simply use a RISC chip until you need just the right parts of x86… That motherboard has too many PCI slots, I’m going to have to rule it out for one with precisely the right number of PCI slots… If you can accomplish the task with exactly a stick and a rock, why are you even using a hammer, you fool!

                                                                  1. 3

                                                                    It’s really a long-standing principle in engineering to make things as simple as feasible and reduce the number of moving parts. It’s cheaper to produce, less likely to break, easier to fix, etc. There is nothing unique about software in this regard really.

                                                                    I never claimed to be in favour of absolute minimalism over anything else.

                                                                    1. 4

                                                                      It’s not ‘minimalism’ that makes me balk at systemd’s complexity. It’s that that complexity translates directly to security holes.

                                                                  1. 9

                                                                    Here are the results for OCaml, which I think are pretty interesting!

                                                                    $ cat hello.ml
                                                                    let () = print_endline "hello world"

                                                                    With a dynamically linked glibc:

                                                                    $ ocamlopt -o hello hello.ml
                                                                    • Resulting binary size (after strip): 279K
                                                                    • Syscalls: 69, Unique syscalls: 17

                                                                    With a dynamically linked musl:

                                                                    $ ocamlopt -cc musl-gcc -o hello hello.ml
                                                                    • Resulting binary size (after strip): 219K
                                                                    • Syscalls: 24, Unique syscalls: 13

                                                                    With a statically linked musl:

                                                                    $ ocamlopt -cc musl-gcc -ccopt -static -o hello hello.ml
                                                                    • Resulting binary size (after strip): 262K
                                                                    • Syscalls: 23, Unique syscalls: 12

                                                                    These results are pretty close to the numbers for C (!), so I guess that would put OCaml somewhere between C and Rust in the blogpost table…