1. 36
  1.  

  2. 5

    Due to its requirements for small and clean software, and its increasing prevalence in container images, Alpine is a typical case where this new service manager would be useful.

    What am I missing here? Shouldn’t the vast majority of containers just start a single binary, with no init or “service manager” involved at all? Reading further, it might just be an unlucky combination of features in one sentence, though…

    1. 4

      Often, even in containers, other processes are spawned to do some of the work. When that is the case, each of those processes can leave behind zombies that will never be reaped if no reaper process is running. Docker tries to address this with the --init flag, which starts an additional reaper process inside the container, but that requires the operator to know both that the problem exists and that the option does.
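
      For illustration, a minimal sketch of what such a reaper has to do as PID 1 (roughly the job tini does when injected by --init; simplified, with no signal forwarding or error handling):

      ```c
      /* Hypothetical, simplified PID-1 reaper: spawn the real workload,
       * then reap every child we inherit until the workload exits.
       * Unlike real tini, this forwards no signals. */
      #include <sys/types.h>
      #include <sys/wait.h>
      #include <stdio.h>
      #include <unistd.h>

      int main(int argc, char *argv[])
      {
          if (argc < 2) {
              fprintf(stderr, "usage: %s program [args...]\n", argv[0]);
              return 1;
          }

          pid_t child = fork();
          if (child == 0) {
              execvp(argv[1], &argv[1]);  /* the container's real workload */
              _exit(127);
          }

          /* As PID 1 we inherit every orphaned descendant, so zombies
           * anywhere in the container end up as our children. */
          int status = 0;
          for (;;) {
              pid_t pid = wait(&status);  /* reaps zombies as they appear */
              if (pid == child || pid < 0)
                  break;                  /* workload exited, or no children left */
          }
          return WIFEXITED(status) ? WEXITSTATUS(status) : 1;
      }
      ```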

      1. 2

        Also, besides secondary processes (e.g. log rotation), there are primary processes that daemonize by default (e.g. servers or nodes), and cases where a process is run in an outer context, such as a shell, to get error details on a crash, or to handle failures / retries, etc.
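
        “Daemonize by default” here means the classic double-fork pattern; a simplified sketch (error handling omitted) of why such programs escape their supervisor:

        ```c
        /* Classic double-fork daemonization, simplified. The process the
         * supervisor started exits immediately, and the real work continues
         * in a detached grandchild -- which is exactly why supervisors want
         * services to stay in the foreground instead. */
        #include <sys/types.h>
        #include <sys/stat.h>
        #include <unistd.h>

        static void daemonize(void)
        {
            if (fork() > 0) _exit(0);  /* parent returns to shell/supervisor */
            setsid();                  /* child detaches into its own session */
            if (fork() > 0) _exit(0);  /* session leader exits; grandchild
                                          can never reacquire a controlling tty */
            umask(0);
            chdir("/");
            /* ...redirect stdio to /dev/null, then do the real work... */
        }

        int main(void)
        {
            daemonize();
            pause();  /* stand-in for the daemon's real work */
        }
        ```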

        1. 1

          Interesting. I find it surprisingly hard to research this exact topic, especially how this is decided. Or do you mean a specific container setup?

          I was under the impression there wasn’t a huge difference: the container’s “main binary” is executed, and nothing special happens unless you run a supervisor.

          1. 2

            Here you can read why, even with a “single main binary”, you may want to run an “init” in the container.

        2. 3

          There’s a tension between separation of concerns and ease of deployment. Consider something like Nextcloud. There are two obvious ways of deploying it:

          • Put Nextcloud, PHP, nginx, postgresql into a container.
          • Create a Nextcloud + PHP container that expects to communicate with nginx and postgresql containers.

          The first is very attractive from a distribution perspective: you have a single container that has a known-good configuration and is easy to deploy. The second fits better with a microservices view of the world and makes it easy for your container orchestration framework to manage each service. All of your ‘init’ logic is in the container orchestration framework.

          Container deployments hit the same thing that exokernels and a load of other systems have hit in the past: Isolation is easy, sharing is hard.

          1. 1

            Maybe I should’ve clarified “vast majority”: in my experience reading about and working with containers, most people would only see your 2nd example as a “valid” container deployment. I’ve done #1 when it made sense, but usually only begrudgingly, or for test instances.

            So if we only talk about #2, I don’t see new processes being created and stopped unless your main binary happens to fork a lot. Of course for #1 it makes sense, but I wonder how widespread it really is. As in “is this common enough to warrant taking it into account when writing an init system”.

        3. 4

          I value external inputs such as technical discussions and user experience returns immensely

          I, for one, really hope this iteration of s6 will not depend on the author’s personal collection of useful functions / djb-style NIH libc reimplementation (“skalibs”).

          Otherwise I have nothing but appreciation for the great amount of work put into s6. s6 is art. Together with apkv3 I will finally have enough reasons to switch all my machines from Void to Alpine.

          1. 2

            hope this iteration of s6 will not depend on the author’s personal collection of useful functions / djb-style NIH libc reimplementation (“skalibs”)

            Why not? Many parts of libc are fraught.

            1. 1

              Because I don’t want every application author’s take on which parts are fraught to end up on my system in the form of a shared library (distributions still haven’t figured out that static linking is the way to go).

            2. 2

              s6 is art

              Maybe I’m not good at appreciating art, but every time I look at s6 I’m like “why is this so much more complex than runit??”

            3. 3

              Proper declarative unit files go so much deeper than being “syntax sugar” for templating scripts. My view is that any new service manager should be fundamentally based on the concept of repeatedly attempting to reconcile the current state with what has been declared (think reconciliation loops in Kubernetes, or running something like Terraform in a loop).

              Designing such a format constrains what can be done inside these service definitions, but that isn’t a problem, it’s a feature. Long-term, moving towards more homogeneous setups for all software will chip away at the accidental complexity that is currently part of building any service.
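
              To make the idea concrete, here is a hypothetical sketch of such a reconciliation loop for a single service (invented names, not how s6 or any real manager is built):

              ```c
              /* Hypothetical reconciliation loop: observe the current state,
               * compare it with the declared state, act to converge, repeat.
               * Invented purely for illustration. */
              #include <sys/types.h>
              #include <sys/wait.h>
              #include <signal.h>
              #include <stdbool.h>
              #include <unistd.h>

              struct service {
                  const char *path;  /* declared: program to run        */
                  bool want_up;      /* declared: should it be running? */
                  pid_t pid;         /* observed: 0 if not running      */
              };

              static bool observe_up(struct service *s)
              {
                  if (s->pid > 0 && waitpid(s->pid, NULL, WNOHANG) == s->pid)
                      s->pid = 0;               /* it died; reap and note it */
                  return s->pid > 0;
              }

              static void reconcile(struct service *s)
              {
                  bool up = observe_up(s);
                  if (s->want_up && !up) {      /* declared up, observed down */
                      pid_t pid = fork();
                      if (pid == 0) {
                          execl(s->path, s->path, (char *)0);
                          _exit(127);
                      }
                      s->pid = pid;
                  } else if (!s->want_up && up) {  /* declared down, observed up */
                      kill(s->pid, SIGTERM);
                  }                             /* otherwise: already converged */
              }

              int main(void)
              {
                  struct service svc = { "/usr/bin/example-daemon", true, 0 };
                  for (;;) {       /* never "run once": keep reconciling forever */
                      reconcile(&svc);
                      sleep(1);
                  }
              }
              ```

              Starting and stopping then stop being commands and become consequences of edits to the declared state.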

              On a different note, I have a hard time getting excited about another important, highly privileged system component being written in a memory-unsafe language…