I was surprised that this stopped at the individual machine boundary when 80% of these responsibilities apply to any “datacenter OS” like Kubernetes. That includes how k8s only does a complete job when combined with other services that are only loosely affiliated with it.
Seems like any modern theory of “system layer” needs to account for distributed systems and ideally pave the way to unifying the abstractions and extending them beyond a single motherboard.
Yes indeed. Distributed systems share many of these same concerns. There are a few reasons why I did not include distributed systems in the article.
First, I don’t personally have enough experience of k8s yet; I’ve used it, and it’s obvious that lots of the issues appear there, but I haven’t dug deeply enough into it to truly understand it. Yet.
Second, this is part of a research arc that started with organising components within a single program (Syndicate as a language; my dissertation) and has continued with organising components within a single system (Synit and the Syndicate Protocol; this work). The next step is to look into organising components in distributed systems of various kinds: personal overlay networks, across a LAN, across a WAN, in a k8s service-cluster style, etc.
Finally, thinking about what a system layer is – things applications generally need that aren’t supplied by the kernel – there’s an interesting research question around k8s and friends. What is the kernel? What is the kernel API? What is part of the kernel and what is part of the system layer? Without a good answer to that I’d be a little uncomfortable trying to include k8s in an analysis of system layers per se.
Thanks for reading the article, and thanks for commenting. The k8s/docker/containers/distributed question is fascinating to me and I’m looking forward to working more on it.
That sounds like a very sensible progression. It feels to me that we need to regard the machine boundary as just an element of a larger system now. Just as the simple “kernel vs. userland” model is no longer enough, you can no longer regard each machine as an independent island of application processes. There are essential “system services” whose scope is bigger than individual machines, because individual applications are now bigger than single machines.
When you get into things like what k8s/containerd/Istio do with networking to get a service mesh going, and how k8s schedules pods onto nodes, I think you’ll see what I mean.
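To make the scheduling half of that concrete: placement in k8s is driven by matching constraints in a Pod spec against labels on nodes. A minimal sketch of a Pod that asks the scheduler to place it only on nodes with a particular label (the `disktype: ssd` label and the pod/image names here are made-up examples, not anything from the article):

```yaml
# Hypothetical sketch: the scheduler will only bind this Pod to a
# node whose labels include disktype=ssd.
apiVersion: v1
kind: Pod
metadata:
  name: example-pod
spec:
  nodeSelector:
    disktype: ssd        # must match a node's labels for placement
  containers:
    - name: app
      image: nginx:1.25  # any container image
```

The interesting point for the "system layer" discussion is that this matching is done by a component (the scheduler) that lives outside any one machine, which is exactly the kind of service whose scope is bigger than an individual node.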