I like this line of thought, and agree that the cloud is still in the poorly composing Multics stage (https://news.ycombinator.com/item?id=27903720).

Though I think “the Unix philosophy” deserves some more analysis, since it encompasses several related but distinct design choices. The blog post mentions coding the perimeter of M data formats and N operations; it also mentions the C language as a narrow waist for portability. I would also add:
the syscall API and ABI – a narrow waist between applications and hardware. Notably, there is so much economic pressure here that Windows implemented POSIX in the ’90s, and implemented Linux in the 2010s with WSL (and I guess they did it again with WSL2 because the first cut was slow?)
The file system is an important special case of this. Notably, it’s suboptimal in many circumstances, e.g. on NVMe hardware, but it’s still rational to have a “universal” API or lowest common denominator (a quick sketch of this idea follows below)
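To make the “universal API” point concrete, here’s a minimal sketch (Python used only for brevity; the example is mine, not from the post): the same consumer loop works unchanged on a regular file and on a pipe, because both are just file descriptors behind the same read() interface.

```python
import os

def drain(fd):
    # Read until EOF, 4 KiB at a time; works for ANY file descriptor.
    chunks = []
    while True:
        chunk = os.read(fd, 4096)
        if not chunk:
            break
        chunks.append(chunk)
    return b"".join(chunks)

# A regular file... (any readable path works; /etc/hostname is just common)
fd = os.open("/etc/hostname", os.O_RDONLY)
print(drain(fd))
os.close(fd)

# ...and a pipe, consumed by the exact same code.
read_end, write_end = os.pipe()
os.write(write_end, b"hello from a pipe\n")
os.close(write_end)
print(drain(read_end))
os.close(read_end)
```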
So teasing apart these issues might enlighten us on how exactly to apply it to the cloud. The 3 sentences by McIlroy are important, but not the whole picture IMO. I’m thinking of framing it as “the Perlis-Thompson principle” and “narrow waists”, though this thinking/writing is still in its early stages.
I actually got an e-mail from Multics engineer Tom Van Vleck regarding my most recent blog post! That reply helped to shape my thinking, to the point where I’d say that a key part of the Unix philosophy is choosing the SECOND of these strategies:
Static Data Types and Schemas (especially “parochial” types, as Rich Hickey puts it)
Semi-structured text streams, complemented by regexes/grammars to recover structure
I’d say that the current zeitgeist is biased toward #1, and Multics is more along the lines of #1. But the design that scales and evolves gracefully is actually #2! I know lots of people disagree with that, which is why I’m blogging about it. (It wouldn’t be interesting if everyone agreed.)
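As a tiny illustration of strategy #2 (the log line and regex below are invented for illustration, not anything from the post): the producer emits plain text, and each consumer recovers exactly the structure it needs with a regex or grammar, with no shared type definition between producer and consumer.

```python
import re

# A semi-structured text stream, e.g. one line of an access log.
line = '127.0.0.1 - - [10/Oct/2023:13:55:36] "GET /index.html HTTP/1.1" 200 2326'

# Strategy #2: the structure lives in a regex/grammar at the consumer,
# not in a "parochial" type defined by the producer.
pattern = re.compile(
    r'(?P<host>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] '
    r'"(?P<request>[^"]*)" (?P<status>\d{3}) (?P<size>\d+)'
)

m = pattern.match(line)
if m:
    record = m.groupdict()
    print(record["host"], record["status"])  # 127.0.0.1 200
```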
Another part of the Unix philosophy is to be data-centric rather than code-centric. For the cloud, that means it should be protocol-centric, not service-centric. The CNCF diagram that people criticize is just a bunch of services with no protocols, which is exactly backwards IMO. It’s a brittle design that doesn’t evolve.
Unix is of course data-centric, while proprietary OSes like Windows and iOS are very code-centric. Proprietary cloud platforms are code-centric for the same reasons.
And there are a bunch of related arguments around codebase scaling and composition. This recent post by @mpweiher also references the Unix vs. Google video, and says that glue is O(N^2) between N features that interact: https://lobste.rs/s/euswuc/glue_dark_matter_software
I watched the video again and it’s explicitly making the O(M + N) vs O(M * N) comparison. I think these are related issues, but working through a bunch of examples might enlighten us: https://lobste.rs/s/euswuc/glue_dark_matter_software#c_sppff7
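A minimal sketch of that comparison, with invented formats: if M producers and N consumers each talk pairwise, you need M * N adapters; if they all meet at a narrow waist (here, a list of dicts), you only need M + N pieces of code.

```python
# Without a waist: csv_to_chart, csv_to_report, kv_to_chart, ... (M * N).
# With a waist (list of dicts), each side only knows the waist (M + N):

def from_csv(text):            # producer 1 -> waist
    rows = [line.split(",") for line in text.splitlines()]
    header, body = rows[0], rows[1:]
    return [dict(zip(header, r)) for r in body]

def from_kv(text):             # producer 2 -> waist
    return [dict(pair.split("=", 1) for pair in line.split())
            for line in text.splitlines()]

def to_report(records):        # waist -> consumer 1
    return "\n".join(str(r) for r in records)

def to_count(records):         # waist -> consumer 2
    return len(records)

# Any producer composes with any consumer through the waist:
print(to_count(from_csv("a,b\n1,2\n3,4")))    # 2
print(to_report(from_kv("a=1 b=2\na=3 b=4")))
```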
I also very much like the microservices vs. auth/metrics/config/monitoring/alerting matrix in this post. I’ve definitely felt that, and it does seem to be a huge problem with the cloud.
IMO we’re still missing the equivalent of an “ELF file” and “process” in the cloud. I think OCI is making progress in that area. Docker again makes the “mistake” of being code-centric rather than data-centric (in quotes because it was an intentional design decision.)
Interesting ideas!

If we were to make a list of the “narrow, deep, and stable” interfaces that compose to make a Unix-like system, we could start with the ones you mention:
Syscalls (not so narrow these days, but they were)
File interface
Bytestream as data
Those seem to work well together. Then I see a whole pile of “bolt on” interfaces that don’t compose nicely, sometimes overlap, and create a bunch of corner cases:
termios
Process groups
Signals
ioctls
IPC
fbdev
io_uring
epoll
kqueue
… and a bunch more
It’s like we’ve spent five decades accreting new interfaces that were wide but shallow, or wide and deep, but not converging on other narrow, deep, and stable interfaces.
Yeah I think “narrow and deep” is referring to Ousterhout’s recent book? I remember he specifically uses the Unix file system API as one of his examples.
I agree classic Unix adheres to the principle better. Linux is pretty messy, although it often gets functionality before other Unixes.

I commented on that here: https://lobste.rs/s/kj6vtn/it_s_time_say_goodbye_docker#c_nbe2co
So yeah I think we have sort of a “design deficit” now. Unix had good bones and we built on it for 50 years. But I think it’s probably time to do some rethinking and redesign to have a stable foundation for more evolution. The cloud is not in good shape now … and part of that is that it’s built on unstable foundations (e.g. the mess of Linux container mechanisms)
Hey Andy, Thanks for the detailed thoughts here. There are probably at least 5 more blog posts I need to write that you’ve touched upon in this comment :) Stay tuned!
Unix is of course data-centric, while proprietary OSes like Windows and iOS are very code-centric.
Can you expand on this? My experience with modern Windows (read: PowerShell and modern .NET services) is that most tasks require very little work to get done. Structured data can be sent trivially between hosts in the shell and acting on that structured data in the shell is nearly trivial as you don’t really need to write any glue code to pipe data from cmdlet to cmdlet. Tools like sed, awk, and bash feel positively archaic in comparison to PowerShell.
Importantly, the claim isn’t that data-centric is better than code-centric along all dimensions! It’s a tradeoff. You can argue that the code-centric / API-centric Windows style is easier to use. Types do make things easier to use locally.
What I’m arguing is that the code-centric design ends up with more code globally (something of a tautology :) ), and that is bad in the long run. And also that it creates problems of composition. You end up with quadratic amounts of glue code.
Although I don’t have a reference for this, it seems obvious to me that a Unix system has less code than Windows and is “simpler” to understand (think something like xv6). It’s not necessarily easier to use. You can build a lot of things on top of it, and that has been done, from iOS/Android to embedded to supercomputers.
For a concrete example of being data-centric, I’d use /etc/passwd vs. however Windows stores the user database. I assume it must be in the registry, but it’s not a stable format? You use some kind of C API or .NET VM API or PowerShell API to get at it? TAOUP has some comments on the /etc/passwd format: http://www.catb.org/~esr/writings/taoup/html/ch05s01.html#passwd
Again I’m not claiming that it’s great along all dimensions, only that it’s minimal and simple to understand :) You can parse it from multiple languages. Although there are also libc wrappers because parsing in C is a pain. (Multiple ways of accessing it is a feature not a bug; that’s a consequence of being data-centric.)
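As a sketch of “you can parse it from multiple languages”: /etc/passwd is seven colon-separated fields per line, so a from-scratch parser is about ten lines (Python used here as an arbitrary choice):

```python
# Minimal sketch: each /etc/passwd line is
# name:passwd:uid:gid:gecos:home:shell
FIELDS = ["name", "passwd", "uid", "gid", "gecos", "home", "shell"]

def read_passwd(path="/etc/passwd"):
    users = []
    with open(path) as f:
        for line in f:
            line = line.rstrip("\n")
            if not line or line.startswith("#"):
                continue
            users.append(dict(zip(FIELDS, line.split(":"))))
    return users

for u in read_passwd():
    print(u["name"], u["shell"])
```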
I hope that helps; other questions are welcome and may help the blog posts on these topics :)
I think the /etc/passwd example also highlights the limitations of this type of approach, though: in practice, on Linux, it is necessary to use higher-level APIs like PAM and NSS to interact with user information if you want to support other cases like LDAP or Kerberos user databases. This situation can become remarkably painful on Linux exactly because some applications are “too aware of the data” and make assumptions about where it comes from that don’t hold on all systems.
The data-centric nature of Unix requires that applications have a deeper understanding of the actual data, e.g. the fact that while users/groups often come from /etc/passwd there are several other places they can come from as well. The more code-centric approach in Windows does a better (although not perfect) job of abstracting this so that application developers don’t need to worry about various system configurations.
Or in short: while Linux has the simply structured /etc/passwd, interacting with it directly is almost always a bad idea. Instead you should probably use PAM, just like on Windows you would end up using SAM via various APIs. This feels like a fundamental limitation of a highly data-centric approach: it makes variation in the data source and format difficult to handle.
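For what it’s worth, the higher-level route is also visible from scripting languages. A sketch: Python’s pwd module wraps libc’s getpwnam(3), which goes through NSS, so the caller is insulated from whether the record comes from /etc/passwd, LDAP, or anything else listed in /etc/nsswitch.conf. (PAM is the analogous story for authentication rather than lookup.)

```python
import pwd

# getpwnam(3) consults whichever backends /etc/nsswitch.conf lists
# (files, ldap, sss, ...), so this works regardless of where the
# user database actually lives.
entry = pwd.getpwnam("root")
print(entry.pw_uid, entry.pw_dir, entry.pw_shell)
```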
Yeah PAM and NSS are interesting points. Same with the weirdness that seems to go on with DNS and name lookup these days. It’s mostly done through libc and plugins as far as I remember, and is far beyond /etc/hosts.
Though again I’m not saying that the data-centric approach is cleaner or nicer to use! I’m saying it scales and evolves better :)
If you’ve ever seen how Windows is used on say a digital sign or a voting machine, then that’s a picture of what I’m getting at. Windows is not very modular or composable. It’s mostly a big blob that you can take or leave. (I have seen and had success with COM; I’d say that’s more the exception than the rule.)
If you need to use one PowerShell cmdlet then I believe you also need the whole .NET VM (and probably auto-updating, etc.)
There are plenty of embedded devices (routers, things with sensors) that just use /etc/passwd. Ditto for containers. Those systems don’t use LDAP or Kerberos so the simple fallback is still used. I doubt you can make a “Windows container” as small as a Linux container, and that does matter for certain use cases.
There is just a lot of diversity to the use cases, and that involves some messiness. I may concede that the data-centric approach is harder to use; what I wouldn’t concede without more argument is that it’s a bad idea :) I’d actually say it’s less limited, but possibly harder to use.
I plan to write about these as two contrasting approaches to OS design:
write typed wrappers or APIs for everything
make the data formats more well-defined and improve parsing tools.
Most people want #1 but I believe that #2 has desirable properties, including more graceful evolution and scaling.
While your complaint about Windows not being very “subsettable” is true to a good extent, Microsoft does produce Windows Embedded and has invested significantly more in it over the past few years, relaunching it as Windows IoT. A minimum Windows IoT image is not nearly as compact in terms of storage as a minimal Linux image (they say you need 2GB of storage), but it does solve most of the classic failures of Windows on non-general devices by making nearly the entire operating system optional via a modular build. I haven’t dealt with Windows Embedded for some years but when I was doing some experimental work with XP-era Embedded, the network stack was an option you could leave out of your image, for example.
The problem is that Windows Embedded and now Windows IoT see next to zero adoption, which I think reflects the motives of the companies that build these kinds of devices: they want a heavy, feature-complete, general-purpose operating system, because it’s easier to develop and test on those than on a minimal OS. Containers etc. have reduced the gap in ease of use here, but it still definitely exists; we’ve all dealt with at least the frustration of trying to figure out an issue on an embedded device only to discover it doesn’t have some tool we’ve come to expect, like sed. I think the Windows developer base has just become extremely used to all targets being complete systems they can TeamViewer into and poke around like their own laptop, which is why we still see billboards running Windows 10 Pro. I think a lot of MS’s strategy around PowerShell is trying to turn that ship around, for example with Windows Server now generally not having a GUI until you force it to install one (via a PowerShell session).
I guess what I’m arguing is that the difference here is, in my opinion, less technical than it is cultural. There aren’t a lot of technical aspects of Windows that require that it be a more complex environment, but Windows is usually targeted by desktop developers who are only used to working with complete desktop systems. Embedded devices tended to end up with Linux because the open-source kernel could be built for unusual architectures, while containerization basically fell out of features of the Linux kernel that Microsoft failed to compete with—but these features are all modern additions to the kernel that use fairly structured APIs, like most newer additions.
I don’t mean to be too argumentative; I think you do have a point that I agree with. I just think the actual situation is a lot blurrier than UNIX derivatives having gone one route and Windows the other; both platforms contain a huge number of counterexamples. A core part of the Windows architecture, the registry, is a well-structured data store. Linux GUI environments are just as dominated by API-mediated interactions as Windows, and the whole “everything is a file” concept usually ends when you go into graphics mode. Which perhaps goes to explain why all these graphics-centric applications like kiosks tend to be running Windows… Linux doesn’t really get you that many advantages in the graphical world if you want the comforts of modern GUI development, which tend to require bringing along the whole Gtk ecosystem of services and APIs if not something like Electron.
Hm yeah I don’t really see any disagreement here? The Windows Embedded / IoT cases seem to support my point.
The point is basically that Windows and Unix (and Multics and Unix) have fundamentally different designs, and this has big consequences. They scale, evolve, and compose differently because of it.
This is both technical and cultural. Being data-centric is one value / design philosophy that Unix has; another is using language-oriented composition (textual data formats and the shell).
I hope to elaborate a lot on the Oil blog, and will be interested in comments from people with Windows expertise. The SSH example I gave in another thread is interesting to me too (e.g. compare how Windows does it): https://lobste.rs/s/wprseq/on_unix_composability#c_wjyjwq
The article’s point is valid, if a little platitudinous (yes, code reuse is good). But:
And they utilize universal interfaces: TCP, HTTP, etc.
Isn’t this utterly wrong? TCP and HTTP are nowhere near universal interfaces. Large amounts of the internet go over UDP and QUIC, and there are many, many different RPC frameworks out there.
TCP and HTTP might be common for moderate-performance high-latency moderate-reliability microservices, which is a large number of services. But if we want to solve the problem, we need something more universal than just the 70% solution. I don’t know how to achieve that level of universality (though I have some ideas) but it’s the goal we need to achieve. Though we’ll probably hit it accidentally - there were many operating systems before and after Unix which failed, and they were all trying to win - merely trying isn’t enough, we need to get lucky too.
Hey, author here.

McIlroy’s quote specifically says:

Write programs to handle text streams, because that is *a* universal interface.
Emphasis on the “a” is mine.
There is no such thing as an absolute, truly universal interface. It’s a matter of degrees. Text streams certainly weren’t universal for the longest time! We had to standardize ASCII, and later Unicode, etc. We’ve since standardized HTTP and it’s certainly more universal than something like a JVM method call.
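A sketch of what “more universal” buys you in practice: because HTTP/1.1 is structured text over a TCP bytestream, a client fits in a dozen lines of any language with sockets, with no HTTP library at all (example.com below is just a well-known test host):

```python
import socket

# HTTP/1.1 is plain text over TCP, which is a big part of why so many
# languages and tools can speak it without a special runtime.
with socket.create_connection(("example.com", 80)) as s:
    s.sendall(b"GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
    response = b""
    while True:
        chunk = s.recv(4096)
        if not chunk:
            break
        response += chunk

print(response.split(b"\r\n")[0])  # e.g. b'HTTP/1.1 200 OK'
```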
Isn’t this utterly wrong? TCP and HTTP are nowhere near universal interfaces. Large amounts of the internet go over UDP and QUIC, and there are many, many different RPC frameworks out there.
“Universal” is obviously not precisely defined, but if you’re gonna pick winners for OSI layers 4 and 7, I think the answers are pretty clearly TCP and HTTP respectively. Very little of the internet goes over UDP and QUIC at the moment.
Large amounts of the internet go over UDP and QUIC, and there are many, many different RPC frameworks out there.
The quote is from someone’s post above. I didn’t know this to be true; I believe TCP/HTTP are still the majority of the internet “protocols” used today.
I don’t agree it’s platitudinous because people nod their head “yes” when reading it, but when they sit down at their jobs, they code the area and not the perimeter :)
I see what’s meant by “universal”, but I also see that it’s a misleading/vague term. I would instead say that TCP and HTTP are the “lowest common denominator” or “narrow waist”, and that has important scaling properties.
A related line of research is “HTTP: An Evolvable Narrow Waist for a Future Internet” (2010): https://www2.eecs.berkeley.edu/Pubs/TechRpts/2012/EECS-2012-5.pdf
QUIC and HTTP/2 seem to be pushing in that direction. Basically TCP/IP was explicitly designed as the narrow waist of the Internet (I traced this to Kleinrock but I’m still in the middle of the research), but the waist is moving toward HTTP.
As far as I understand, QUIC is more or less a fast parallel transport specifically for HTTP, while HTTP is now the thing that supports diverse applications. For example, e-mail, IRC, and NNTP are subsumed by either HTTP gateways or simply web apps. As far as I can see, most mobile apps speak HTTP these days as opposed to raw TCP/IP sockets.
Other names: “hourglass model”, “thin waist”, and “distinguished layer”: https://cacm.acm.org/magazines/2019/7/237714-on-the-hourglass-model/fulltext
Basically this software architecture concept spans all of networking; compilers and languages; and (distributed) operating systems. But the terminology is somewhat scattered and not everyone is talking to each other.
But again, there’s something profound here that has big practical consequences; it’s not platitudinous at all.
Sorta unrelated question: is there a reason your blog has no left margin?

Dunno, text looks to be centered on Firefox. Your UA?

Huh. Can confirm it’s centered on FF. I’m on Mozilla/5.0 (X11; Linux x86_64; rv:56.0) Gecko/20100101 Firefox/56.0 Waterfox/56.5, that is, Waterfox Classic.