“people think programming in Excel is programming”
I mean, it is Turing-complete with =LAMBDA. I find it a bit distressing when programmers, especially influential ones, try to denigrate an environment or language they don’t like as “not real programming”. This reminded me of an article on contempt culture.
there is no way to have a flexible innovative system and serve the Posix elephant.
IBM i, which actually predates POSIX by some amount, is somewhat popular in my circles as an example of “what could have been” regarding CLIs, alternative programming paradigms, etc. It has a functional POSIX layer via AIX emulation (named PASE).
DOS and OS/2 had EMX which provided most of POSIX atop them. Mac OS 8/9 had GUSI for pthreads atop the horror show known as Multiprocessing Services. I’m pretty sure the Amiga had a POSIX layer. Stratus VOS. INTEGRITY. There are plenty of non-traditional, non-Unix platforms that are – at least mostly – POSIX conformant.
What I’m saying is there is absolutely no technological reason you couldn’t slap a POSIX layer atop virtually anything, even if it wasn’t originally designed for it. Hell, I would even suggest you could go all-out and design this “flexible innovative system” and have someone else put a POSIX layer atop it. You inherit half the world’s software ecosystem for “free” with good enough emulation, and your native apps will run better and show the world why they should develop for that platform instead of against POSIX, right?
But then, even Windows is giving up and making WSL2 a first-class citizen. This isn’t because of some weird conspiracy to make all platforms POSIX. It is because the POSIX paradigm has evolved, admittedly slowly in some cases, to provide a “good enough” layer on which you can build different platforms.
And abandoning POSIX could also lead to a bunch of corporations making locked-in systems that are not interoperable. Let’s not forget the origins of X/Open and why this whole thing exists…
APIs for managing threads and access to shared memory should be re-thought with defaults created for many-core systems
Apple released libdispatch in 2009 with Snow Leopard under an Apache 2.0 license. It supports Mac OS, the BSDs, Linux, Solaris, and since 2017, Windows (using NT native blocks, even). I actually wrote an iOS app using GCD to process large XML API responses and found it did exactly what it was supposed to: on devices with more cores, more requests could be processed at once, making the system more responsive. At the same time, at least the UI thread didn’t lock up while your single-core 3GS was still churning through.
And yet nobody uses libdispatch. Sometimes I hear “ew, Apple”, which may have been a bigger influence back in 2009. Now, there’s really no excuse. I think it’s just inertia. And nobody wants to introduce more dependencies when you’re guaranteed POSIX and it works “good enough”.
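For the curious, a minimal sketch of the fan-out pattern described above, using libdispatch’s plain C API (this assumes clang’s blocks extension and a libdispatch port being available, as on macOS; the printf stands in for the real XML parsing):

```c
#include <dispatch/dispatch.h>
#include <stdio.h>

int main(void) {
    // The library owns the thread pool and sizes it to the machine;
    // the application just describes independent units of work.
    dispatch_queue_t work =
        dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
    dispatch_group_t group = dispatch_group_create();

    for (long i = 0; i < 8; i++) {
        dispatch_group_async(group, work, ^{
            // stand-in for parsing one large XML response
            printf("processing response %ld\n", i);
        });
    }

    // Join once at the end; a UI thread would use dispatch_group_notify()
    // instead, so it never blocks while the work drains.
    dispatch_group_wait(group, DISPATCH_TIME_FOREVER);
    dispatch_release(group);
    return 0;
}
```

On an eight-core machine the eight blocks run in parallel; on a single core they queue up behind one another, with no change to the application code.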
create systems that express in software the richness of modern hardware
I think it should be the exact opposite. Software shouldn’t care about the hardware it is running on. It could be running on a Raspberry Pi Zero, or a z16. The reason POSIX has endured this long is that it gives everyone a base platform to build richer frameworks atop. Libraries like libdispatch are a good example of what can be built to take advantage of different scales of hardware without abandoning the thing that ensures we have an open standard that all systems are “guaranteed” to (mostly) follow.
I might use this comment as the basis for an article on my own, and go into more detail about what I think POSIX gets right and wrong, and directions it could/should head.
I might use this comment as the basis for an article on my own, and go into more detail about what I think POSIX gets right and wrong, and directions it could/should head.
I’d love to read that!
I agree with pretty much all of this.
Relatedly, there is a misconception that has been around for years that Haiku, which I am one of the developers of, is “not a UNIX” or “only has POSIX compatibility non-‘natively’”. When this is corrected, some people are more than a little dismayed; they thought of Haiku as being “different” and “exotic” and are sad to discover that, under the hood, it’s less so than they imagined! (Or, often, it still is quite different and exotic; it’s just that “POSIX” means a whole lot less than most people may come to assume from Linux and the BSDs.)
The review of Haiku’s latest release in The Register originally included this misconception, and I wound up in an extended argument (note especially the reply down-thread which talks about feelings) with the author of the article about it (and also in an exchange with the publication itself on Twitter.)
Relatedly, there is a misconception that has been around for years that Haiku, which I am one of the developers of, is “not a UNIX”
Isn’t that true? It’s not a descendant of BSD or SysV, nor has it ever been certified as a UNIX. If someone called Haiku a UNIX then they’d have to say the same about Linux, which would be clearly off. Even Windows NT4 was POSIX-compliant and I’ve never met anyone who considers Windows to be a UNIX variant.
The review of Haiku’s latest release in The Register originally included this misconception, and I wound up in an extended argument (note especially the reply down-thread which talks about feelings) with the author of the article about it
Hah, I had a similar (though briefer) exchange with the same author at https://news.ycombinator.com/item?id=34772982. I think that particular person just doesn’t have much interest in getting terminology correct before rushing their articles out the door.
As I said on HN:
Gee, thanks.
This may come as an unpleasant revelation, but sometimes, just saying to someone “that isn’t right” is not going to change their mind. You didn’t even bother to reply to my comment on HN, so how you can call that an “exchange” puzzles me. You posted a negative critical comment, I replied, and you didn’t.
Ah well. Your choice.
No, I do not “just rush stuff out”, and in fact, I care a very great deal about terminology. I’ve been a professional writer for 28 years, have written for some 15 magazines and sites in a paid capacity, and have been a professional editor as well. It is not possible to keep working in such a business for so long if you are slapdash or slipshod about it.
As for the technical stuff here:
I disagree with @waddlesplash on this, and I disagree with you as well.
I stand by my position on BeOS and Haiku: no, they are not Unixes, nor even especially Unix-like in their design. However, Haiku has a high degree of Unix compatibility – as does Windows, and it’s not a Unix either. OpenVMS and IBM z/OS also have high degrees of Unix compatibility, and both have historically passed POSIX testing, meaning that they could, if they wished, brand as being “a UNIX”.
Which is where my disagreement with your comment here comes in.
Linux has passed the testing and as such it is a UNIX. Like it or not, it has won Open Group branding, and although none of the 2-3 vendors who’ve had it in the past still pay for the trademark, it did pass the test and thus it counts.
No direct derivative of AT&T UNIX is still in active development any more.
No BSD has ever sought the branding, but I am sure they easily could pass the test if they so wished. It would however be a waste of money.
I would characterise Haiku the same as I would OpenVMS, z/OS and Windows NT (via its native POSIX personality): a non-Unix-like OS which does not resemble traditional Unix in design, in implementation, in its filesystem design or layout, or in its native APIs. However, all of them are highly UNIX compatible – about as UNIX compatible as it’s possible to be without actually being one. OpenVMS even used to have its own native X11 server, although I don’t think it’s maintained any more. Haiku, like RISC OS, has its own compatibility library allowing X11 apps to run and display in the native GUI without running a full X server.
Linux is a UNIX-like design, implemented in the same language, conforming to the same spec, implementing the same APIs. Unlike Haiku, z/OS or OpenVMS, it has no other alternative native APIs or non-UNIX-like filesystems or anything else.
Linux is a UNIX. By the current strict technical definition: it passed the Open Group tests which subsumed and replaced POSIX decades ago. And by a description: it’s a UNIX-like design built with Unix tools in Unix’s preferred language, and nothing else.
Haiku isn’t. It hasn’t passed testing, and it isn’t Unix-like in design, implementation, native APIs, or native functionality.
The one that is arguable, to me, is none of the above.
It’s macOS.
macOS has a non-Unix-like kernel, derived from Mach, but with a big in-kernel UNIX server derived from BSD code. It has its own native non-Unix-like APIs, but they mostly sit on top of a UNIX-derived and highly UNIX-like layer. It has its own native GUI, which is non-UNIX-like, and its own native configuration database and much else, which are non-UNIX-like and implemented in non-UNIX-like languages.
It doesn’t even have a case-sensitive filesystem, one of the lowest common denominators of Unix-like OSes.
But, aside from its kernel, it’s highly UNIX-like until you get up to the filesystem layout and the GUI layer – all the UNIX directories are there, just mostly empty, or populated with stubs pointing the curious explorer to NetInfo and so on.
For X11 apps, it does in fact run a whole X server based on X.org.
But macOS has passed testing and Apple does pay for the trademark so, by the strict technical definition, it 100% is a UNIX™.
If someone called Haiku a UNIX then they’d have to say the same about Linux, which would be clearly off.
Well, there are people who say it about Linux. After all, POSIX is part of the “single UNIX specification”, so it is somewhat reasonable. But if people want to be consistent and not use the term for either Linux or Haiku, that’s fine by me. It’s using the term for only one and not both that I object to as inconsistent.
libdispatch is kind of an ironic example. The APIs lend themselves to implementations with heap allocations at every corner, and to thread explosion. Most of these problems could be addressed with intrusive memory and enforced asynchronous behavior at the API boundary.
It’s like POSIX in a sense, where it’s “good enough” for taking some advantage of various hardware configurations but doesn’t quite meet expectations on scalability or feature set for some applications. POSIX APIs like pthreads and select/poll, under this lens, also take advantage of hardware and are “good enough”.
If that’s all the application requires, then it’s fine, but lower/core components like schedulers, databases, and runtimes – those which provide the abstractions that people use over POSIX APIs – generally want to do the best they can. Only offering POSIX at the OS level limits this, and I believe it is why things like io_uring on Linux, ulock on Darwin, and even epoll/kqueue on both exist.
Now these core components either try (pretty hard) to design APIs that work well across all of these extensions (including, and limitingly so, POSIX) or they just specialize to a specific platform. It’s too late to change now, but there are more scalable API decisions for memory, IO, and synchronization that POSIX could have adopted, built on top of older POSIX APIs – surprisingly, looking to Windows’ ntdll here for inspiration.
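To make “intrusive memory and enforced asynchronous behavior” concrete, here is a minimal C sketch – every name in it is invented for illustration, not any real library’s API. The caller embeds the operation state in its own struct, so submission never heap-allocates, and completions only fire at an explicit drain step rather than inline at the call site:

```c
#include <stddef.h>
#include <stdio.h>

/* Hypothetical runtime: the queue node lives inside caller-owned
 * memory (intrusive), so submitting an operation allocates nothing. */
struct op {
    void (*complete)(struct op *op, int result);
    struct op *next;            /* intrusive link, in caller memory */
};

static struct op *pending;      /* toy run queue: no malloc anywhere */

static void runtime_submit(struct op *op) {
    op->next = pending;         /* just link the caller-owned node in */
    pending = op;
}

static void runtime_drain(void) {
    while (pending) {           /* completions fire here, never inline */
        struct op *op = pending;
        pending = op->next;
        op->complete(op, 0);
    }
}

struct connection {
    struct op read_op;          /* embedded: no allocation at the boundary */
    const char *name;
};

static void on_read(struct op *op, int result) {
    /* container_of-style recovery of the enclosing object */
    struct connection *c =
        (struct connection *)((char *)op - offsetof(struct connection, read_op));
    printf("%s completed with %d\n", c->name, result);
}

int main(void) {
    struct connection c = { .read_op = { .complete = on_read }, .name = "conn-1" };
    runtime_submit(&c.read_op); /* returns immediately; nothing has run yet */
    runtime_drain();            /* asynchronous behavior enforced: fires here */
    return 0;
}
```

The container_of-style pointer arithmetic is what makes the link intrusive: the runtime never owns per-request memory, which is exactly the property the comment above says libdispatch’s API shape makes hard.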
What I’m saying is there is absolutely no technological reason you couldn’t slap a POSIX layer atop virtually anything, even if it wasn’t originally designed for it. Hell, I would even suggest you could go all-out and design this “flexible innovative system” and have someone else put a POSIX layer atop it.
Well there’s at least one, and the article starts into this a little bit: That POSIX layer you’re talking about takes up space and CPU, so if you’re designing a small system (or even a “big” one optimised for cost or power efficiency) you might like to have that on the negotiating table.
I heard a story about a chap who sold Forth chips, and every time he tried to break out, they would ask for a POSIX demo. They eventually made one, and of course it was slow and made everything warm, so it didn’t help. Now if you know Forth, this makes sense, but if you don’t know Forth – and heck, clearly management didn’t either – you might not understand why you can’t have your cake and eat it too, so “slapping a POSIX layer atop” might even make sense. But Forth systems are really different – really ideal if you can break your problem down into a bunch of little state machines – and it’s hard to sell that to someone whose problem is buying software.
Years later, I worked for a company that sold databases, and a frequent complaint voiced by the market, at trade shows and in the press, was that they didn’t have an SQL layer. So they made one, but it really just handled ODBC and some basic syntactic differences – maybe it was barely SQL92 if you squinted – so the complaint continued to be heard in the market, and the company made another SQL layer. When I joined they were starting the fourth or fifth version, and I’m like: this is just like the Forth systems!
But then, even Windows is giving up and making WSL2 a first-class citizen. This isn’t because of some weird conspiracy to make all platforms POSIX. It is because the POSIX paradigm has evolved
This might be more to do with the value of Linux as opposed to POSIX. For many developers (maybe even most), Linux is hands-down the best development environment you can have, even if your target is Windows or Mac or tiny Forth chips. I don’t think it’s because of POSIX, or really any one thing, but I do think that if something else had been better, Microsoft probably would have used that instead (or in addition to it: look at how they’re treating the web platform with Edge!)
That being said, I think POSIX was an important part of why Linux is successful: once upon a time, Linux was a pretty goofy system, and at that time a lot of patches were simply justified as compliance with POSIX, which rapidly expanded the suite of software Linux had access to. Having access to a pretty-good spec and standard meant people who ported programs to early Linux fixed those problems in the right place (the kernel and/or libc) instead of adding another #ifdef __linux__.
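As an illustration of the workaround pattern a good spec displaces – a hypothetical snippet of the per-platform branch that porters could instead fix once in the kernel or libc (the sysconf name below is a common glibc extension, not something POSIX itself guarantees):

```c
/* The per-platform detour: one more branch to maintain in every
 * ported program, instead of one conformance fix in kernel/libc. */
#include <stdio.h>
#include <unistd.h>

static long cpu_count(void) {
#ifdef __linux__
    /* glibc extension, not required by POSIX itself */
    return sysconf(_SC_NPROCESSORS_ONLN);
#else
    return 1; /* conservative fallback for "other Unix" */
#endif
}

int main(void) {
    printf("cpus: %ld\n", cpu_count());
    return 0;
}
```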
That POSIX layer you’re talking about takes up space and CPU, so if you’re designing a small system (or even a “big” one optimised for cost or power efficiency) you might like to have that on the negotiating table.
I can appreciate that. I focused on that because the article spent so much time waxing poetic about how it’s “hard” to find a computer with less than “tens of CPUs”. At that scale, it would be equally “hard” to justify not having a POSIX layer.
A chip designed to run Forth would be quite an interesting system! I don’t know if I’ve ever heard about one. I know of LispMs, and some of the specialised hardware to accelerate FORTRAN once upon a time.
they didn’t have an SQL layer, so they made one
You can make an SQL layer atop pretty much any database, even non-relational ones, if you squint hard enough. I suppose it’s the same thing with POSIX layers. Not always the best idea, but the standards are generous enough in their allowances that it can be done.
POSIX was an important part of why Linux is successful
Yes. In the early days, it gained Linux a lot of software with very little porting effort. Now, it makes it easy to port workloads off other Unix platforms (like Solaris). In the future, it might just be the way that Next-New-OS bridges to bring Linux workloads to it.
These guys make Forth chips, 144 “cpus” to a die, which is great for some applications, but POSIX is much too big to fit on even one of those chips.
Quite possibly we are seeing that right now with the “containerisation” fetish.
first of all, programming in Excel is programming
secondly:
it is not. the point of a program is to accomplish a task. “transmuting data between states” has no meaning if it’s not in service of something larger
snej says he can’t find anything to argue with. Neither can I, but I also can’t find anything to agree with.
Is Posix good? I don’t know. Is it bad? I don’t know. And this article doesn’t say anything about what a non-Posix OS would look like. It just sells us a dream that they’d be better.
I’m a bit surprised George didn’t mention Timothy Roscoe here. He has a much better rant on this subject than PHK.
Link to Roscoe’s thoughts: https://www.usenix.org/conference/atc21/presentation/fri-keynote – one of the takeaways is that the “OS controls everything” mental model is not really applicable to modern hardware, with third-party firmware controlling substantial bits of the system through opaque memory spaces.
Most attempts to handle the plumbing problem have followed the age-old software paradigm of adding yet another abstraction. Hiding the plumbing is good, but once your toilet overflows for the 10th time, maybe it’s time to change the plumbing rather than buy a longer snake.
Brilliant metaphor. I can’t find anything to argue with in this article. The author does at one point say “the only two system APIs in use now are POSIX and Windows”, which is wrong, but later mentions embedded programming and the variety of APIs found there. In my ESP32 foray I found it interesting to explore a world with very different APIs for files and networking.
There is still a very meaningful difference between even the fastest SSDs and RAM. SSDs are not byte-addressable, they have limited wear cycles, and they are still hundreds of times slower than RAM. While we’re not waiting thousands of clock cycles anymore for data from a spinning disk, it still makes sense to distinguish the two in our APIs.
The pthreads API (part of Posix) remains the most common way for programmers to write code that can take advantage of current multi-core designs — an API so hard to use that papers have been written about how no programmer should ever try to use it.
Don’t use pthreads. Okay. I’ll split this into multiple processes. Now I just need to pass data between the two using a file descriptor. d’OH! Right back to plumbing code.
More seriously, I do wonder what we would get from considering what standard data manipulations should be provided by an operating system. e.g. A common API across operating systems for reading table data from files seems useful at the level at which many applications are programmed today.
Isn’t this just sqlite?
While sqlite fills that role in one particular way, I’m suggesting a specific enumeration akin to POSIX whereby operating systems agree to accelerate application development by intentionally avoiding differentiation in specific areas. I am also suggesting a common API for multiple table formats. That might be a common C API shared among operating systems, but a modern analog of POSIX might benefit from expanding the potential role played by the operating system. e.g. there could be an additional standard input akin to stdin that would allow an operating system service to stream a specific tabular data structure into programs. Think nushell-adjacent, but something we could assume is available everywhere our applications might be deployed.
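As a purely hypothetical sketch of what that could look like in C – every name below is invented; nothing like this exists in any OS today – imagine a record-oriented analogue of read(2), fed by an OS service on a well-known descriptor alongside stdin/stdout/stderr:

```c
#include <stddef.h>
#include <stdio.h>

/* Entirely hypothetical "standard tabular input" API. */
struct field  { const char *name; const char *value; };
struct record { size_t nfields; struct field *fields; };

/* Stub so the sketch compiles; a real system service would back this. */
static int tab_read(int tabfd, struct record *out) {
    (void)tabfd; (void)out;
    return -1; /* pretend end-of-stream */
}

int main(void) {
    struct record rec;
    /* descriptor 3 imagined as a well-known slot next to 0/1/2 */
    while (tab_read(3, &rec) == 0) {
        for (size_t i = 0; i < rec.nfields; i++)
            printf("%s=%s\t", rec.fields[i].name, rec.fields[i].value);
        putchar('\n');
    }
    return 0;
}
```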
This was tried multiple times, e.g. RMS; the majority of people really seem to prefer files as the finest granularity of structure at the OS level.
While I can agree that record-level APIs have been provided by operating systems in the past, I wouldn’t draw the same conclusion re: preferences today. It’s notable that record-level APIs haven’t actually been provided by a modern operating system in this century. Due to the growth of the industry, most engineers have no experience using them and thus haven’t made a particular choice. They were quite popular on the platforms that provided them. Those platforms mostly disappeared due to unrelated choices by the vendors.
Previous efforts at record-level APIs are also significantly different from what I’m suggesting here. They were attempts by vendors at providing differentiation for their products. While vendors did share some common conventions and technologies, the interfaces for using them and the availability differed not just among vendors but among different products from the same vendor. I’m proposing this in the spirit of POSIX, whereby vendors realized that differentiation was limiting the overall growth potential of the industry and agreed to limit differentiation in some areas.
Table data was just one example. I chose it because it relates to the question posed in the article. Container-related APIs would be another excellent candidate. Rather than having Docker Engine ship a VM running another OS, it would be great if Apple, Microsoft, and Linux vendors got together to define a baseline set of APIs related to namespacing and resource allocation that they can all agree to support long-term.
Those platforms mostly disappeared due to unrelated choices by the vendors.
I don’t necessarily disagree, but I think it is telling that NT chose not to provide this kind of API at such a deep level. Instead you have options like ESE layered over the stream of bytes.
I’m proposing this in the spirit of POSIX, whereby vendors realized that differentiation was limiting the overall growth potential of the industry and agreed to limit differentiation in some areas.
There’s not a named standard, but this has effectively already happened IMO, if not in the way you might prefer: everybody has limited themselves to directories of files that are byte streams – even, say, MacOS, by removing the resource fork. My understanding of POSIX is that it is mostly a synthesis of existing implementation choices. As you rightly point out, no OS in common use has table-based IO to constrain or unify.
Developers are happy to ship sqlite (already mentioned) with their programs; it doesn’t need to be part of the OS as such – the bag of bytes is “good enough”. It would be handy if there were some table-oriented common API to target, but not as handy as maybe the “POSIX of GUIs (Windows vs. Mac vs. X vs. Wayland)” or the “POSIX of Rio/io_uring/kqueue/epoll”, IMO.
it would be great if Apple, Microsoft, and linux vendors got together to define a baseline set of APIs related to namespacing and resource allocation that they can all agree to support long-term.
Given that those three don’t even necessarily have the same resources, or measure them in the same ways, I am skeptical that this is feasible, at least not without several rounds of lower-level de-differentiation happening first.
I will say that:
I’m proposing this in the spirit of POSIX, whereby vendors realized that differentiation was limiting the overall growth potential of the industry and agreed to limit differentiation in some areas.
can still make sense as an idea; I think OpenTelemetry is a good recent example.