1. 29
  1. 8

    In programming languages, a VM/runtime can be a narrow waist. Java, Scala, Clojure, and Kotlin all emit JVM bytecode; you can write things like libraries, profilers, package repositories, tree shakers, etc. that work for all of them, but each compiler can introduce radically different semantics upstream of the compilation process, provided those semantics can be expressed in the common bytecode. The JVM is of course the most widespread example of this, but you see the same thing on .NET, BEAM, and the Lua runtime as well.

    1. 7

      This observation really clarifies why IPv6 has been so hard: it’s attempting to replace the universal thing that ensures interoperability. Even though it’s a relatively straightforward code update, the cost isn’t zero, and failing to have a functional, updated implementation cuts an entire branch off from participating in the network - i.e., it cuts off many applications, many nodes, etc.

      1. 6

        The most important point is that narrow waists are necessarily a compromise. They make systems possible and feasible, not optimal.

        Thanks for saying this explicitly. It gives me reassurance about the direction I’ve been going with my AccessKit project, which could be considered a narrow waist for GUI accessibility; it’s attempting to allow n GUI implementations to make themselves accessible (e.g. to blind users) on m platforms, using multiple programming languages (three variables!), through a single cross-platform abstraction. In some ways it will probably be less efficient than the current bespoke accessibility implementations, in the handful of projects that had sufficient resources to dedicate to accessibility, but for the users who are able to use niche applications for the first time (if I succeed in getting this adopted), it will be infinitely better than what was available before.

        1. 3

          Yes I think of this as the “lowest common denominator” problem … Just like I wrote that “Unix is equally inconvenient for everyone”, people have the same experience with protobufs. It’s nice for C++ but everyone else has problems because they’re forced into a subset of their language, for interoperability.

          But it’s possible to design “escape hatches”, and they are key to evolution (it would be nice to catalog these explicitly). So yes, it’s possible to do this well or badly, and the tradeoff is more worth it in some domains than in others. In networking it’s obviously worth it – it relates strongly to Metcalfe’s law, i.e. the value of a network grows quadratically with the number of nodes that can interoperate.

          The Hacker News thread has some examples in application domains like images and animation: https://news.ycombinator.com/item?id=30483914

          Lots of people struggle with this, but it’s pervasive and essential!

          1. 1

            I almost went with Protobuf for AccessKit, but I found that it was too much of a straitjacket, especially since my schema isn’t frozen yet. For example, I didn’t want to have to assign numbers to all the fields. Protobuf also naturally doesn’t support Rust features like Option and enums containing data. So, since my current funding source is a team working on Rust projects, I decided to define the schema in Rust, then use JSON (or technically, any format supported by the Rust serde library) as my common denominator. I do wonder if giving Rust preferential treatment in this way is a mistake, but at least there’s still a lowest common denominator for everyone else.
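            To make this concrete, here’s a minimal sketch of the kind of schema serde handles directly (not the actual AccessKit schema; the type and field names are made up, and it assumes the serde and serde_json crates): an Option field and an enum whose variants carry data, neither of which maps cleanly onto Protobuf, plus no field numbers anywhere.

            ```rust
            // Hypothetical schema sketch, NOT the real AccessKit types.
            use serde::{Deserialize, Serialize};

            #[derive(Serialize, Deserialize)]
            #[serde(tag = "role", rename_all = "snake_case")]
            enum Node {
                // Variants carrying different data -- awkward to express as a
                // Protobuf oneof plus separate message types.
                Button { label: String },
                Slider { min: f64, max: f64, value: f64 },
                Text { content: String },
            }

            #[derive(Serialize, Deserialize)]
            struct TreeUpdate {
                // Option maps naturally to "field present or absent" in JSON.
                focus: Option<u64>,
                nodes: Vec<Node>,
            }

            fn main() -> serde_json::Result<()> {
                let update = TreeUpdate {
                    focus: Some(1),
                    nodes: vec![Node::Button { label: "OK".into() }],
                };
                // No field numbers to assign; the schema can keep evolving while unfrozen.
                println!("{}", serde_json::to_string_pretty(&update)?);
                Ok(())
            }
            ```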

        2. 3

          I find the parallel between IP as a narrow waist of the internet and bytes/text as the narrow waist of shell commands a bit lacking. An email client or web browser never has to concern itself with IP, as pointed out in the article, whilst Unix command line tools all drop down to the rawest of formats with no way to negotiate a higher level protocol like SMTP or HTTP.

          1. 1

            The commonality is the O(M x N) property.

            But I would say you are not limited to fiddling with raw text in shell. There are many projects that use pipelines of CSV, JSON, and HTML here:

            https://github.com/oilshell/oil/wiki/Structured-Data-in-Oil#projects

            I don’t think any of them support a Content-Type. But the point is that you could build that if you wanted to! Pipelines are unstructured but you can give them whatever structure you want.
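            As a minimal sketch of that idea (assuming the serde_json crate; the “size” field and the filter condition are made up), here’s one stage of such a pipeline. The pipe itself is still plain bytes, and JSON-per-line is just a convention the programs on either end agree on.

            ```rust
            // Hypothetical pipeline stage: parse each stdin line as JSON, keep the
            // records whose "size" exceeds 1 KiB, and re-emit them unchanged.
            use std::io::{self, BufRead, Write};

            fn main() -> io::Result<()> {
                let stdin = io::stdin();
                let stdout = io::stdout();
                let mut out = stdout.lock();
                for line in stdin.lock().lines() {
                    let line = line?;
                    // Lines that aren't JSON are silently skipped in this sketch.
                    if let Ok(value) = serde_json::from_str::<serde_json::Value>(&line) {
                        if value.get("size").and_then(|v| v.as_u64()).unwrap_or(0) > 1024 {
                            writeln!(out, "{}", value)?;
                        }
                    }
                }
                Ok(())
            }
            ```

            A hypothetical invocation would be something like `producer | this-filter | consumer`, and any program in the pipeline remains free to ignore the structure and treat the stream as plain text.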

            1. 1

              Every such project must include its own serialising/deserialising routines to pass that narrow waist, which is the equivalent of every email client and web browser implementing its own TCP/IP stack.

              1. 1

                Not really; all that parsing etc. can be just a library call away, à la libxo (https://www.unix.com/man-page/freebsd/3/libxo/), which gives CLI programs on FreeBSD a way to produce structured output as well as normal text output. That programmers choose to serialize/deserialize in bespoke ways all the time is on them.

                1. 1

                  Hm I don’t agree, because sockets also work on bytes. So if you have a Word document, to send it across the network, you have to serialize it. Ditto for a spreadsheet, a 3D model, etc.

                  I think the more accurate analogy is if the kernel didn’t have pipes or files at all! And applications had to “fend for themselves” to interoperate.

                  Actually you have a very good example of this with iOS and Android! Have you ever seen the “Send With” or “Share” buttons in mobile apps? They ask you to share to APPLICATIONS directly – share with text message, with Google Hangouts, etc. It’s an M x N explosion, although I’m not sure if they implement it with M x N code or some kind of plugin system (Intents or whatever?)

                  So these OSes are not data-centric. Unix is data-centric.


                  I think you are suggesting that the kernel have support for structured data. There is no other way to avoid parsing and serializing.

                  However my basic point is that nobody can agree on what the narrow waist is. Every language chooses a different one.

                  This might be on the blog already, but there are very long discussions with the authors of Elvish and Rash, two alternative shells, here:

                  https://lobste.rs/s/ww7fw4/unix_shell_history_trivia#c_cpfoyj

                  https://lobste.rs/s/pdpjvo/google_zx_3_0_release#c_wbt31y

                  My basic viewpoint is that the shell needs support for structured data, and Oil is getting that. It will have routines to make parsing easier, so that not every shell script has to do its own parsing.

                  But the structure is optional and “projected” onto, or “backward compatible” with, byte streams, via JSON, QTT (a TSV upgrade), and HTML.

                  So Oil doesn’t dream of having structured data in the kernel, unlike other shells … But that seems mostly academic because I don’t think they will get it :-/ Also even if the kernel had support, it’s impossible for sockets to grow support. You must parse and serialize when using sockets. So you would end up with 2 tiers again!


                  Slogans that will appear on the blog:

                  • The lowest common denominator between an Elvish, a Rash, and a nushell program is a shell script (operating on byte streams). Because they don’t agree on their data model!
                    • This is not supposed to be a criticism of those shells – they could be more convenient than shell or Oil in dealing with certain types of data in certain domains. But I predict the need to bridge them is real and not theoretical and would be done with shell scripts.
                  • The lowest common denominator between a Racket, a Clojure, and a Common Lisp program is a shell script as well (operating on byte streams). Not s-expressions! Because, again, they don’t agree on the model.

                  Byte streams are the model that everyone agrees on, in part because the kernel and network support it, but I think it’s more fundamental than that even.

                  It requires O(M + N) code to parse and serialize, not O(M x N).
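                  As a toy illustration of that counting argument (nothing Oil-specific; the producers, consumers, and field names are all made up, and it assumes the serde_json crate): each of the M producers serializes its own model into the shared format once, and each of the N consumers parses that format once, so adding a program costs one codec rather than one converter per counterpart.

                  ```rust
                  // Toy example of the O(M + N) property, with made-up producers and consumers.
                  use serde_json::{json, Value};

                  // M producers: each serializes its own data model into the common format once.
                  fn producer_csv_row(fields: &[&str]) -> Value {
                      json!({ "kind": "row", "fields": fields })
                  }
                  fn producer_log_entry(level: &str, msg: &str) -> Value {
                      json!({ "kind": "log", "level": level, "msg": msg })
                  }

                  // N consumers: each parses the common format once, whatever produced it.
                  fn consumer_count_fields(v: &Value) -> usize {
                      v.get("fields").and_then(|f| f.as_array()).map_or(0, |a| a.len())
                  }
                  fn consumer_is_error(v: &Value) -> bool {
                      v.get("level").and_then(|l| l.as_str()) == Some("error")
                  }

                  fn main() {
                      // 2 producers + 2 consumers = 4 codecs, instead of 2 x 2 pairwise converters.
                      let stream = vec![
                          producer_csv_row(&["a", "b", "c"]),
                          producer_log_entry("error", "disk full"),
                      ];
                      for v in &stream {
                          println!("{} fields, error={}", consumer_count_fields(v), consumer_is_error(v));
                      }
                  }
                  ```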

                  Again, practically speaking, Oil will help you with this problem. It’s the only Bourne-style shell with JSON support, etc.

            2. 2

              I could use some help with this, as mentioned in the article:

              1. On the history/origin of the idea. The earliest writing I found was from 1994.
                 • Another obvious question I couldn’t find an answer to is what was the first compiler that had an IR and targeted multiple ISAs!
                 • i.e. some precursor to the SUIF work, also from 1994, which mentions a lot of tools but only a MIPS back end: https://suif.stanford.edu/suif/suif1/suif-overview/suif.html
              2. Some kind of parameterized script that could generate the narrow waist diagrams. I don’t know what the best tool is – I mentioned pikchr and asymptote in the post.
              1. 3

                I also referenced this article from the summer by @brandonbloom :)

                https://lobste.rs/s/mjo19d/unix_microservice_platforms

                1. 3

                  Another obvious question I couldn’t find an answer to is what was the first compiler that had an IR and targeted multiple ISAs!

                  I am not sure it was quite the first, but UCSD Pascal (1977) normally gets the credit for this. The front end compiled to a stack-based P-Code, the back end compiled P-Code to native code. Porting the compiler required writing a new back end. The Portable C Compiler was a couple of years later and had a similarly narrow waist but did not have as clearly defined an IR.

                  You might count System/360 (1964) as the root for a lot of these. It used a microarchitecture-agnostic ISA that was implemented as microcode on a variety of different systems and was later used as the input to static compilers for newer architectures.

                  UCSD Pascal probably also takes the blame for convincing people for 20+ years that stack-based IRs were a good idea. Some people still cling to this belief, in spite of overwhelming evidence to the contrary.

                  1. 2

                    Ah OK, I thought that was more of a bytecode like Java’s, but there is a fuzzy line between bytecodes and internal IRs.

                    https://en.wikipedia.org/wiki/UCSD_Pascal

                    It’s definitely true that CPU designers do a lot of compiler stuff in hardware! I kind of liked the Mill project for that reason, but it seems like it ran out of steam (?) They were trying to break the narrow waist, which has huge network effects, so it’s basically impossible without millions or billions of dollars. (That is why I’m evolving shell, not breaking it.)

                    1. 3

                      I kind of liked the Mill project for that reason, but it seems like it ran out of steam (?)

                      I didn’t like the Mill as an architecture but I really liked the problem that they are trying to solve. Architectural registers are a terrible abstraction. The compiler generates an SSA representation early in the optimisation pipeline, if not in the front end directly. It then maintains an SSA form through instruction selection and right up until register allocation. It then does a lot of work turning an explicit data-flow representation into a finite-register-set representation. A modern superscalar out-of-order CPU then tries to infer an SSA representation for use in register renaming.

                      Given that you start with an SSA form and end with an SSA form, it feels like there must be something better than a small set of mutable registers, yet that seems to be the best anyone’s come up with. I had high hopes for EDGE architectures but that approach doesn’t seem to have succeeded.

                      1. 3

                        Yes I remember having this conversation a few years ago with an engineer who was really knowledgeable about microcode.

                        The compiler uses SSA, and then basically does a topological sort to emit code for the ISA, and then the CPU tries to invert the topological sort to recover the data dependencies … it does seem very silly.

                        It reminded me that there were a whole bunch of dataflow architectures in the ’80s. I remember coming across this “dead branch” of ISA design when I was researching dataflow languages like Lucid.

                        It looks like it has a wikipedia page: https://en.wikipedia.org/wiki/Dataflow_architecture

                        I remember I asked why none of these architectures became popular, since it seems like an obvious idea, and he didn’t have a good answer.

                        I guess another thing I should write is that narrow waists can inhibit innovation! They have so much inertia that hardware and software people hack around each side of the ISA and don’t talk to each other. This was basically the point of Hennessy and Patterson’s Turing Award lecture on hardware/software co-design.

                        I watched that a few years ago https://old.reddit.com/r/ProgrammingLanguages/comments/9kzkm7/john_hennessy_and_david_patterson_2017_acm_am/

                        1. 2

                          A few things killed dataflow architectures. The biggest was the heuristic that the original RISC papers identify: an average basic block has 7 instructions. Within a basic block, you can do explicit dataflow. Between basic blocks, you need some representation for Phi nodes: something that lets two paths to the same basic block deliver different values. This ends up looking like a set of architectural registers (at the very least, you need parameters on each block, but then you also need to thread parameters through basic blocks that don’t touch them, which can be an unbounded amount of state).

                          Once you’ve added the register-like abstraction, you discover that it’s a bottleneck for performance and so you end up needing to build register renaming. You’re then implementing two different ways of doing value forwarding, both of which burn power and both of which need to be fast.

                          I’m still sure that there must be something better than architectural registers. In a typical program, I think around 66% of values are used by precisely one instruction, and a similar proportion are used only within the basic block where they are defined.

                          There’s one big improvement that you could make for big pipelines: an explicit register kill instruction. x86 actually has this: if you xor a register with itself then the decoder points the architectural register at the canonical zero-value rename register. If you do this within a basic block then (during speculation) the CPU doesn’t need to store any intermediate values calculated within the block. In small hot loops, I’ve seen this double the total instruction throughput. Without it, the scheduler can’t tell that the value is really dead: it may be incorrectly speculating after the branch and the correct destination might read the value. With the explicit kill, any read will return zero.

                  2. 3

                    I think this is the idea behind C header files. They are the narrow waist to all the subroutines in a library.

                    I think the new narrow waist of apps is JSON. It is human-readable, and (almost) everything can generate and read it. The new narrow waist of systems is YAML. Everyone can inspect the contents without much in the way of external tools or machinery.

                    Before, XML/SOAP was a waist, although not a very narrow one, so maybe that’s why it failed. You always need some sort of library and schema to work with XML.

                    Alongside JSON/XML there is always REST, with its narrow waist of verbs for CRUD.

                    For databases, I think SQL could be a narrow waist, but so many libraries have come into being to narrow that waist further that it isn’t really one itself. People also tried to make a narrow waist with NoSQL, but it has enough tradeoffs that we’re stuck with two waists, neither of them narrow enough.

                    I think, for me, the slight reduction in readability of YAML/JSON is worth the tradeoff for parsability. The ability for both human and machine to parse it makes it 5× more complicated and 100× more useful. It’s part of the reason I’m so excited about all these new shells with actual structured output. Now, if only coreutils would get their act together…

                    1. 4

                      Yes JSON is definitely a narrow waist that I will mention on the blog. It’s notable because it was explicitly designed, unlike CSV! Narrow waists can be good or bad, and we should try to design good ones. Although it’s true that it does have some constraints inherited from JavaScript.

                      FWIW Oil is (slowly) getting support for structured data, and it doesn’t depend on “coreutils getting their act together” … Unlike other shells, which couple the tools and the shell, in Oil they’re still decoupled. You can use any of these CSV, XML, and JSON projects:

                      https://github.com/oilshell/oil/wiki/Structured-Data-in-Oil#projects

                      You can use coreutils, or you can write your own new tools. Oil has JSON support, and it will get “QTT” support (a TSV upgrade).

                      This is related to “shells should shell out”: https://www.oilshell.org/blog/2021/01/philosophy-design.html#shells-should-shell-out

                      1. 1

                        If you’re writing a program that emits or consumes structured data and you want to maximize interoperability with other structured-data producers/consumers, it’s hard to beat newline-delimited JSON (NDJSON) these days. Every language can consume it, and everything already supports Unix pipes. It’s the new lowest common denominator.
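                        For instance, here’s a minimal sketch of the producer side (assuming the serde and serde_json crates; the Event type is made up). The only convention is one compact JSON object per line.

                        ```rust
                        // Emit newline-delimited JSON: one compact object per line, so any
                        // downstream consumer can split on newlines and parse each line alone.
                        use serde::Serialize;

                        #[derive(Serialize)]
                        struct Event {
                            // Hypothetical fields, just for illustration.
                            name: String,
                            bytes: u64,
                        }

                        fn main() -> serde_json::Result<()> {
                            let events = vec![
                                Event { name: "open".into(), bytes: 0 },
                                Event { name: "read".into(), bytes: 4096 },
                            ];
                            for e in &events {
                                // to_string (not to_string_pretty) guarantees no embedded newlines.
                                println!("{}", serde_json::to_string(e)?);
                            }
                            Ok(())
                        }
                        ```

                        Any consumer, in any language, can then read the stream a line at a time (a loop over stdin, `jq`, etc.) without sharing any library with the producer.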

                        1. 1

                          Oil can support that easily because it has JSON support, but it’s not the lowest common denominator – at least not for all of computing; byte streams are, and Unix approximately is. It’s at “level 1”, not “level 0”.

                          Talk to data scientists and AI researchers who use Python and R – they use CSV, and many don’t even know what JSON is (yes, really). Moreover, CSV, for all its flaws, is actually a more natural representation for their data.

                          Tabular data is fundamentally different from record-based data, and many people’s jobs revolve around it. (Other people’s jobs revolve more around JSON.)
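                          As a small sketch of that difference (assuming the csv and serde_json crates; the columns are made up): turning rows into records means pairing positions with names from the header row, and the result is flat, untyped name/value pairs, with none of the nesting or typing JSON allows.

                          ```rust
                          // Toy conversion from a made-up CSV table to one JSON record per row.
                          use serde_json::json;

                          fn main() -> Result<(), Box<dyn std::error::Error>> {
                              let table = "name,age\nalice,34\nbob,29\n";
                              let mut rdr = csv::Reader::from_reader(table.as_bytes());
                              let headers = rdr.headers()?.clone();
                              for rec in rdr.records() {
                                  let rec = rec?;
                                  // Pair each positional column with its header name.
                                  // Note: every value stays a string; CSV carries no types.
                                  let obj: serde_json::Map<String, serde_json::Value> = headers
                                      .iter()
                                      .zip(rec.iter())
                                      .map(|(name, value)| (name.to_string(), json!(value)))
                                      .collect();
                                  println!("{}", serde_json::Value::Object(obj));
                              }
                              Ok(())
                          }
                          ```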

                          Also talk to anyone who works in documents (archiving, publishing) – they use XML a lot (to convert to PDF, etc.). It’s semi-structured with metadata and annotations.

                          JSON is definitely more popular than it used to be, but it’s not universal.


                          I am very familiar with this; I “invented” ND-JSON independently in 2006, and used it for work.

                          https://code.google.com/archive/p/chutils/source/default/source?page=1

                          I might dig this up for the blog.

                          I have the source somewhere on my hard drive, but one thing I did was query LDAP for employees, and turn it into JSON. And then query some other sources like source control, and turn that into JSON, and join, etc.

                          It’s useful but it’s not universal! The funny thing is that I didn’t know shell very well at the time, and I wanted to avoid using sed, etc. But over time I just learned shell and learned how to compose it with Python, and that solved my problems very effectively. Over 10+ years there were only a few problems where I really needed ND-JSON. That could obviously be different for other people with different problems.

                          Often the pipeline model itself just breaks down before the structure becomes an issue.

                          JSON was a few years old when I tried this. And the funny thing is that I showed it to Guido van Rossum, who was unaware of JSON, and he said “that’s just Python” (which is very close to true). AFAIR this was also before JSON was in the Python stdlib.

                          He also thought the JSON-over-pipes idea was cool, but like I said, I found that it didn’t apply to a lot of problems (i.e. basically it wasn’t worth the effort and you could solve the problem another way). I wanted it to be “universal” but it wasn’t.

                          1. 2

                            I’m in data journalism, and it’s true that CSV is the lingua franca there.

                      2. 3

                        Before, XML/SOAP was a waist

                        I admit that when I first read that with my screen reader, I initially heard it as “XML/SOAP was a waste”. Yeah, I guess that’s a cheap shot.

                    2. 1

                      Some projects try to trim the “top” part of the hourglass too, like https://en.m.wikipedia.org/wiki/Recursive_Internetwork_Architecture

                      Or its, IMHO, more advanced and mature (if a bit divergent) iteration: https://ouroboros.rocks/docs/concepts/problem_osi/