1. 68
    1. 14

      I like it.

      To make it run fast, I’d implement all of the abstractions efficiently, then make a Linux FUSE file system for viewing Hull data as a file system.

      1. 3

        I’ve heard criticism of UNIX, but the idea of the filesystem as a universal perspective on the computer and its services/programs is genius. This shows it’s still fertile ground, even after all this time.

        1. 6

          The NLS/Augment system that was famously demoed in 1968 was based on similar ideas.

          All of the data in NLS was organized into a single hierarchical namespace. This included hierarchically structured documents: each section, paragraph, and item in a bulleted list was a separate indexable node in the tree. It also included hierarchically structured source code: each statement was a separate indexable node in the tree.

          The namespace wasn’t local to a particular “workstation”; it was shared by all users of the NLS system, and you could collaboratively edit documents. Each user sat in front of their own terminal, with multiple tiled windows and a mouse, and could open windows on shared documents and edit them simultaneously with other users. This was also the first hypertext system, so pathnames also served as URLs. A year later, the NLS machine was one of the first two computers connected in the very first ARPANET link.

          I feel that the Unix file system namespace is just a shadow of this idea. Too bad the NLS project went off the rails, or we could have had the world wide web in the 1970s.

        2. 2

          Afaict the real problem is the opposite: filesystems are a subset of the richer functionality you can get with a more general-purpose namespace that isn’t necessarily designed to live on a disk.

      2. 2

        Conversely, one could create an introspection system for something like Ruby that displays all its variables like this via a FUSE FS.

    2. 10

      This is a fun Unix-y idea, but even on tmpfs (which uses memory rather than disk/SSD), I suspect it will be around 2 orders of magnitude slower than a “normal” shell, and that does matter.

      It looks like it turns every variable access into a file system access, which is at minimum an extra context switch. If you use FUSE, I believe it’s two context switches.

      A variable access is basically a string hash lookup in most interpreters, including most shells, and that’s pretty fast. I suspect it takes 10-100 nanoseconds, i.e. you can do 10-100M / second.

      https://computers-are-fast.github.io/ (first example is clocked at 68 M /s)
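      The hash-lookup estimate is easy to sanity-check. A rough Python sketch (illustrative only; absolute numbers depend on your machine and interpreter, and `timeit` adds some overhead per iteration):

      ```python
      import timeit

      # A shell variable access is roughly a string-keyed hash lookup.
      # Time a dict lookup to get a ballpark nanoseconds-per-access figure.
      env = {"PATH": "/usr/bin", "HOME": "/home/me", "SHELL": "/bin/sh"}

      n = 1_000_000
      seconds = timeit.timeit('env["PATH"]', globals={"env": env}, number=n)
      ns_per_lookup = seconds / n * 1e9
      print(f"{ns_per_lookup:.0f} ns per lookup")
      ```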

      On the other hand a context switch is single digit microseconds, which is 2 orders of magnitude slower:


      Well I already have experience with a shell that’s 2 orders of magnitude too slow! That’s Oil, which I’m working on speeding up right now :) http://www.oilshell.org/blog/2019/02/05.html

      Some people thought writing a shell in Python was a bad idea. Some people thought it would be perfectly fine and fast enough. I think both groups of people are wrong :) There are benefits to writing it in a high level language (6-8x fewer lines of code, which means fewer bugs). But it’s also too slow.

      As I’ve written in a couple of places, interactive completion functions are “CPU bound” and are already quite slow – easily milliseconds, or even tens or hundreds of ms, which is definitely human-measurable. So you don’t want to slow those down by 100x.

      So a shell doesn’t have to be fast for all shell scripts, since many of them are waiting on I/O. But it does have to be fast for SOME important scripts.

      This is basically “orthogonal persistence” for a shell, and I see the appeal. But I do think there is a reason that those kinds of systems haven’t caught on, e.g. in particular because there is an important difference between ephemeral program state and the “contract” of your program. It’s useful to be able to change the ephemeral parts without worrying about breaking anyone.

      1. 3

        https://computers-are-fast.github.io/ (first example is clocked at 68 M /s)

        Score: ⁰⁄₁₆

        I got only the last two right, and was within two orders of magnitude about half the time. Yikes, university does not prepare you with an intuition for actual orders of magnitude, as I suspected.

        1. 3

          Yes, I definitely didn’t know these things coming out of university. Fixing performance bugs in “legacy” codebases is one good way to get intuition for these things. And I still don’t get 100% of them right, but I do make an effort to develop the intuition.

          If you haven’t already seen them, it’s worth going over “latency numbers everyone should know”:



          I think it’s worth memorizing at first, but as you develop software, you will encounter different systems that are bottlenecked by different things, and they will be made “real”, so you don’t have to memorize them.

          And it’s worth adding other important operations to that hierarchy – particularly context switches (single digit microseconds) and interpreter bytecodes (tens to hundreds of ns)!

          Honestly, if you Google for this you’ll get a lot of disparate resources that aren’t that well presented… one thing I would like to do is distribute some shell scripts to measure these numbers on a commodity Unix box! I think it is mostly possible.
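          As a starting point, the gap between a hash lookup and a kernel crossing can be measured from Python (a rough sketch; `stat(2)` stands in for “any syscall”, and exact numbers vary by machine):

          ```python
          import os
          import timeit

          def bench(stmt, number=200_000):
              """Return approximate nanoseconds per execution of a snippet."""
              seconds = timeit.timeit(stmt, number=number, globals=globals())
              return seconds / number * 1e9

          d = {"x": 1}

          lookup_ns = bench("d['x']")      # interpreter-level hash lookup: tens of ns
          stat_ns = bench("os.stat('/')")  # a syscall forces a kernel crossing: much slower

          print(f"dict lookup: {lookup_ns:6.0f} ns")
          print(f"stat(2):     {stat_ns:6.0f} ns")
          ```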

      2. 1

        The performance problem can be addressed by having an in-memory cache and saving to disk only when the interpreter exits.

        As for the ephemeral part, I think you are talking about separation between API and implementation, right? That should still apply unless a third party starts messing with the content of the directory. My idea of how it would work was rather to create an ephemeral directory for each run of the program, a directory that would be private for the person running the program.
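        A minimal sketch of that write-back idea, assuming a plain JSON file as the on-disk format (the class and file names here are made up for illustration):

        ```python
        import atexit
        import json
        import os
        import tempfile

        class ShellVars:
            """In-memory variable store that only touches disk at startup and exit."""

            def __init__(self, path):
                self.path = path
                # Load the previous session's state, if any.
                if os.path.exists(path):
                    with open(path) as f:
                        self.vars = json.load(f)
                else:
                    self.vars = {}
                atexit.register(self.flush)  # write back once, when the interpreter exits

            def __getitem__(self, name):
                return self.vars[name]       # plain dict lookup: no I/O

            def __setitem__(self, name, value):
                self.vars[name] = value      # no I/O here either

            def flush(self):
                with open(self.path, "w") as f:
                    json.dump(self.vars, f)

        state = ShellVars(os.path.join(tempfile.gettempdir(), "shell_state.json"))
        state["greeting"] = "hello"
        ```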

        1. 3

          I guess I’m saying that it’s useful to have both memory and disk as separate things. I’ve read about a bunch of systems that have tried to get rid of the distinction, under the name “single level store” or “orthogonal persistence” / transparent persistence. Eros OS is one:


          Urbit is another recent one:


          I see the appeal, but I don’t think the benefits outweigh the costs.

          I do think the fork() idea is interesting. In particular I wondered if it would be useful to fork across machines by saving your state, copying it to another machine, and then resuming on a different code path from that state, much like Unix fork(). I prototyped two versions of a format called “OHeap” that can do that.


          I have shelved it for now, but it was supposed to replace Python’s marshal, pickle, and zipimport modules, which are mostly redundant.

          I think that simply having primitives to start fresh processes on other machines is more important than being able to use fork(). There is a conceptual appeal but I could not find much practical motivation.
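          Not OHeap itself, but the save/copy/resume shape is easy to sketch with pickle (illustrative only; a real version needs a portable, versioned format):

          ```python
          import os
          import pickle
          import tempfile

          def snapshot(state, path):
              """Serialize program state so another process (or machine) can pick it up."""
              with open(path, "wb") as f:
                  pickle.dump(state, f)

          def resume(path):
              """Load a snapshot and continue on a different code path, fork()-style."""
              with open(path, "rb") as f:
                  return pickle.load(f)

          path = os.path.join(tempfile.gettempdir(), "snap.bin")

          # "Parent" run: do some work, then snapshot.
          snapshot({"counter": 41, "todo": ["step2"]}, path)

          # "Child" run (could be on another machine, after copying the file):
          resumed = resume(path)
          resumed["counter"] += 1
          print(resumed["counter"])  # prints 42
          ```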

          1. 3

            I’ve read about a bunch of systems that have tried to get rid of the distinction, under the name “single level store” or “orthogonal persistence” / transparent persistence.

            Other examples that may be better known for SLS, many of which were used in practical systems:

            • Palm OS (for classic Palm PDAs, databases exist in memory)
            • Aegis (the Apollo Domain kernel; not actually “persistent” SLS – it’s more that everything is an mmapped file rather than persistent objects)
            • Genera (the OS for Symbolics Lisp Machines)
            • Phantom OS (Russian research/hobbyist OS)
            • IBM i (the OS of the AS/400, pointers also function as capabilities. Probably the most famous example of SLS)
    3. 7

      This… actually sounds both cool and implementable.

    4. 3

      This kind of thinking is going to be incredibly important in the next 10 years with Optane-like persistent RAM.

    5. 4

      Just a note, you can easily implement something like this as a typed EDSL in Haskell. Type inference and everything. You’ll probably just have to use something like a program counter instead of that TODO file idea.

    6. 2

      This is awesome and someone should make it. It could be a fantastic teaching tool.

      Entertainingly, all the properties you get from this are properties that already exist with regular machine-code programs. You can stop and start them, copy them while running, checkpoint and roll them back… it’s just harder. It’s very interesting to see how magical it feels when all the hard things become easy.

    7. 1

      Leveraging a file system like this reminds me a bit of GNU Hurd, or Plan 9.

    8. 1

      Extending Plan 9’s env device (http://man.cat-v.org/plan_9/3/env) would be a good place to start, likely combining it with proc (http://man.cat-v.org/plan_9/3/proc). To make the filesystem namespace more useful, you really need per-process namespaces (http://man.cat-v.org/plan_9/1/bind) like in Plan 9.

      Now the backup system feels really close to what LISP has, if you do it at process level. In other words, call/cc at file system level, which also affects processes per namespace, would be really nice.

      Next step would probably be removing the distinction of RAM and disks, and GC at whole system storage level.

      One can dream.

    9. 1

      I’m a big fan. Such fantastic implications from such a simple design decision.