1. 118

Midweek fun question, welcome to everybody who wants to try something new.

Blank sheet of paper, tabula rasa, fuck POSIX, burn bridges.

You’ve been given the chance to help design a new OS. What sorts of things are you looking for? What new (or stolen) features are going in? What are you looking to avoid? Are you making something that feels like ’nix, or something that has blotter paper in the man pages?

I’m curious what people come up with.

Guidelines:

  • Let’s limit ourselves to servers and desktops–no microcontrollers, no mainframes.
  • It’s totally okay to focus only on one particular part of the OS.
  • You can be as detailed as you want to be, and aspirational requirements are as worthy in this exercise as running implementations.
  • If you want to specify a particular processor architecture, go for it. If you need something that doesn’t exist, suggest it.
  • If you cite prior art (say, factotum), link to it for the enjoyment and edification of others.
  • Don’t assume we have to support legacy software or programming languages. At all.
  • Assume we have graphics cards and network cards, and that vendors all found Jesus and decided to cooperate with you on developing firmware and drivers.
  • Assume you can’t trust users to be nice to each other, but if you want to break that assumption to make a more interesting/elegant design, just mention it.
  • I’m generally interested in more low-level stuff like scheduling, filesystems, messaging, syscall stuff, but if you have a burning desire to talk about using React on top of the NT kernel go for it.
  • This shouldn’t need spelling out, but this is meant as a play space for ideas, so don’t be assholes to each other. If you disagree with a design decision, seek clarity and assume the person had a good reason for it. If somebody disagrees with you, assume they’re not trying to dunk on you.

So Lobsters, what’ve you got?

If you’re not going to reply with concrete ideas or suggestions, or if you’re going to talk about how there’s no room for a new OS, or how the effort should be spent elsewhere…please keep it to yourself (or PMs). This is meant to be a thread for those with dreams, madness, hubris, and more than a little mischief in their hearts.

  2. 40

    Something other than “everything is bytes”, for starters. The operating system should provide applications with a standard way of inputting and outputting structured data, be it via pipes, to files, …

    Also, a standard mechanism for applications to send messages to each other, preferably using the above structured format when passing data around. Seriously, IPC is one of the worst parts of modern OSes today.
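
    To make that concrete, here’s a rough sketch in Rust of what a structured pipe could look like instead of a byte pipe. Everything here is hypothetical and only illustrates the shape of the idea, not a real API:

    // Sketch only: `Value` is the OS-level structured datum, and
    // `StructuredPipe` stands in for a hypothetical kernel-provided channel.
    use std::collections::BTreeMap;

    /// The one data model every process speaks, instead of raw byte streams.
    enum Value {
        Null,
        Bool(bool),
        Int(i64),
        Float(f64),
        Text(String),
        Bytes(Vec<u8>),
        List(Vec<Value>),
        Record(BTreeMap<String, Value>),
    }

    enum PipeError { Closed, WouldBlock }

    /// Hypothetical OS object: a pipe that carries `Value`s, not bytes.
    trait StructuredPipe {
        fn send(&mut self, v: Value) -> Result<(), PipeError>;
        fn recv(&mut self) -> Result<Value, PipeError>;
    }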

    If we’re going utopic, then the operating system should only run managed code in an abstract VM via the scheduler, which can provide safety beyond what the hardware can. So basically it would be as if your entire operating system were Java and the kernel ran everything inside the JVM. (Just an example, I do not condone writing an operating system in Java.)

    I’m also liking what SerenityOS is doing with the LibCore/LibGfx/LibGui stuff. A “standard” set of stuff seems really cool because you know it will work as long as you’re on SerenityOS. While I’m all for freedom of choice, having a default set of stuff is nice.

    1. 20

      The operating system should provide applications with a standard way of inputting and outputting structured data, be it via pipes, to files

      I’d go so far as to say that processes should be able to share not only data structures, but closures.

      1. 4

        This has been tried a few times, it was super interesting. What comes to mind is Obliq, (to some extent) Modula-3, and things like Kali Scheme. Super fascinating work.

        1. 3

          Neat! Do you have a use-case in mind for interprocess closures?

          1. 4

            To me that sounds like the ultimate way to implement capabilities: a capability is just a procedure which can do certain things, which you can send to another process.

            1. 5

              This is one of the main things I had in mind too. In a language like Lua where closure environments are first-class, it’s a lot easier to build that kind of thing from scratch. I did this in a recent game I made where the in-game UI has access to a repl that lets you reconfigure the controls/HUD and stuff but doesn’t let you rewrite core game data: https://git.sr.ht/~technomancy/tremendous-quest-iv
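
              For anyone who hasn’t played with this style, here’s roughly the same idea sketched in Rust instead of Lua. All the names are made up; the point is that the closure closes over exactly the authority you hand out and nothing else:

              // Sketch: a "capability" is nothing more than a closure that
              // closes over the authority it is allowed to use. The receiver
              // can call it but cannot reach the underlying state any other way.
              struct Hud { scale: f32 }
              struct GameState { hud: Hud, secret_seed: u64 }

              /// Build a capability that can resize the HUD but can see nothing else.
              fn hud_capability(state: &mut GameState) -> impl FnMut(f32) + '_ {
                  let hud = &mut state.hud;      // close over only the HUD...
                  move |scale| hud.scale = scale // ...so `secret_seed` stays unreachable
              }

              fn main() {
                  let mut game = GameState { hud: Hud { scale: 1.0 }, secret_seed: 42 };
                  let mut resize_hud = hud_capability(&mut game); // hand this to the "repl"
                  resize_hud(1.5);
                  drop(resize_hud);                               // give the authority back
                  assert_eq!(game.secret_seed, 42);
                  println!("hud scale is now {}", game.hud.scale);
              }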

          2. 1

            I would be interested in seeing how the CPU-time-stealing and DoS problems that would arise from that could be solved.

          3. 17

            Digging into IPC a bit, I feel like Windows actually had some good stuff to say on the matter.

            I think the design space looks something like:

            • Messages vs streams (here is a cat picture vs here is a continuing generated sequence of cat pictures)
            • Broadcast messages vs narrowcast messages (notify all apps vs notify another app)
            • Known format vs unknown pile of bytes (the blob i’m giving you is an image/png versus lol i dunno here’s the size of the bytes and the blob, good luck!)
            • Cancellable/TTL vs not (if this message is not handled by this time, don’t deliver it)
            • Small messages versus big messages (here is a thumbnail of a cat versus the digitized CAT scan of a cat)

            I’m sure there are other axes, but that’s maybe a starting point. Also, fuck POSIX signals. Not in my OS.
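
            To make those axes concrete, here’s a rough sketch in Rust of a message descriptor that encodes them. Every name is hypothetical; this is a thought experiment, not an ABI proposal:

            // Sketch: one way to encode the axes above in a message descriptor.
            use std::time::Instant;

            enum Payload {
                Message(Vec<u8>),           // one self-contained cat picture
                Stream { channel_id: u64 }, // an ongoing sequence of cat pictures
            }

            enum Audience {
                Narrowcast { recipient: u64 }, // notify one specific app
                Broadcast { topic: String },   // notify everyone subscribed to a topic
            }

            struct Envelope {
                payload: Payload,
                audience: Audience,
                /// Known format ("image/png") or None for "good luck".
                content_type: Option<String>,
                /// Drop the message if it has not been delivered by this instant.
                deadline: Option<Instant>,
                /// Payloads bigger than this go through shared memory, not an inline copy.
                inline_limit: usize,
            }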

            1. 5

              Is a video of cats playing a message or a stream? Does it matter whether it’s 2mb or 2gb (or whether the goal is to display one frame at a time vs to copy the file somewhere)?

              1. 2

                It would likely depend on the reason the data is being transferred. Video pretty much always fits into the ‘streaming’ category if it’s going to be decoded and played, as the encoding allows parts of a file to be decoded independently of the other parts. Messages are for atomic chunks of data that only make sense when they’re complete. Transferring whole files over a message bus is probably a bad idea though; you’d likely want to instead pass a message that says “here’s a path to a file and some metadata, do what you want with it” and have the permissions model plug into the message bus so that applications can have temporary r/rw access to the file in question. Optionally, if you have a filesystem that supports COW and deduplication, you can efficiently and transparently copy the file for the other application’s use, and it can do whatever it wants with it without affecting the “original”.
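
                As a sketch (made-up names, not a real bus API), the “offer” message might look something like this:

                // Sketch: instead of pushing 2 GB through the bus, pass a reference
                // plus a temporary grant that the permission system honours.
                use std::time::Duration;

                enum Access { Read, ReadWrite }

                struct FileGrant {
                    path: String,        // or an unforgeable handle in a capability system
                    access: Access,
                    valid_for: Duration, // the grant expires on its own
                    /// If true, the receiver gets a cheap CoW copy it can scribble on
                    /// without touching the sender's original.
                    cow_copy: bool,
                }

                struct FileOfferMessage {
                    grant: FileGrant,
                    media_type: String,  // e.g. "video/mp4"
                    size_bytes: u64,
                }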

                1. 4

                  Which is why copy&paste is implemented the way it is!

                  Many people don’t realize it, but it’s not actually just some storage buffer. As long as the program is still running when you try to paste something, the two programs can talk to each other and negotiate the format they want.

                  That is why people sometimes hit odd bugs on Linux where the clipboard contents disappear when a program exits, or why PowerPoint sometimes asks you if you want to keep your large clipboard content when you try to exit.

            2. 13

              Something other than “everything is bytes”, for starters. The operating system should provide applications with a standard way of inputting and outputting structured data, be it via pipes, to files, …

              It’s a shame I can agree only once.

              Things like Records Management Services, ARexx, Messages and Ports on Amiga or OpenVMS’ Mailboxes (to say nothing of QIO), and the data structures of shared libraries on Amiga…

              Also, the fact that things like Poplog (which is an operating environment for a few different languages but allows cross-language calls), OpenVMS’s common language environment, or even the UCSD p-System aren’t more popular is sad to me.

              Honestly, I’ve thought about this a few times, and I’d love something that is:

              • an information utility like Multics
              • secure like seL4 and Multics
              • specified like seL4
              • distributed like Plan9/CLive
              • with rich libraries, ports, and plumbing rules
              • and separated like Qubes
              • with a virtual machine that is easy to inspect like LispM’s OSes, but easy to lock down like Bitfrost on One Laptop per Child…

              a man can dream.

              1. 7

                Something other than “everything is bytes”, for starters. The operating system should provide applications with a standard way of inputting and outputting structured data

                have you tried powershell

                1. 4

                  or https://www.nushell.sh/ for that matter

                2. 4

                  In many ways you can’t even remove the *shells from current OSes, IPC is so b0rked.

                  How can a shell communicate with a program it’s trying to invoke? Array of strings for options and a global key value dictionary of strings for environment variables.

                  Awful.

                  It should be able to introspect to find out the schema for the options (what options are available, what types they are…)

                  Environment variables are a reliability nightmare. Essentially hidden globals everywhere.

                  Pipes? The data is structured, but what is the schema? I can pipe this to that, but does it fit? Does it make sense…? Can I b0rk your ad hoc input parser? Sure I can; you scratched it together in half a day assuming only friendly inputs.

                  In many ways IPC is step zero to figure out, with all the ad hoc options parsers and ad hoc stdin/stdout parsers/formatters becoming secure, robust, and part of the OS.
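
                  As a sketch of what that introspection could look like (all names invented), a program could publish something like this and let the shell query it for completion and validation:

                  // Sketch: a machine-readable option schema instead of --help text.
                  enum OptionType { Flag, Int, Path, Text, Choice(Vec<String>) }

                  struct OptionSpec {
                      name: &'static str,
                      ty: OptionType,
                      required: bool,
                      doc: &'static str,
                  }

                  struct CommandSchema {
                      name: &'static str,
                      options: Vec<OptionSpec>,
                      /// Schema of the structured records written to stdout, so
                      /// "can I pipe this to that?" becomes a checkable question.
                      output_record: &'static str,
                  }

                  fn schema() -> CommandSchema {
                      CommandSchema {
                          name: "archive",
                          options: vec![
                              OptionSpec { name: "compress", ty: OptionType::Flag,
                                           required: false, doc: "compress entries" },
                              OptionSpec { name: "output", ty: OptionType::Path,
                                           required: true, doc: "where to write the archive" },
                          ],
                          output_record: "record { path: text, bytes_written: int }",
                      }
                  }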

                  1. 3

                    I agree wholeheartedly with the first part of your comment. But then there is this:

                    If we’re going utopic, then the operating system should only run managed code in an abstract VM via the scheduler, which can provide safety beyond what the hardware can.

                    What sort of safety can a managed language provide from the point of view of an operating system compared to the usual abstraction of processes (virtual memory and preemptive scheduling) combined with thoughtful design of how you give programs access to resources? When something goes wrong in Java, the program may either get into a state that violates preconditions assumed by the authors or an exception will terminate some superset of erroneous computation. When something goes wrong in a process in a system with virtual memory, again the program may reach a state violating preconditions assumed by the authors, or it may trigger a hardware exception, handled by the OS, which may terminate the program or inform it about the fault. Generally, it all gets contained within the process. The key difference is, with a managed language you seem to be sacrificing performance for an illusory feeling of safety.

                    There are of course other ways programs may violate safety, but that has more to do with how you give them access to resources such as special hardware components, filesystem, operating system services, etc. Nothing that can be fixed by going away from native code.

                    No-brakes programming languages like C may be a pain for the author of the program, and there is a good reason to switch away from them to something safer, in order to write more reliable software. But a language runtime can’t protect an operating system any more than the abstractions that make up a process, which are a lot more efficient. There are of course things like Spectre and Meltdown, but those are hardware bugs. Those bugs should be fixed, not papered over by another layer, lurking at the bottom.

                    Software and hardware need to be considered together, as they together form a system. Ironically, I may conclude this comment with an Alan Kay quote:

                    People who are really serious about software should make their own hardware.

                  2. 38

                    I hate how all of these threads devolve into a Plan 9 hagiography. No one ever brings up the fact that it encourages strings in the most string-hostile language and encourages wasteful de/serialization from/into strings, which is hard to make secure or fast, on top of being a poor abstraction.


                    I can speak for interesting ideas from platforms I’ve used:

                    • Consistent command vocabulary. You can guess commands simply by knowing verbs and nouns. DCL (from VMS) and CL (from IBM i) offer this as the scripting/interactive language. The grammar is also predictable. With DCL, many commands can be entered as kind of a “subshell”, so you can script them with the same subcommands as you do one-offs. Imagine something like this:
                    $ git
                    add file.c
                    commit -m "updated"
                    push origin master
                    $ 
                    

                    That’s how it is in VMS for essentially any complex command.

                    • A help system worth a damn, so people don’t have to use Stack Overflow as their first line of defense. DCL on VMS has easy to read documentation with examples and drilldown into arguments, instead of infodumping you pages of roff. IBM i has context sensitive help - press F1 over any element on screen (including arguments, keybindings, etc.) and get an explanation of it.

                    • Single-level storage. This pops up in a lot of research/non-Unix systems, but there’s two different implementations; one is the Multics/Domain like system of having files/segments worked with in memory at ephemeral locations (basically like Unix, but all files are mmaped), and single-level storage where the system has a single address space with objects in it having fixed addresses, even across reboots. IBM i is the latter, and probably the most successful example of such a system. You stop thinking of files and things in memory as separate and realize paging breaks the barrier down between them. A pointer to a file and the file itself are the same thing.

                    • To actually make this secure, IBM i actually is a capability system - maybe not quite object capabilities, but you have capabilities that only the kernel can provide you and you can’t forge them. Tagged memory is used to implement this - another example of object-capabilities with tagged memory would be BiiN.

                    • Programs aren’t native code, but are stored as machine-neutral bytecode. This allows binary backwards compatibility going back to 1980 on IBM i, but it has precedent in things like Mesa. It also allows the kernel to reoptimize programs and to enforce and improve security after compilation (the trusted translator is how the capability system is enforced).

                    • You can pass pointers, integers, and floats to programs. Because IBM i has a single address space, buffers from other programs are valid. You don’t need environment variables or temporary files as much. The lines between a program and function call are blurred.

                    • Virtualization as a first-class citizen. VM invented virtualization and blurs the line between virtual machine and process. It’s multi-user, but users get a VM running a single-user operating system. The IPC between VMs/processes (the same thing on VM) is things like connecting virtual card decks/punch readers between them. Ever booted Linux from virtual punch cards?

                    1. 1

                      Programs aren’t native code, but stored as machine-neutral bytecode.

                      This is quite tricky. C code is not architecture-neutral after it’s run through the preprocessor, let alone after the early bits of the compile pipeline. WebAssembly has recognised this by baking the pointer size into the bytecode and, as a side effect, baking the notion that pointers are integers into the abstract machine. With CHERI, we make pointers into a 128-bit type that is distinct from integers and enforced in hardware. Nothing in a bytecode that allows pointer arithmetic and bakes in a pointer size can take advantage of this.

                      1. 1

                        In the bytecode for i programs (MI), pointers are 128-bit too, and pack a lot of metadata in there. They’re not integers either. The in-kernel translator crunches them down to the machine length. (That way, it went from a 48-bit pointer world to a 64-bit pointer world.)

                      2. 1

                        I don’t know if guessing commands is something people should be doing, let alone something the operating system should support.

                        1. 6

                          Why not? If the command space is sensibly laid out like in IBM i, where every create command behaves the same way and every “work with” command behaves the same way, the individual commands stop being distinct commands and instead become specializations of a single command. Thus, even though “work with members”, “work with objects”, “work with libraries”, and “work with line descriptions” are implemented as separate commands, they are essentially a single parameterized “work with” command. The result is that instead of having to remember m*n commands like on Unix/Windows/etc., you can remember m actions and n types of thing, and automatically know how to submit your request to the system.

                          1. 2

                            That, and when you do type a command you guessed but aren’t familiar with, press F4 for a form of its arguments, and press F1 (help) or F4 (prompt) over each argument you’re unsure about.

                        2. 1

                          Realizing this isn’t your point, but I recommend gitsh by George Brocklehurst, if you want to run git as a subshell. Saves me several milliseconds of typing git each day!

                          1. 1

                            Neat! Glad to know people are independently discovering DCL from first principles every day. (I don’t mean that as backhanded sarcasm either.)

                        3. 25

                          Sane atomic file I/O, guaranteed by the OS to just work. This: https://danluu.com/deconstruct-files/ should not have to exist.

                          1. 6

                            At the very least, consider providing filesystem barriers and cache flushes as separate operations.
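
                            A sketch of what that split could look like as an interface (hypothetical, not any existing filesystem API):

                            // Sketch: ordering (barrier) and durability (flush) as
                            // distinct requests, plus atomic replace as one call.
                            use std::io;

                            trait AtomicFs {
                                /// Everything written before the barrier becomes visible and
                                /// persistent before anything written after it; no promise
                                /// about *when* either happens.
                                fn barrier(&mut self) -> io::Result<()>;

                                /// Everything written so far is durable when this returns.
                                fn flush_cache(&mut self) -> io::Result<()>;

                                /// The write-temp/fsync/rename dance as a single primitive,
                                /// so applications cannot get it subtly wrong.
                                fn replace_contents(&mut self, path: &str, data: &[u8]) -> io::Result<()>;
                            }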

                          2. 22

                            I feel there may be too much focus on the technology. I think that every other aspect of an open source operating system project deserves equal consideration. Here are some examples:

                            • Licensing - I want the strongest copyleft possible. Personally I believe AGPLv3 or greater should be used for everything, on the off chance that thin clients come back and we all end up using someone else’s computer, like Google Stadia. Linux only going so far as GPLv2 is a mistake. The trend towards permissive licensing in communities like Rust means that OSes like Redox are MIT licensed. If that’s what you like, fine, let Apple or whoever steal your work without contributing back. I’d like there to be one OS where I have confidence that any contributions I make will remain meaningfully open source, no matter what legal or technological tricks corporations come up with.
                            • Finance/Business - I want a business model that isn’t a startup under pressure to deliver world domination to its investors. I also don’t want an organization without a plan for making sure its developers can eat, otherwise open source becomes a playground for privileged people who don’t have to worry about their next meal. Reliably supporting a small group of developers indefinitely would be much preferable. Making incentives line up with the community’s goals is always tricky, and deserves thought/experimentation. Personally I think some combination of selling support, open hardware designed to work with the software, subscriptions and crowdfunding should work decently, but surely there is room for creativity.
                            • Community/Governance - Ultimately, what I think is most important is ensuring that the project cannot be sold out, depriving the community of all of the energy and resources poured into it. I’m still traumatized from LiveJournal’s collapse. How best to achieve this is debatable, but I think that over-relying on a single person makes the bus factor too small, and I think developing a healthy community where no one person is irreplaceable is better both for the contributors (who can check out without guilt if life circumstances get in the way) and the project. I don’t think the BDFL model is ideal, for all of the reasons that democracy is preferable to dictatorships in the physical world. A community that succeeds in including people often underrepresented in OS design/dev, such as UI/UX designers, could be truly revolutionary.
                            1. 6

                              A community that succeeds in including people often underrepresented in OS design/dev, such as UI/UX designers

                              Designers like to get paid for their work. There are innumerable rants online about how they’re constantly cajoled into providing work for free “for the exposure.” In my experience UI designers are well represented in OS development … but that’s because my OS experience was at Apple. (You know, that place everyone’s stolen UX work from over the past 35 years … tongue slightly in cheek.)

                              Seriously, I’m not sure why designers are less enthusiastic about contributing to OSS. Maybe because they aren’t paid as highly as programmers in their day jobs, so they have less leeway to do work in their spare time?

                              1. 5

                                Probably not really what you mean, but just in case you don’t know about it, GenodeOS is dual-licensed “AGPLv3/commercial” (contributions to mainline require a CLA). It is reportedly backed by a commercial entity, though it kinda looks as if it were connected to a university (though I may be wrong). There seems to be some kind of community, so I guess at least in theory, in case the company suddenly closes its doors, the AGPLv3 gives the rest of them some chance to try and fork away.

                                1. 1

                                  It’s nice to know that at least one OS project agrees with me on licensing somewhat! (Although I am suspicious of dual-licensing.) Thank you for mentioning it.

                                2. 5

                                  don’t think the BDFL model is ideal, for all of the reasons that democracy is preferable to dictatorships in the physical world

                                  A dictator can work great with a software project, because there is a clearly defined scope and goal with a project from the get-go. There are loads of companies in the “physical world” being driven by dictators, and it is perfectly normal and legal.

                                  A community that succeeds in including people often underrepresented in OS design/dev, such as UI/UX designers

                                  For such a big task as an OS, it is hopeless to target the mainstream. I don’t know what you imagine when you say UI/UX, but I don’t think graphical user interfaces are worth the experimentation budget. If that’s what you wanna do, you can just create another window manager.

                                  I feel there may be too much focus on the technology.

                                  OSes are inherently technical; the term refers to the lowest levels of whatever runs on your computer. There is no way to make the lowest level be high-level. It’s like saying you want fine dining but all you have is a knife and a goat.

                                  1. 3

                                    There are plenty of projects with fantastic technology that failed in other areas, meaning that their cool technology didn’t matter. Their tech didn’t become widespread and/or reach its full potential, not because the technology wasn’t good enough, but because of something else.

                                    This is like focusing exclusively on cooking a meal without worrying about shelter, water or sanitation. Obviously you can’t have dinner without food, but your dining experience could still be a failure for many other reasons, e.g. the roof caves in and buries your entree in plaster.

                                    I’m just asking people to consider the other aspects that make a software project a success, and not get tunnel vision about the technology. Keeping the big picture in mind is more important than any of my specific proposals for doing so.

                                3. 18

                                  I like to think about these things, but don’t have much hope. Here are my points:

                                  • Networking shouldn’t be an afterthought. Distributed computing should not be as difficult as it is. Transparently interacting with or using resources from other systems should be something you don’t have to think about. I don’t care about hardware. I don’t care about CPU architectures. I don’t care about GPUs. I don’t care about drivers. All computers form a transnational Turing machine.
                                  • Object capabilities should be a primitive concept. Imagine sharing a screen: That shouldn’t be the hassle it is, you should just be able to give someone read access to a segment or the whole display. The same applies to Files (but we probably shouldn’t have files), Hardware access, etc.
                                  • Hypertext should be everywhere. The web has shown how attractive the idea is, but browsers are cursed to contain it, which is getting harder and harder. Project Xanadu had good ideas about this, and HTTP is a shallow copy. We need complex links that can point to miscellaneous parts of the system, and ideally also have back-references. You probably want a lot of cryptography for something like this, to avoid the centralisation of power.
                                  • Logic and UI should be separate. Unix programs regard the standard output and input as the default UI; everything else is a side effect. Instead we should have the ability for a program (or procedure, algorithm, …) to produce complex data that doesn’t only mean something in a specific environment (PowerShell), but is universally understood. A terminal-like environment could display the results line-by-line, but it could be transformed into a graphical representation using a table, or a graph (or whatever one might come up with later).
                                  • Programming should not be a specialist’s affair. We have two classes of people, those who are at the mercy of computers, and those who can use them. This shouldn’t be the case, because the former are in a much weaker position, getting lost, getting overwhelmed, and sometimes even abused by those who know better. A proper operating system cannot be based on the lie that you don’t need to know anything to use a computer: to be a responsible user, you need to know some basics. A simple programming language (I would like something like Scheme, but that’s just me) should be integrated into the system, and the user shouldn’t fear it. It’s a direct link to the raw computational power that can be used.

                                  In some sense, I like to think of it like Plan 9 without the Unix legacy, but that seems too simplistic. The interesting thing about Unix is that despite its limitations, it creates the fantasy of something better. In between its ideal power and its practical shortcomings, one can imagine what could have been.

                                  1. 14

                                    Programming should not be a specialist’s affair. We have two classes of people, those who are at the mercy of computers, and those who can use them. This shouldn’t be the case, because the former are in a much weaker position, getting lost, getting overwhelmed, and sometimes even abused by those who know better. A proper operating system cannot be based on the lie that you don’t need to know anything to use a computer: to be a responsible user, you need to know some basics. A simple programming language (I would like something like Scheme, but that’s just me) should be integrated into the system, and the user shouldn’t fear it. It’s a direct link to the raw computational power that can be used.

                                    I think the ultimate problem is that most people don’t want to program. They want to accomplish a task, and for the most part, someone else has programmed the tool to accomplish the task. They don’t want to build the tool. Us freaks who want to write tools are few and far between. It’s the same reason cars have mechanics.

                                    1. 5

                                      I don’t think that programming has to be the same as “building the tool”, but more along the lines of what @spc476 mentions with Excel. Especially when you take “Logic and UI should be separate”, one can imagine that programming doesn’t even have to mean “writing text in a text editor”, but could be a GUI affair, where you work on connecting tools in a graphical representation, trivially connecting components of your system, without depending on another tool.

                                      Yes, not everyone wants to be a car mechanic, nor do I, but to drive a car you need to get a driver’s license, and that is the reason we can assume people can take some basic responsibility. We don’t have that for computers, and that’s why the responsibility has to be delegated to Microsoft or Apple. If we want to think of a computer as a tool, not a toy, I argue that a basic understanding of computational thinking should be assumable, and would help everyone.

                                      1. 3

                                        I sincerely believe enso (née Luna) has a serious fighting chance to fill this gap. Though they’re taking their time :)

                                    2. 10
                                      • Networking: QNX was network transparent. It was wild running a command on computer 1, referencing a file from computer 2, piping the output to a program on computer 3 which sent the output to a device on computer 4. All from the command line. The IPC was fast [1] and network transparent, and used for just about everything.
                                      • Hypertext: The only operating system I know of that uses extensive form of hypertext is TempleOS (I don’t think it’s HTML but it is a form of hypertext) that extends pervasively throughout the system.
                                      • Logic and UI: There are bits and pieces of this in existence. AmigaOS has Rexx, which allows one to script GUI programs. Apple has (had?) something similar. Given that most GUI based programs are based around an event loop, it should be possible to pump events to get programs to do stuff.
                                      • Programming: True, but there is Excel, which is a programming language that doesn’t feel like one. Given an easy way to automate a GUI (similar to expect on the command line), and teaching people that computers excel (heh) at repeated actions could go a long way in giving non-programmers power.

                                      [1] In the early-to-mid 90s, I had friends that worked at a local software company that wrote and sold custom X Window servers. Their fastest X Window server ran on QNX.

                                      1. 3

                                        Programming: True, but there is Excel, which is a programming language that doesn’t feel like one. Given an easy way to automate a GUI (similar to expect on the command line), and teaching people that computers excel (heh) at repeated actions could go a long way in giving non-programmers power.

                                          One program idea I’ve had was a spreadsheet that users could “compile” into a simple GUI. Analysts already use spreadsheets as an ad hoc RAD tool. Why not give them an actual custom GUI for their efforts?

                                        1. 3

                                            There was something like that in KDE; I think it was called Krusader or something similar.

                                      2. 7

                                        Transparently interacting or using resources from other systems should be something you don’t have to think about.

                                        Then everyone will run headlong into the fallacies of distributed computing, unfortunately. This is why things like CORBA and Distributed Objects failed. Networking is not transparent, much as we would like it to be.

                                        At least not in a normal imperative programming paradigm, like RPC. You can get a lot of transparency at a higher level through things like async replication, e.g. Dropbox or [plug] Couchbase Mobile. But even then you have to be aware & tolerant of things like partitions and conflicts.

                                        1. 4

                                          Programming should not be a specialist’s affair. We have two classes of people, those who are at the mercy of computers, and those who can use them.

                                          Indeed, the power dynamics are way out of control in software today. On the orange site, @akkartik describes this well via an analogy to food production: nearly all software today is restaurant-style, while almost none of it is home-cooked.

                                          For anyone interested in this topic, I would suggest looking into the Malleable Systems Collective (the Matrix room is most active) and Future of Coding communities, as it comes up in those places regularly.

                                          1. 4

                                            Your first point is pretty much what Barrelfish is designed for, go check it out!

                                            1. 3

                                              Programming should not be a specialist’s affair.

                                              Jonathan Edwards has been working on this problem for a long time. It goes well beyond the OS.

                                              1. 2

                                                The same applies to Files (but we probably shouldn’t have files)

                                                Could you elaborate on this? Why no files?

                                                Logic and UI should be separate. Unix programs regard the standard output and input as the default UI; everything else is a side effect. Instead we should have the ability for a program (or procedure, algorithm, …) to produce complex data that doesn’t only mean something in a specific environment (PowerShell), but is universally understood. A terminal-like environment could display the results line-by-line, but it could be transformed into a graphical representation using a table, or a graph (or whatever one might come up with later).

                                                There was an interesting newsletter post about emacs being interface independent. I’m not too familiar with emacs, but it struck me as an intriguing and beautiful idea.

                                                1. 5

                                                  Could you elaborate on this? Why no files?

                                                  Maybe it’s clearer if I say file system. It might be too much to throw out the concept of a digital document, but I have come to think that file systems, as we know them on POSIX systems, are too low level. Pure text without hyperlinks would be a weird thing in an operating system where everything is interconnected, and directories shouldn’t have to be a simple tree (because tools like find(1) couldn’t do proper DFS in the 70’s), but instead could be any graph structure of sets, or even computed.

                                              2. 14

                                                 seL4 is the peak of operating system design, so let’s assume that’s the basis. If your “OS” can be implemented on top of L4, then it should be. The fundamental security layer for a system should not need to be updated every few months.

                                                 Complex systems, which will inevitably exist, will need to be changed constantly to account for constantly-changing user requirements. “Able to run most popular applications that have existed throughout history” is a good sign that your system will be able to host anything that exists in the future (not perfect, but predicting the future never is). This should be possible without horrible hacks. This does mean that I’m kind of rejecting your premise of being tabula rasa, because otherwise, how would I know if it was any good?


                                                 The filesystem is probably the part of POSIX-y and Windows-y systems that I dislike the most, because there’s an impedance mismatch when sqlite tries to build on top of it. Abstraction inversions like dividing a file into blocks are a good sign that the OS-provided abstraction is too high level. My ideal operating system essentially caters to the needs of databases, and treats the directory tree as a cooperatively-maintained database just like sqlite itself is.

                                                In my ideal operating system, the OS itself would only really prescribe an interface where a process is given access to an undifferentiated region of bytes. The interface between the logical block storage service and the application is a movable, mmapped window into a larger storage area. To sandbox a subprocess, you take your logical access capability token, and use a service call to turn it into a capability token that only gives access to a subsection of what you currently have access to, like how memory is managed in basic L4. It would also expose primitives to take a volatile, CoW snapshot of an arbitrary region of the process’s accessible block storage area, so that an application that just wants to read the data can do it without having to copy it all into memory by hand (with danger of torn reads), and primitives to make small, atomic, reads and writes to the storage area.

                                                A process can also take a non-atomic read-write mmap of a block storage region. Two processes that do this can use CPU-level atomic memory ops to engage in shared memory concurrency without the block storage service being in the hot path at all, and it would only be involved if the system is low on RAM and it needs to flush it to disk (just like Linux’s block cache does). The only way to guarantee that any of this gets written to permanent storage, however, is to make a sync call to the logical block storage service.

                                                This sort of P2P protocol is how directories would be implemented, as they would simply be a library that runs in your own address space. The directory would be structured as a B-Tree, so updating the directory would be done by mmapping in some free space, writing a copied, but changed, version of the applicable node, syncing it, and then performing a “write if equals” service call to replace the old node. Or, more likely, it would go through a journal first, to allow a fast path where updating a directory is just a write and a sync without having to copy B different nodes (it should have amortized O(1) complexity).
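
                                                 Pulling the service calls above into one sketch: every name here is invented, and the point is the shape of the interface, not a concrete ABI.

                                                 // Sketch of the hypothetical block storage service calls described above.
                                                 type BlockAddr = u64;

                                                 struct StorageCap;   // unforgeable token covering some byte range
                                                 struct MappedWindow; // movable mmapped view into that range
                                                 struct Snapshot;     // volatile CoW snapshot, safe from torn reads

                                                 trait BlockStorage {
                                                     /// Derive a weaker capability covering only a sub-range of this one.
                                                     fn restrict(&self, cap: &StorageCap, offset: u64, len: u64) -> StorageCap;

                                                     /// Map a window of the capability's range into the address space.
                                                     fn map(&self, cap: &StorageCap, offset: u64, len: u64) -> MappedWindow;

                                                     /// CoW snapshot, so readers never see half-written directory nodes.
                                                     fn snapshot(&self, cap: &StorageCap) -> Snapshot;

                                                     /// Small atomic update used by the directory library: replace `old`
                                                     /// with `new` at `addr` only if `old` is still there.
                                                     fn write_if_equals(&self, cap: &StorageCap, addr: BlockAddr,
                                                                        old: &[u8], new: &[u8]) -> bool;

                                                     /// Nothing is guaranteed durable until this returns.
                                                     fn sync(&self, cap: &StorageCap);
                                                 }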

                                                The point of this radical design is to eliminate the current-day arbitrary tradeoff of “lots of small files” vs “one huge file” that applications like Git have to deal with. Both would have similar performance (acknowledging that they also have the same weaknesses: if your application doesn’t use the battle-hardened directory library, you can wind up corrupting the directory tree).

                                                This design also precludes some of the fancier permission systems that POSIX-y operating systems use; if you want more fine-grained control, use services and message passing layered on top instead, like an RDBMS. A POSIX subsystem might be implemented as a service on top of this. Similarly, the content-hosting processes of a web browser would only interact with the filesystem through a broker process that actually uses the directory library to get at files; due to the insufficiently-sophisticated permissions systems in Windows and in POSIX, this is how they wind up working anyhow.


                                                The general rule, that follows from the above “filesystem” (really “logical block storage”) design, is that there should be no strings in the operating system ABI. Ever. If you’re designating special characters, then you’re doing it wrong, and if your operating system is prescribing a text encoding, then it’s too high level to be properly future-proof.

                                                1. 2

                                                  seL4 is the peak of operating system design

                                                  That is the exact opposite of what I’ve heard from everyone who has ever tried to write software that runs on top of seL4.

                                                  1. 1

                                                    That’s because “ease of writing software on top of it” is not the goal that I think operating system kernels should optimize for. They should be optimized for security, performance, and accommodating language runtimes like the BEAM, the JVM, web browsers, Smalltalk, Ruffle, that regular application developers actually want to use anyway.

                                                    The operating system that I want isn’t a target for application developers any more. It’s a target for runtime developers. And since it’s probably a bit much to expect one runtime to serve all applications perfectly, an ideal operating system would be a bag of drivers, plus the bare minimum necessary multiplexing infrastructure to allow hosting more than one runtime at once.

                                                    1. 1

                                                      Even things like allocating memory are painful on seL4. A lot of the security guarantees that seL4 makes are easy to verify because it punts the difficult bits to things on the other side of the system call boundary. I honestly don’t care if my kernel is secure if the trade-off is that it’s impossible to write secure userspace software on top.

                                                      How many of those language runtimes that you want have been ported to seL4? Last time I checked, none of them, because it’s an incredibly hard target for even a POSIX C library, let alone anything more complex.

                                                2. 10

                                                  The ability to create arbitrarily-deeply-nested process namespaces with easily-configurable access to compute resources. Something like FreeBSD jails or docker containers, but embedded into the central design of the process model so deeply that doing the equivalent of spinning up a sandbox with an isolated file system and network address is just how you ordinarily spawn any program. Ideally spinning up a new nested copy of the original OS itself should be just as easy.
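
                                                   A sketch of what such a spawn call could look like, with hypothetical names: isolation is the default, and sharing is what you opt into.

                                                   // Sketch: spawning a program and spawning a sandbox are the same operation.
                                                   struct FsView;  // a private union/overlay of whatever the parent exposes
                                                   struct NetView; // an isolated address, or a slice of the parent's network
                                                   struct Limits { cpu_millis_per_sec: u32, memory_bytes: u64 }

                                                   struct SpawnSpec {
                                                       program: &'static str,
                                                       fs: FsView,
                                                       net: NetView,
                                                       limits: Limits,
                                                       /// The child sees itself as a full OS instance and can nest further.
                                                       allow_nested_spawn: bool,
                                                   }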

                                                  1. 11

                                                    This is how Plan 9 works, more or less.

                                                    1. 2

                                                      This reminds me of one of @crazyloglad’s principles:

                                                      An application does not get to disturb you by ‘asking for permission’ just to steal data from a sensor – it always gets some kind of data. You decide, dynamically, which sensor is actually sampled and what that entails. Access to the ‘camera’ does not automatically imply sampling the actual device; it means being routed a video stream. The user decides what that stream covers and when.

                                                      1. 1

                                                        I would argue null, or no data should be a perfectly valid “always gets some kind of data” return. Otherwise I like it.

                                                        1. 1

                                                          One problem with “null” is that you can easily filter it out: the client ‘knows’ that it is being played, and it is cheap to detect. A simple case I worked with years back: certain kinds of adtech used accelerometer data to determine travel paths through a city (in Android, at the time, you could just read the sensor, no permissions asked). Imagine you are angry with this behavior, and instead of writing a blog post you want to inject doubt into what they are doing, and, if that happens at scale, erode trust in their model. Do you inject ‘null’, or do you provide other left/right patterns that overlay somewhere entirely irrelevant to you?

                                                          1. 1

                                                            Fair enough. Perhaps null/no data could be random data in the requested input format, something like quickcheck or Python’s Hypothesis provides. Then we get free testing to boot! :)

                                                            The problem with your approach is, it then devolves into a fight between adtech and privacy people each trying to further their agenda. At least with null/no data, the intention of the user is very clear: no accelerometer data for you.

                                                            1. 1

                                                              Noise isn’t much better (entropy @0.5); there are stronger models to plug in for each possible sensor, and that can be done much more cheaply than the adtech analysis side (there are fundamental problems with this approach to deception). ‘Fighting back’ is more of an HCI problem (hence why I try to address it): how to opt in to and control it. It will always be a cat and mouse game, no different to DRM and similar battles - but right now, it’s a free-for-all; it’s all pure, uncut data.

                                                              1. 1

                                                                agreed, though hypothesis and quickcheck are not pure random data (as they have to fit within some constraints). I agree that adtech models could probably figure it out pretty easily however.

                                                                I also very much agree that the free-for-all that is letting adtech companies just siphon off all this data is definitely not doing humanity any favours, and I’m glad you are trying to address it. You have obviously thought much harder about it than I have.

                                                                I just worry about where this cat and mouse game would be in a decade or two of active fighting… it won’t be anywhere even close to fun. I’d rather the incentives were changed (through technology, laws, etc.) to align more with the betterment of humanity.

                                                                I have no idea what the possible incentive changes might be, I’m not remotely involved in the adtech space, except being stuck having to use a product with it included sometimes. I’m pretty sure GDPR like things probably isn’t the right answer though.

                                                                Anyways, thanks for thinking about and working on the problem!

                                                                1. 2

                                                                  If you want to dig further into it, https://www.youtube.com/watch?v=KNOlqzMd2Zw is a good talk. I don’t think his data-union approach is a real solution, but well, before any aggressive response to the current state of affairs should be mounted, there needs to be some kind of palatable alternative prepared or the vacuum might well be filled with something worse.

                                                    2. 19

                                                      Plan 9 is the peak of operating system design, so let’s assume that’s the basis. Anyone who is thinking about OS design and who hasn’t used Plan 9 has insufficient credentials for the job. Let’s not fork the code, though, the LPL is bad. Also, a micro- or hybrid-kernel is the right call in $CURRENTYEAR.

                                                       Per-process namespaces, union filesystems, and the kernel interface should come along for the ride. 9P is good for network transparency, and that would be wise to keep, but it also bears acknowledging that shared computing has lost and making some design changes to improve single-system performance. These features make Plan 9 do containers 10x better than any of the others (be it Linux, or even Solaris or BSD), at 1/10th the complexity, and with much more utility.

                                                      Factotum is good, but let’s expand the concept. Require FDE, prompt for a username and password on early boot, use it to decrypt the disks, and also use it to handle login and opening up a keyring. It would also be nice to expand Factotum with separate agents which grok various protocols, perform authentication for them, and hand the connection off to a client - so that they never have to handle the user’s secrets at all. ndb is also my preferred way of configuring networks - and the file format is a vast well of untapped potential for other applications - but it should be modernized to deal with ad-hoc networks more easily. Planned networks should come back into style, though.

                                                       The rc shell is excellent, but can be streamlined a bit. The C API is not great - something a little bit closer to POSIX (with the opportunity to throw a bunch of shit out, refactor like mad, fill in some gaps, etc) would be better. The acid debugger has amazing ideas which have had vanishingly little penetration into the rest of the debugger market, and that ought to be corrected. gdb, strace, and dtrace could all be the same tool, and its core could be just hundreds of lines of code. The marriage of dtrace and acid alone would probably make an OS the most compelling choice in the market.

                                                      ZFS is really good and something like it should be involved, and perhaps generalized and expanded upon.

                                                       Introduce a package manager - my favorite is apk, but if you can come up with a much simpler approach to what nix and guix are trying to do, then it might be a good idea.

                                                       The graphical-first, mouse-driven paradigm has been shown to be a poor design in my experience. Ditch it. Make the command line and keyboard operation first class again, and then let’s try something other than rio/acme/etc for the UI. Neat experiments, but not good in practice.

                                                      1. 3

                                                        @ac tries to “come up with a much simpler approach to what nix and guix are trying to do” with janet-based hermes & hpkgs.

                                                        1. 3

                                                          If your new OS needs a package manager, your new OS is broken by design. Package managers are made to handle accidental complexity which the original design failed to handle.

                                                          1. 13

                                                            What? You’ll have to expand on that a bit.

                                                            1. 2

                                                              I think that the comment above is coming from the macOS point of view: “just unpack a new app into /Applications/”. It might make sense to package end-user applications in another way than system-wide libraries and utilities.

                                                              Dependencies and reusing system libraries are definitely not accidental complexity in my view.

                                                              1. 1

                                                                macOS is able to do this only because it bundles gigabytes of libraries with the OS. It gets away without a package manager by, effectively, putting most of the dependencies for all applications in a single package that’s updated centrally. Even then, it doesn’t work so well for suites of applications that want to share libraries or services, which end up providing their own complex updater programs that manage library dependencies for that specific program.

                                                              2. 1

                                                                On the contrary, I think you need to expand on why a package manager is needed, then identify what problems it solves, and then eliminate those problems. Running and installing programs is an integral task of an operating system.

                                                              3. 4

                                                                Package managers allow users to install curated packages, while also (in an ideal world) never messing up the system. These are nice properties.

                                                              4. 2

                                                                How about FIDO keys instead of U/Ps?

                                                                1. 2

                                                                  I like that you can store a U/P in your brain (for the master password, at least), and I don’t like that FIDO can be stolen. But a mix would be cool.

                                                                2. 1

                                                                  I’m not sure about your idea of dropping graphics and the mouse. Visual selection plus keyboard shortcuts seem like an annoying way to run/plumb. Or maybe you just want Vim?

                                                                  1. 3

                                                                    I don’t want to drop graphics and the mouse – I want to drop Plan 9’s single minded obsession with graphics and the mouse being the only way to use the system.

                                                                3. 8

                                                                   More thoughts, separate from my other comment, on existing systems people don’t think much of: interfaces could be better.

                                                                   • The classic Macintosh is, as far as I’m aware, the only mainstream system that had absolutely no command line. There was no excuse to have a bad GUI. Maybe Star/ViewPoint counts. While I wouldn’t go as far, it proves you can have a fully functional and useful system without such a concept.
                                                                  • Kill the VT100. Command lines shouldn’t be limited by curses, but display rich objects too. This complements what people are talking about with structured IPC. Emacs shows ideas of how this can work, albeit in a crude manner.
                                                                  • If your system has consistent commands (or some kind of reflection capabilities on them), I should have IDE-like prompting of arguments and values. Tab complete is insufficient. IBM i has F4 to prompt for values anywhere, imagine how consistent that could be if it were like Visual Studio.
                                                                  • Some kind of way for applications to communicate better. Scripting-wise, AppleScript is probably the most popular example that works globally, though VBA is more popular for application-local. AppleScript had a bad language, but the model was incredibly powerful. If end-users want to program their computer, this is how they do it.
                                                                  • Enrich drag-and-drop. RISC OS puts such emphasis on it that it’s basically interactive pipelines.
• Rich embeddability in documents. Remove distinctions between different types of documents or even folders. See BTRON, OLE on Windows, ViewPoint.
                                                                  1. 8

                                                                    I would love to see an OS designed for making testing and debugging user-space as effortless as possible.

First, it’d be incredible to have something like rr embedded within the OS. Processes could be run fully deterministically, with OS nondeterminism like scheduling decisions being scriptable or pseudorandomized by the test driver. We’d have no more flaky tests, and the OS would actively cooperate with programmers in finding concurrency bugs where the program relies on a particular schedule.

                                                                    Then, with user-space fully determinized, we can rely more on property-testing (a la QuickCheck) for easily improving program correctness. Current fuzzers in this space do sophisticated binary analysis and use different hardware counters for exploring different program paths, and the OS is in the unique position of being able to abstract over these. I’m imagining something like SAGE being offered as an operating system service. Then, your program is deterministic by default, and you can ask the operating system for test data that it intelligently synthesizes with binary path exploration techniques!
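
To make the workflow concrete, here is a sketch of what a test might look like against such a service. Every det_* name below is invented; no existing OS exposes this API. It only illustrates the “deterministic by default, OS-synthesized inputs” idea described above, and it won’t link anywhere as-is.

```c
/* Hypothetical sketch only: every det_* name is invented to illustrate the
 * kind of OS testing service described above; no existing kernel has this. */
#include <stddef.h>
#include <stdint.h>

int  det_enable(void);                               /* run this process fully deterministically */
int  det_schedule_seed(uint64_t seed);               /* pseudorandomize scheduling from a seed    */
long det_synthesize_input(void *buf, size_t len);    /* OS-driven path-exploration test data      */

int process_record(const uint8_t *data, size_t len); /* the application code under test           */

int main(void) {
    det_enable();              /* scheduling, time and I/O ordering become replayable */
    det_schedule_seed(42);     /* a failing seed can be replayed bit-for-bit          */

    uint8_t buf[4096];
    for (int i = 0; i < 1000; i++) {
        long n = det_synthesize_input(buf, sizeof buf); /* OS picks inputs that reach new paths */
        if (n < 0)
            break;
        process_record(buf, (size_t)n);  /* property under test: must never crash or corrupt */
    }
    return 0;
}
```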

                                                                    Finally, with all of these features, the OS could also provide an amazing debugger that allows developers to time travel through program executions and visualize their results. The Pernosco folks have done some really cool work exploring what debugging can look like with deterministic replay.

                                                                    1. 8

I think about this a lot. I agree with some others here that Plan 9 is the peak of operating system design, but here are some things I’d like to see (in random/brainstorm fashion):

• Expand on the idea behind Erlang and processes having a common messaging behavior. Almost everything becomes “gen_server”-like (see the sketch at the end of this comment). I know Plan 9 has this idea of everything as a file server, but abstract that a little bit and create the same interface across the board.
• Isolate hardware and software, meaning put everything behind a fence that users can be granted access to. Jails and Solaris Zones are inspirations here, but go further. I know people hate it, but the fact that macOS requires a user to grant access to areas of their system is what I’m after; go further and do it with hardware/kernel-level structures as well. And I don’t want to be prompted, but it is something that I can configure (if that makes sense). To the point where, as a power user, I could drop this “fence” config onto my cluster and completely lock down everything except what I explicitly make available for processes to run. Almost like OpenBSD’s pf but for the security of my userspace.
• ZFS, but with more workflows like git/fossil/mercurial. Make it easy to branch a path and “archive” it. Focus on the delta between states; allow me to snapshot/branch “a point in time” and just diverge into this reality without impacting my main files.
• Functional programming methodology in tooling. What I mean by this pretty terrible phrase is something like command line pipes, but as a language across the system. Complex data flows (streams, bytes, strings, objects, etc.) enter tools and I can process and filter them and produce something else. Almost like IFTTT and pf at a system level. This builds on the “everything is a gen_server” idea above.
• Elevate the terminal and “user facing” commands more. Think Alfred or Quicksilver, but it’s the terminal, and friendly enough that my mother-in-law could use it. “open” is a tool that acts on data; that data might be an application, an image, a URL, etc. Again, think functional. :)
• Finally, make it so that things like dtrace/ebpf are just standard behavior, not complex hooks into a system but part of the standard method for writing processes, so that they naturally produce the info these types of tools can dig into.
• Oh, one more thing: packages and the package manager should isolate all installable items (think Nix). And don’t choose some special “bucket”; use standards like tar/gzip. I don’t want to need special tools to peer into some package.

                                                                      That is it for now. Just ideas, things I like today but that are not ubiquitous across a system.
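
A minimal sketch of what the “everything is a gen_server” idea above could look like. The msg_t/reply_t structs and register_server() are invented names, not a real API; the point is only the uniform ask/tell interface that every service would implement instead of an ad-hoc byte protocol.

```c
/* Hypothetical sketch: a uniform gen_server-style interface every service
 * would implement. msg_t, reply_t and register_server() are invented. */
#include <stddef.h>
#include <stdint.h>

typedef struct {
    uint32_t    tag;      /* what is being asked (an "atom")    */
    const void *payload;  /* structured payload, not raw bytes  */
    size_t      len;
} msg_t;

typedef struct {
    uint32_t tag;
    void    *payload;
    size_t   len;
} reply_t;

/* Every process/service exports exactly these two entry points. */
typedef struct {
    int (*handle_call)(const msg_t *req, reply_t *rep); /* synchronous ask */
    int (*handle_cast)(const msg_t *req);               /* fire-and-forget */
} server_ops_t;

int register_server(const char *name, const server_ops_t *ops); /* invented */

/* A tiny key/value service written against the uniform interface. */
enum { KV_GET = 1, KV_PUT = 2 };

static int kv_call(const msg_t *req, reply_t *rep) {
    (void)rep;
    switch (req->tag) {
    case KV_GET: /* ... look the key up and fill rep ... */ return 0;
    case KV_PUT: /* ... store the value ...              */ return 0;
    default:     return -1;   /* unknown request */
    }
}

static int kv_cast(const msg_t *req) { (void)req; return 0; }

int main(void) {
    static const server_ops_t kv = { kv_call, kv_cast };
    return register_server("kv", &kv);
}
```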

                                                                      1. 4

                                                                        If there’s a thing to take from Erlang, I think it’d be to use structured data (tuples, atoms, ints, strings/bitstrings) in messages, not just a stream of bytes like unix does. I think an OS where programs communicate via Erlang’s data structures would be much better and richer than anything built on unstructured bytes.

                                                                      2. 7

This thread is so full of interesting ideas, I’m impressed. I noticed many people are very focused on low-level aspects of the OS, which is cool. In my own personal dreams I don’t think too much about those details; I tend to focus on high-level stuff, and that is what I want to share with you all.

                                                                        I miss Newton OS. Yes, that OS. I miss the ideas in it. Whatever mobile computing became, it is a completely different path than the one the Newton series of devices were traveling. I miss that path a lot even though I recognize the limitations of those systems (all of which could be solved with today’s hardware and software advances).

• No applications: this is not exactly true, you had applications; it is just that you could also extend applications, and developers were keen to extend more than to ship siloed apps. For example, a PIM package was more likely to extend the features provided by the default address book and email applications than to ship completely new apps. A good analogy is that installing a “package” on your Newton was akin to teaching it new tricks. With a curated selection of packages and configuration, you could really tailor your experience to match your needs.
                                                                        • Newton soups: no need for a filesystem when you have a system-wide graph database.
• Simple programming language: Packages could be built with NewtonScript, which for me feels like a JS cousin from the Pascal branch of the family tree. The important aspect is that it was a simple language to learn, and you could provide a rich Cocoa-like set of libraries, resources, and tools for this language so that development becomes very easy. Not unlike Swift and SwiftUI. Make a high-level language and give it all the OS features.
• User focused: Most of our “UX mental model” today is app focused or task focused. You launch an app for a specific task. I want to go back to being user-focused, and by that I mean that the device provides features and the user makes whatever use of them fits their needs. An example:
                                                                          • There is a note taking feature on the Newton. It is a master/detail UX (like much of the Newton itself) with a list of entries and the ability to go into a specific entry for editing.
• Installing a new project management package doesn’t create “an icon to launch a project manager” (which is how it would work on an app-focused OS like iOS); it extends the note taking feature with new “stationery”, a new form-type view with the fields necessary for managing projects.
                                                                          • You keep using the same note taking app, you just use it to add and manage your projects. New forms are also added to the calendar and other features.
• The OS is extended and the user is free to make whatever use of it they want. It is about you and how you want to work, not about apps.

The Newton for me is the peak of personal computing, with emphasis on the personal part. Over time each device became more and more tailored to its unique user and the way they liked working. If we add on-device development, then each user can also create their own packages, which can lead to it becoming even more bespoke.

                                                                        A small OS for a personal experience, that is what I dream of. I bought a Raspberry Pi 400 with the hope that something like what I described can be built for it, not unlike an eMate 300 kind of experience. I don’t have the knowledge to do it, I’d probably just make the whole “OS experience” a collection of apps on top of Linux since low-level is not my stuff (the way webOS did). I’m very inspired by SerenityOS and how Andreas Kling is documenting his journey developing it. Maybe I can create a little proof-of-concept of what I’m talking about. Dreaming is very good, I do it a lot, more than I should to be honest, but at some point either I act on it or nothing changes.

                                                                        1. 7

                                                                          Here are some of the features I would like to have in a new OS.
                                                                          I have listed OSes that implemented a feature in bold.

                                                                          • File locking and record locking are well-behaved.
                                                                          • It’s not the default but real-time scheduling is available. IRIX
                                                                          • Networking is zero-copy and uses Van Jacobson’s network channels.
                                                                          • Instrumentation such as DTrace or ETW is pervasive and it’s easy to add it to applications. Solaris
                                                                          • Commands use regular lexemes, e.g. every command to create an object starts with crt. i/OS
                                                                          • Command line options use consistent arguments, e.g. -v is always verbose.
• The shell can complete command line options; applications have metadata that indicates what options are permitted (see the sketch after this list).
                                                                          • Bounds checking is cheap and everything uses it. SOS
                                                                          • Overflow checking is cheap and everything uses it.
                                                                            There’s a pragma to disable it for hash functions and such.
                                                                          • File associations use MIME types. BeOS
                                                                          • Checkpointing and session recovery is provided as a library. NOS
                                                                            It’s easy to add support for it to applications.
                                                                            You can stop an application on one computer and resume it on a different one.
                                                                            This is an evolved version of dropfiles.
                                                                          • APIs come with static analyzers to identify issues.
                                                                          • APIs come with testcases, e.g. you can run your application in a mode where memory allocations randomly fail.
                                                                          • Filenames are case-insensitive, case-preserving, and limited to printable characters.
                                                                            When was the last time you had filename.txt and Filename.txt in the same directory?
                                                                          • Applications can be run in a debugging mode where system calls log the address of the caller, arguments, and the timestamp in a circular buffer.
                                                                          • APIs are versioned and use semantic versioning.
                                                                            Old versions are removed on a regular schedule to reduce technical debt.
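
As a sketch of the option-metadata bullet above: each application could ship a declarative option table that the shell (or a GUI generator) reads for completion and validation. The opt_spec_t type and the OS_EXPORT_OPTIONS macro are invented for illustration; only the general shape matters.

```c
/* Hypothetical sketch: an option table an application exports so the shell
 * can complete and validate arguments. opt_spec_t and OS_EXPORT_OPTIONS are
 * invented names, not a real API. */
#include <stddef.h>

typedef enum { ARG_NONE, ARG_STRING, ARG_INT, ARG_PATH, ARG_ENUM } arg_kind_t;

typedef struct {
    const char *long_name;   /* "--verbose"                       */
    char        short_name;  /* 'v' (consistent across tools)     */
    arg_kind_t  kind;        /* what the shell may complete       */
    const char *values;      /* for ARG_ENUM: "auto|never|always" */
    const char *help;        /* one-line description              */
} opt_spec_t;

static const opt_spec_t options[] = {
    { "--verbose", 'v', ARG_NONE, NULL,                "verbose output" },
    { "--output",  'o', ARG_PATH, NULL,                "write result to file" },
    { "--color",   'c', ARG_ENUM, "auto|never|always", "colorize output" },
    { NULL, 0, ARG_NONE, NULL, NULL }
};

/* Invented: publish the table somewhere the shell can read it without
 * actually running the program (e.g. a well-known ELF section). */
#define OS_EXPORT_OPTIONS(tbl) \
    const opt_spec_t *__exported_options __attribute__((used)) = (tbl)

OS_EXPORT_OPTIONS(options);
```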
                                                                          1. 6

                                                                            As someone who has spent the past few years building a new OS for $DAYJOB I have no shortage of thoughts and experiences.

One particular thing that I’ve been struck by is how much existing software there is that people want to run. And while you can emulate basic POSIX IO fairly easily, in practice complex (i.e. interesting, useful) software depends on both specific, hard-to-emulate semantics and performance patterns.

An example of the first is atomic file renames. One of the few reliably atomic operations in POSIX is a file rename, but this can be hard to implement if you’re doing IO in interesting ways. If you offer a POSIX-like IO API but don’t guarantee atomic renames, then you’ll end up with subtle, hard-to-reproduce data loss when using a lot of software that’s developed for POSIX systems.
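
For readers who haven’t hit this: the pattern such software relies on is “write a new file, then rename it over the old one”, roughly like this (standard POSIX calls, error handling trimmed). Lose rename atomicity and this silently breaks.

```c
/* Durable file replacement via write-new-then-rename. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int replace_file(const char *path, const char *data, size_t len) {
    char tmp[4096];
    snprintf(tmp, sizeof tmp, "%s.tmp", path);

    int fd = open(tmp, O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0) return -1;

    if (write(fd, data, len) != (ssize_t)len) { close(fd); unlink(tmp); return -1; }
    if (fsync(fd) < 0)                        { close(fd); unlink(tmp); return -1; }
    close(fd);

    /* The whole point: readers see either the old file or the new one,
     * never a half-written file. That only holds if rename() is atomic.
     * (Fully durable code would also fsync the containing directory.) */
    if (rename(tmp, path) < 0) { unlink(tmp); return -1; }
    return 0;
}
```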

Second, on Linux in particular, and POSIX systems generally, it’s extremely cheap to repeatedly stat the same file. Directory entries are cached in the kernel and the syscalls to look them up are relatively cheap. If you’ve got a microkernel where every stat call is an IPC to a filesystem server, a stat call is more expensive. Software like git assumes that stat has a very low cost. Now maybe git could be implemented in a way that doesn’t assume that stat will be cheap, but it’s been developed, profiled and optimized on Linux.

                                                                            1. 6
                                                                              • L4 style microkernel
                                                                              • Zero-copy all the IPC as much as possible — pipes as ring buffers in shared memory, etc.
                                                                              • ABI not based on C! We can’t use #[repr(Rust)] because that’s not stable, but we can have some documented stable representation that can do ADTs and stuff
• Capability-based everything, no global namespaces (see the openat-style sketch after this list)
                                                                                • programming languages really need to step up their game with regards to this, cap-std for Rust is the first serious work in this space \o/
                                                                              • Pervasive hardware resource virtualization (SR-IOV) and direct HW access / “kernel bypass” (netmap) instead of shared graphics/networking stacks, bye bye “tcp sockets” and all that
                                                                                • basically e.g. I click to start a web browser, my desktop session uses its “gpu-parent” and “net-parent” capabilities to spawn a new vGPU and vNIC (the kernel would do SR-IOV to handle that), the kernel returns capability descriptors for them, desktop spawns the browser with these capabilities (plus something like wayland socket, audio device, downloads directory, etc.), the browser uses netmap style calls to map the GPU and NIC buffers into its own address space and uses libraries to work with them as easily as in our current existing systems (imagine Mesa would just have a backend that touches the GPU that’s mapped into the current process directly instead of going through libdrm which goes through syscalls; and for networking just run a stack like smoltcp)
                                                                                • we could do this for storage too? with NVMe namespaces? but actually the shared hierarchical filesystem that we have currently is soooo convenient :/ oh and it fits capabilities well (openat etc.)
                                                                                • for audio we need a server daemon to make swapping a playing stream to a new device possible, but how about we at least kernel-bypass that, so that the future “pulse/pipewire” would have soundcards’ buffers/registers directly mapped into its address space
                                                                                  • though an SR-IOV soundcard that would mix the output of all the VFs is a fun concept to think about
                                                                              • First-class user session and seat management right in the kernel. The fact that virtual terminals are still involved in gui sessions is the most awful part of unix
                                                                                • (who am I kidding, nowhere near as awful as signals, signals are literally the worst)
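
To ground the “no global namespaces” bullet: POSIX already has a small capability-shaped corner in openat(), where access is expressed relative to a directory descriptor the process was handed rather than an absolute path; this is the mechanism cap-std builds on. A grounded sketch:

```c
/* A process holding only a directory file descriptor can reach files under
 * that directory without ever naming a global, absolute path. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
    /* In a real capability system the parent would hand us this fd
     * (e.g. over exec or fd passing); here we open it ourselves. */
    int dirfd = open("./downloads", O_DIRECTORY | O_RDONLY);
    if (dirfd < 0) { perror("open dir"); return 1; }

    /* All access is expressed relative to the capability we hold. */
    int fd = openat(dirfd, "report.txt", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0) { perror("openat"); close(dirfd); return 1; }

    dprintf(fd, "hello from a directory-scoped process\n");
    close(fd);
    close(dirfd);
    return 0;
}
```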
                                                                              1. 6

                                                                                Better abstractions for storage. I was going to say “files” — because it’s terrible how many hoops you have to jump through just to update a file reliably — but really, the filesystem itself is an idea whose time has gone.

                                                                                I’m fascinated by things like the Apple Newton’s “soup”, a rich data store kind of like a simple object or graph database, that all applications shared. It lets you represent the kind of complex structured data found in the real world, like address books and email, in structured schema that make it globally useful, without having to write a bunch of single-purpose APIs like the iOS/Mac AddressBook framework. From there you can go on to add replication features and a really-global namespace, like IPFS…

                                                                                Networking needs a do-over too. Not so much at the abstraction level, but better APIs. One of the things I only really learned this year is what a nightmare it is to write real-world TCP networking code on the bare-metal POSIX APIs. It looks simple at first — socket, bind, connect, send, recv — but doing it well requires reading a stack of fat hardback books by Richard Stevens. And don’t even get me started about adding TLS!
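
As a small taste of that ceremony: even a minimal, address-family-agnostic TCP connect already needs getaddrinfo() and a try-each-address loop, before timeouts, partial send()s, SIGPIPE and TLS even come up. Standard POSIX sockets code:

```c
/* Minimal "correct-ish" TCP client connect: resolve, then try each address. */
#include <netdb.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int tcp_connect(const char *host, const char *port) {
    struct addrinfo hints, *res, *ai;
    memset(&hints, 0, sizeof hints);
    hints.ai_family   = AF_UNSPEC;    /* IPv4 or IPv6 */
    hints.ai_socktype = SOCK_STREAM;

    int err = getaddrinfo(host, port, &hints, &res);
    if (err != 0) {
        fprintf(stderr, "getaddrinfo: %s\n", gai_strerror(err));
        return -1;
    }

    int fd = -1;
    for (ai = res; ai != NULL; ai = ai->ai_next) {   /* try each address */
        fd = socket(ai->ai_family, ai->ai_socktype, ai->ai_protocol);
        if (fd < 0) continue;
        if (connect(fd, ai->ai_addr, ai->ai_addrlen) == 0) break;
        close(fd);
        fd = -1;
    }
    freeaddrinfo(res);
    return fd;   /* -1 if every address failed */
}
```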

                                                                                1. 4

                                                                                  I still really regret the way the NeXT folks really took over Mac OS development. Yes, classic Mac OS was a train wreck, but there were still a lot of really good ideas inside Apple that were shunned because of the acquisition.

                                                                                  1. 4

                                                                                    I was there at the time, and argued a lot with the NeXT folks, but later decided they were mostly right. And of course the marketplace agreed.

                                                                                    ’90s Apple had some great ideas, but they weren’t implementable on an ‘operating system’ made out of popsicle sticks and rubber bands. Their major effort at a new OS (Pink/Taligent) was too blue-sky and expensive, and trying to make old and new app APIs coexist in a single process (Copland) was doomed to fail.

                                                                                    1. 2

                                                                                      I do feel like the politics of the acquisition (which I was late to, starting there in ’04) were really toxic, and a lot of babies were thrown out with the classic bathwater. Oh well.

                                                                                      1. 6

                                                                                        a lot of babies were thrown out with the classic bathwater

                                                                                        I hear you there, the classic Mac OS had plenty of problems, but it also had a lot of great features that you just don’t have in even the most advanced modern operating systems.

Take, for example, the GUI-first approach to file and system management: you could install an entire OS with drag and drop. You could keep multiple operating system versions on one drive and switch between them by simply renaming a directory and rebooting. In the older, smaller versions you could simply drag your operating system onto a floppy disk and suddenly it was available in the startup disk selector. You could go into a GUI utility to create a RAM disk, drag and drop your OS onto it, select it as your startup disk, reboot, and be running entirely from RAM, then unmount your hard disk for super low power computing on a PowerBook.

                                                                                        1. 2

                                                                                          I always resented file extensions, being an old classic guy.

                                                                                    2. 2

Modern Apple is NeXT wearing an Apple skinsuit. I noticed a lot of the people who toe the party line over at Apple were the ones from the NeXT acquisition, whereas the ones less so were there before.

                                                                                      1. 1

                                                                                        Yeah. I started there in 2004, so feelings were still pretty raw. There were epic arguments about e.g. file extensions, which … yuck.

                                                                                  2. 5

Common object formats, and a system runtime that isn’t C first and foremost. Like the VMS of old, you should be able to mix and match languages at will.

Also, there should be some kind of blob object format that the system can add apps to, to import and export formats in a workflow; not as strict as the Amiga, more like COM with IUnknown. As a matter of fact, COM everywhere, and over the network. …

How did MS get it so right and yet so wrong? They didn’t go far enough with ODBC; they were almost there with media player codecs, but had nothing like that for still images; they had codecs for audio and video, but nothing for 3D. And relying on 1:1 IPv4 really made DCOM deployments a living hell. They fixed a lot of that with .NET Remoting, but seem to have given up.

XML was the best data format, as it explains itself, but the Python people screwed up by trying to manipulate it directly without any data abstraction, and now we have the absolutely inferior JSON.

                                                                                    It’s all a mess.

                                                                                    1. 5
                                                                                      • It should transcend machines. Ideally, the OS should have an immutable part and a mutable part (my data + all installed programs). That way, I’ll be able to push and pull around my workspace across machines.
                                                                                      • It should have no processes. Processes are just hidden data. Let’s just have functions instead. All programs shall be functions (that is, return something).
                                                                                      • Program composition capabilities. If programs are functions we can chain them. Similar to pipes but simpler to reason about.
• File extensions are broken. File extensions are an afterthought to files having an associated program. I’d call this file types instead and store it in a separate field.
                                                                                      • Much, much better filesystem. Atomic transactions. Plus structured data similar to a DB.
                                                                                      • File level data deduplication built into the filesystem.
                                                                                      • Undo/redo for the entire filesystem. Even for installing programs. If I’m unhappy, I should be able to undo.
                                                                                      • Allow branching, merging, similar to Git.

All the above are ideas I had in mind for a long, long time, but all of them are reality now; it’s called the Boomla OS. I hope that’s not cheating just because I’ve made it real. :P

                                                                                        1. 1

Ha, indeed! The one on package isolation is also a big one; I am totally on the same page with you there. Just like hardware / software isolation.

                                                                                        2. 1

                                                                                          Some of the bits about branching/undo/mutability might be resolvable with ZFS. The UI for it could be much better though.

                                                                                          File extensions are broken. File extensions are an afterthought to file’s having an associated program. I’d call this file types instead and store it in a separate field.

                                                                                          The original Macintosh had creator types.

                                                                                        3. 5

                                                                                          Let’s go for my dream desktop then.

                                                                                          A user interface which doesn’t rely on good luck and processing speed to avoid annoying the user.

• Don’t steal focus because you took a while to launch and the user got bored, but then you popped up and they pressed Return/Space/Esc and just cancelled / agreed to something / no idea what happened.
                                                                                          • Don’t get in the way of the scrolling / task switching / actual work the user was doing because whatever it was that wasn’t keeping the user interface snappy was more important.
                                                                                          • Don’t cause typing latency to drop below … whatever a good threshold is.

                                                                                          Design for this system being owned and used by one person. A personal computer. If you don’t have to worry about user A being naughty and accessing user B’s RAM/files/sockets, perhaps that frees up time to work on making the thing pleasant to use.

                                                                                          1. 4

                                                                                            Don’t cause typing latency to drop below … whatever a good threshold is.

                                                                                            I’m not sure that would be a welcome feature. :D - Maybe above a threshold?

                                                                                            1. 2

                                                                                              Oh good call!

                                                                                            2. 3

                                                                                              I believe BeOS/Haiku at least tries to optimize for “being owned and used by one person”.

                                                                                            3. 4

                                                                                              I suspect that the “why” is more important than the “what” here, so I’ve tried to include some of both.

                                                                                              Things I’d want to see either implemented or explored in a new OS:

                                                                                              • A focus on seamless (less than 3 seconds), extremely fast app installation that doesn’t require admin permission:

…which is basically what browsers do, except we call them “webpages” instead. Browsers are very comparable to an OS and have gained quite a lot of marketshare off Windows/Linux/etc, and the desktop desperately needs to learn (some of) their lessons.

Other things that BrowserOS does really well:

• Trivial cross-device syncing and support
• Support for every device that matters
• Inherent network transparency
• ALWAYS up to date! (it kind of cheats with online-only requirements though, and I remember people constantly complaining about Facebook UI changes back when I still used it)

I think this is important for lowering barriers to entry - it’s generally much faster to check out a website than it is to install and run a program. And by extension, this makes it more likely people will buy into the ecosystem for new reasons and stick around.

                                                                                              • Built-in payment system: So today, we pay for EVERYTHING online - either with tangibles like money, or with intangibles like ads and data. Why are intangibles preferred? Convenience, probably - intangibles are universally supported on every browser unless you install an adblock or something, and are literally zero-click and always have been. So intangible payment means you don’t need to require registration etc.

And more broadly, Free Software is fundamentally about tilting developers’ incentives toward helping the user. It does this by giving the user power, and “where the money comes from” is by far the biggest source of power around.

We’re currently relying heavily on corporate funding, but:

1) that means Free Software is still beholden to the donor corporation(s) power-wise
2) corporate software doesn’t actually help individuals, and a corporation’s Freedom ultimately isn’t very important - employees don’t have power over their employer’s IT system anyway, so enterprise software being Free isn’t inherently very important for individuals (we mostly just care about the side-effects).
3) the stuff corporate software cares about is often poorly suited to general consumer use-cases - your home server does not benefit from easily spanning 3 continents, and the requirements of extremely-scalable software often make the software much harder to maintain by 3 volunteers in their spare time.

This isn’t new; here’s an xkcd from 2009 commenting on it: https://xkcd.com/619/

                                                                                                In other words, corporate funders are temporary allies, not friends.

                                                                                                So, we need to make it as convenient as possible for random member-of-public users to send money, and the best way to make it convenient IMO is to ensure it’s supported by the OS OOTB.

                                                                                                In particular, I think distros should natively support paid Free Software in their repos - plenty of Free Software devs sell GPL’d software, where they provide the source code gratis but sell the convenience of pre-compiled binaries. Distros constantly undercut that, which is legally allowed but at the cost of undercutting one of the few direct-from-user funding models that aren’t “DONATE MONEY PLS”.

                                                                                              • Native “faceted” ID system (more “explore” TBH):

People have different facets of their identity - e.g. you act differently in bed than you do with a child (if otherwise, please tell the cops) - and people deliberately avoid using the same account for e.g. LinkedIn and Pornhub, because they present different facets of their identity in those two scenarios, and those facets should be kept separate.

                                                                                                By “faceted” I mean having an identity tree or DAG, where your root identity has power over child-node identities and can prove ownership, but (most?) outsiders can’t prove any relation by default.

                                                                                                So ideally, what this means is that you don’t need to create a new account for anything - the program/website just perhaps asks for permission to grab an autogenerated pseudonymous account and you click okay, and off you go, zero barriers to entry.

                                                                                              • A first-principles re-design of the hardware input device:

                                                                                                Keyboard and mouse were chosen for historical reasons. Software is designed around using your existing keyboard and mouse, and the keyboard and mouse are used so as to be able to use the software - a chicken/egg problem.

KB+M have two main problems: they’re hard to use without a surface to rest them on, and the keyboard lacks discoverability for its shortcuts. The touch-screen is one step forward, one step back, as touch-screens don’t have tactile feedback (i.e. you can’t feel where your fingers are).

                                                                                              • A multi-device “operating system”:

So nowadays people might have several of:

• A desktop
• A laptop
• A tablet (iPad etc)
• A phone
• An e-reader (maybe)
• A smartwatch
• A smart-TV
• etc etc etc

                                                                                                Yet most Linux distros don’t have any sort of over-arching system to handle them OOTB. There’s stuff like NextCloud, which is a massive pain to set up on all devices, and it really seems like there needs to be standard software infrastructure to handle connections between your set of trusted devices in a convenient fashion.

I think it would be useful to separate device-specific config from device-independent config (like “disable wifi because this specific laptop’s wifi driver leaks” VS “disable wifi because I have a system that queues up network tasks and does them all in one go for better battery life”). IDK.

Honestly, I’d settle for someone coming up with a name that distinguishes a multi-device “personal nexus” from a single device’s operating system like FreeBSD. Using the term “operating system” for two different concepts sucks. If someone more educated than me has a Proper Name for what I’m describing, please go ahead and mention it, as long as it’s not “operating system”.

                                                                                              • A better migration system:

                                                                                                To be fair, this isn’t really new or anything academically sexy, it’s just something that’s typically mediocre. Apparently Apple does it quite well, but I can’t say personally.

                                                                                                The more your OS relies on people configuring it rather than having one-size-fits-all defaults, the more important an easy and seamless migration system is. Also, the more people tend to upgrade their hardware or buy hardware they expect to replace in an average of 2 years’ time because the battery et al are not replaceable, the more important an easy and seamless migration system is.

                                                                                              • The freebie: a better documentation system:

Based on the theory that there are four types of documentation (and man pages don’t consistently provide more than 1 of the 4, and don’t cleanly separate them): https://documentation.divio.com/

                                                                                                I don’t really have much to add for this one.

                                                                                              1. 4

                                                                                                I think that there should be a strong bias towards exposing program internals, and a way of browsing those.

For example, command line tools should provide enough metadata about their command inputs that you could generate a GUI form to call something like grep based on that metadata.

                                                                                                In a similar sense, though a program might have a “defined entrypoint”, there should be a lot of encouragement to expose alternative entrypoints to use subsystems as needed. For example, be able to directly call a browser’s rendering layer and get some render result/paint to a screen. Maybe more simply, be able to “pry open” the GUI settings menu easily and just find some function you could call directly to toggle certain settings (think Applescript).

I do think the “Birth & Death of JavaScript” talk also covers some interesting ground. If we say “let’s not support C” (or, perhaps more cleverly, “we support C through a virtualization abstraction à la Emscripten”), then we could potentially build a more type-rich system for most programs.

                                                                                                The thought experiment: if every program’s base was written in Python, it would be fairly easy to expose Smalltalk OS-style “dictionary of commands/variables” for various programs.

                                                                                                1. 1

                                                                                                  While probably not going quite as far as you wish, I’d like to point out snakeware as a Python-based userspace.

                                                                                                  I’d also make the point that exposing additional entry points is extra work, and people could get mad at you if you ever need to change them. This could result in change being slow, and rife with backwards compatibility hacks everywhere. Emacs is probably a good example of this, but I don’t actually know first-hand if it is.

                                                                                                  I do very much like the idea that additional entry points should be strongly encouraged, though. Or the general idea of easing “librarification” of large applications.

                                                                                                  1. 1

On Linux, in theory, a program can also be a shared library. In fact, one can execute libc.so from the command line. And the same can be said for AmigaOS: a program there could also be a shared library, but I never saw one, nor did I ever get around to doing a “proof-of-concept.”

                                                                                                  2. 4

A while ago I was thinking that command line interfaces are just a poor man’s API, and that a shell is just a poor man’s IDE. In this line of thinking, a program is just a function (or a collection of functions, almost like a library; you could also extend the idea to allow programs to save some state between executions, so that it becomes more like an object). In short, it would be nice to unify bash scripting and programming, and provide a shell that is able to display structured data (lists, structures, but also things like tables, images, LaTeX, and graphs - and I mean both the mathematical structure and the thing you can make in Excel). Also, a nice interactive interface/autocomplete for calling functions would be good.

                                                                                                    1. 4

                                                                                                      Hints that you have it wrong, no matter what you do:

• Can I screw up other users and/or make it unusable with a fork bomb? (see the sketch after this list)
                                                                                                      • Can I turn the system into treacle with over allocation?
• If I trip over a cable… and yank the power, does it take ages to recover, and is anything in an inconsistent / not-working state when it reboots?
• If my neighbouring system dies… does the combined system take ages to recover, and is anything in an inconsistent / not-working state when it reboots?
                                                                                                      • If a user refuses to grant an app a permission, does it degrade gracefully, or does it sulk and continuously ask for more permissions and refuse to do the things it has permissions to do?
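
For contrast with the first two bullets, the blunt instrument Unix offers today is per-process rlimits (note: RLIMIT_NPROC is a Linux/BSD extension rather than base POSIX, and the limits below are arbitrary examples). A new OS would want these guarantees by default rather than as opt-in knobs.

```c
/* Capping processes and address space before running an untrusted workload. */
#include <stdio.h>
#include <sys/resource.h>

int main(void) {
    struct rlimit nproc = { .rlim_cur = 256,       .rlim_max = 256 };
    struct rlimit mem   = { .rlim_cur = 1ul << 30, .rlim_max = 1ul << 30 };

    /* Cap the number of processes this user may create (fork bomb). */
    if (setrlimit(RLIMIT_NPROC, &nproc) < 0) perror("RLIMIT_NPROC");

    /* Cap this process's address space (runaway allocation). */
    if (setrlimit(RLIMIT_AS, &mem) < 0) perror("RLIMIT_AS");

    /* ... exec the untrusted workload here ... */
    return 0;
}
```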
                                                                                                      1. 4

                                                                                                        First-class unicode. UTF-8 everywhere, and include libicu by default.

                                                                                                        1. 4
                                                                                                          Overview

Okay, I’ll try to summarize some of my thoughts. I am working on some of these problems myself and will publish my MVP here soon-ish. The way I visualize the system is that each computation is a ray of light, and the user collects these rays and weaves them into different structures. Some users choose to archive their creations while others trace a more ephemeral existence.

                                                                                                          Functional core, imperative shell

                                                                                                          Declarative system configuration like NixOS and all mutable state is indexed for easy traversal, backups, version- and access control (more on this below). Most importantly, any piece of state should be available for viewing as data, you should be able to save a window, workspace (or other abstractions of the same sort) to disk and then reopen in the same state again later.

                                                                                                          Lisp everywhere

                                                                                                          Anyone should be able to program and the simplest way to accomplish that is to have the simplest possible syntax (only one rule) and a good architecture (to make sure the user is only interacting with safe DSLs at the surface and then slowly they can peel off the layers e.g. expand the macros).

                                                                                                          Explicit Trust Boundaries

The job of the operating system is to route information around; it’s important to know where information came from and what it’s to be used for. You should be able to trace any information as it moves through the system, and you should be able to ask what the legal “operator” words (in the lisp expression) are at any given point in time. As data moves from untrusted to trusted you should be “scrubbing” data of its previous owner and having one or more of your identities take ownership of the data by signing it as reviewed.

                                                                                                          Defense in Depth

You should have control from the very first code that runs on your CPU (HEADS project), and your hardware should be the true platform, like the direction Genode is going in; that is, you should be running your trusted core software on seL4 and on certain CPU cores, etc. This will give you a system that allows you to spin up Linux (or some other application platform, maybe ChromeOS if you like the web) on other cores for full hardware separation (where desired).

                                                                                                          Built-in backups

                                                                                                          The network should meld into your computer and you should be able to fully recover your computer from the network via your initial secret (which should never be stored on the computer and only interact with it via provably secure protocols). In practice this means you are also providing this service for others, kind of like a final form IPFS. The public part of your userspace could be a mutual dependency with others who index their state similarly to you.

                                                                                                          Internet Index

Your indexed state should be a part of one giant CRDT of everyone’s state, i.e. the indexes that you have into “active state”, behaviors of the computer or public data should have convergent semantics, so that the whole internet index is formed from all of the users of the internet trying to make sense of what is going on. The local index should be traversable with a simple menu, and it should also index sources that others publish their constructions to. This way you can crawl other indexes and extend your own (i.e. “pinned” data). I’m working on this part.

                                                                                                          Web of Trust

                                                                                                          You should be able to choose which sources of information to trust on which topics and the OS should provide you with tools to update your beliefs effectively. All protocols (from email through instant messaging and to systemd-like alerts) should be aggregated, bucket sorted (into the feeds you construct to keep tabs on the information that concerns you) and then ranked (based on the context, i.e. feed it’s in, how much you trust the sources and the priors the sources provide - i.e. their confidence in what they’ve sent out into the world).

                                                                                                          Identity Management

                                                                                                          The OS should be very aware of which identity you want active for the task that you are performing and it should also make it clear which identities have access to which resources. The identities should be domain specific so that others can filter out the parts of you that they don’t trust but subscribe to your ‘high signal identities’ (from their perception). You are of course not obligated to reveal that your identities are all controlled by you.

                                                                                                          Explicit information topology

Rather than having pipes and strings you should have types and string diagrams. You can still use pipes and strings when you are constructing a single-threaded pipeline-like thing; this is obviously also expressible as a string diagram and will hook into the whole picture. Any boundary should be introspectable and redirectable. Furthermore, you should not have to pre-type things; you should be able to build and extend pipelines ad hoc and then reify them in the ‘type-tetris’ topology when they mature.

                                                                                                          AGPL

The code for the whole system should be indexed by the system. Every user would be exposed to reading at least some of it. The whole design of the system should encourage reading and reviewing the code/data, and the review you would give would be in terms of what you are capable of reviewing. For example, a novice user could review whether the code feels understandable and should be able to highlight the parts that feel “scary”, while someone more advanced would be expected to give more detailed feedback.

The system could update its belief about the user’s technical competence based on how they interact with it and present different (default) reviewing criteria / axes based on that. Each user would sign their review and publish it as metadata for the reviewed piece of code; this review would then join others on the web of trust, and using similar inference as discussed above this could give others the information they need to choose which code needs more thorough vetting (and which code they can probably trust).

The developers of the system should be attaching confidence levels to the security of a piece of code and marking every side effect in the code base. This would not be a requirement, but popular extensions to the system would be reviewed so often that eventually they would be “completely documented”. Over time the trusted computing base would grow, and the number of capable users as well (from extending the excellent documentation / explanation of the code as well as marking every known side effect).

                                                                                                          Implementation Plan

                                                                                                          Start by building the networked part and have the network build the system from the inside out. The most important (missing) parts are:

                                                                                                          • The convergent index
                                                                                                          • The web of trust
                                                                                                          • The code review tools

                                                                                                          Everything else should follow from this. Coincidentally I have dedicated my life to building these three tools in that order.

                                                                                                          1. 4

                                                                                                            The kernel would be designed for:

                                                                                                            • Capabilities.
                                                                                                            • High assurance.
                                                                                                            • Hard realtime.
                                                                                                            • Mixed criticality.
                                                                                                            • Strong task isolation.
                                                                                                            • Drivers in userspace.
                                                                                                            • Formal proof of kernel’s correctness.
                                                                                                            • High performance IPC.
                                                                                                            • Low overhead.

                                                                                                            In short, to save a lot of work, the new OS would be built on top of seL4.

                                                                                                            With that out of the way, here’s the wishlist:

                                                                                                            • Focus on low latency.
                                                                                                            • Network transparent, architecture agnostic IPC (like plan9).
                                                                                                            • Filesystem concept should be built around the network transparent IPC.
                                                                                                            • Event-driven. (no unnecessary timers/wakeups, battery friendly)
                                                                                                            • Fault tolerance. (like minix3)
                                                                                                            • Datatypes. (like AmigaOS’s)
• POLA. (e.g. a datatype would be able to receive/send data in the format it handles on one end, receive/send a canned format on the other, and nothing more. A browser would not have broad fs access; a file dialog running as a separate process would be shown to handle exceptions to that, e.g. for uploads/downloads. A rough sketch of this follows after the list.)
                                                                                                            • Low memory footprint. (i.e. not bloated)
                                                                                                              • System should work on really memory starved computers, such as microcontrollers or 32-bit desktop computers from the 80s.
                                                                                                              • Where memory can be traded for large performance benefits, a knob should exist.
                                                                                                            • Highly cohesive desktop environment. (like beos/haiku)
                                                                                                            • Drag and drop install of applications, drivers and services. (like beos/haiku)
                                                                                                            • Secure input mode. (like nitpicker xray mode)
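A minimal sketch of the POLA point above, with hypothetical names: the datatype is handed exactly two endpoints, a source in its native format and a sink for the canned format, and has no ambient authority to reach anything else.

```python
# A pretend decoder granted exactly two capabilities: a readable source in its
# native format and a writable sink for the canned format. It cannot name or
# open anything else.
import io
from typing import BinaryIO

def webp_datatype(source: BinaryIO, sink: BinaryIO) -> None:
    payload = source.read()            # capability #1: read the native bytes
    sink.write(b"CANNEDBITMAP\n")      # capability #2: emit the canned format
    sink.write(payload)

# The system, not the datatype, decides what those two endpoints are:
src, dst = io.BytesIO(b"fake-webp-bytes"), io.BytesIO()
webp_datatype(src, dst)
print(dst.getvalue().split(b"\n")[0])  # b'CANNEDBITMAP'
```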
                                                                                                            1. 3

                                                                                                              I’d love to have something more Emacs-like, in that the full system is totally explorable and manipulable by a user, without any special magic. I don’t want multi-user; I don’t want POSIX.

                                                                                                              I’ve always wanted something like the idea of Taligent, where the distinction between applications and the OS is elided.

                                                                                                              1. 3

                                                                                                                I’ve been playing around with something like this; I found a luajit init which sits on top of Linux, and I got my emacs-like text editor ported to run on it: https://github.com/technomancy/ljos/commit/6d0bc7c303a2a4cf85cfd75022c4c9060b269744

                                                                                                                It’s ridiculously rough, but I like the idea of an OS where I can read all the source and potentially understand everything. (In this case, everything outside the kernel, but still, that’s a huge improvement over what I have today.)

                                                                                                              2. 3

                                                                                                                Assuming we have schemes, have a scheme for configuration:

                                                                                                                Let me read/add/modify configuration through “normal” file APIs, without me having to worry in which format the config is stored.
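A minimal sketch of what that could feel like, assuming a hypothetical cfg_read/cfg_write API: programs address configuration by plain keys and never learn the on-disk format (JSON here purely for illustration; it is the store's business).

```python
# A minimal sketch of format-agnostic configuration access; the backing store
# and key names are hypothetical.
import json
import pathlib

STORE = pathlib.Path("config-store.json")   # hypothetical backing store

def cfg_read(key: str) -> str:
    data = json.loads(STORE.read_text()) if STORE.exists() else {}
    return data.get(key, "")

def cfg_write(key: str, value: str) -> None:
    data = json.loads(STORE.read_text()) if STORE.exists() else {}
    data[key] = value
    STORE.write_text(json.dumps(data, indent=2))

cfg_write("network/hostname", "lobster")
print(cfg_read("network/hostname"))   # lobster
```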

                                                                                                                1. 3

Kinda similar to my “structured data in files” idea. It would be great if operating systems could say “this is the format we use, and this is the format you will also use”. XML, JSON, YAML, TOML, and the rest: all these incompatible data formats have cost people thousands of hours, not to mention that the program on the other side might not understand what you’re speaking.

                                                                                                                  1. 2

If my “I don’t even care anymore what your config store looks like underneath, just implement the file API” doesn’t work out, then “you are going to use XML, and I don’t care how much you whine” is the close runner-up approach. :-)

                                                                                                                2. 3

A thing I think is kinda neat is to promise future backwards compatibility at the dylib/so/dll layer rather than at the actual syscalls. Windows does this with kernel32.dll and AIUI it’s the thing that makes Wine possible as a mere userland thing, rather than needing special kernel support like you do for implementing a Linux compatibility layer.

                                                                                                                  1. 2

macOS and Solaris do the same. On macOS, libSystem.dylib provides the syscall interface and they change the kernel syscall ABI between minor releases. Go violated this by issuing syscall instructions directly, because they didn’t want to support dynamic linking, and so ended up with every single Go program breaking when the kernel ABI for the gettimeofday system call changed. Solaris provides a dynamically linked libc.so and reserves the right to change the interface between the kernel and libc at any time.
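To make the distinction concrete, here is a minimal sketch (assuming Linux/glibc and x86-64, via Python's ctypes) of the two ways to reach the kernel: through the dynamically linked libc wrapper, which is the interface the OS promises to keep, versus issuing the raw syscall number, which bakes in exactly the kind of ABI detail that broke Go on macOS.

```python
# A minimal sketch, assuming Linux with glibc on x86-64.
import ctypes
import ctypes.util

libc = ctypes.CDLL(ctypes.util.find_library("c"), use_errno=True)

print("via wrapper:", libc.getpid())     # stable, versioned userspace interface
print("raw syscall:", libc.syscall(39))  # 39 == getpid on x86-64 Linux only;
                                         # this number is a kernel ABI detail
                                         # nobody promised to keep stable
```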

                                                                                                                    This can somewhat complicate the backwards compatibility story. In FreeBSD, the syscall ABI is stable but syscalls are removed in major releases and added back in with COMPAT options. You can build a kernel with or without the compat interfaces. If you build a kernel with COMPAT4 support, it will run FreeBSD 4.x binaries. This is great for jails because you can install the entire FreeBSD 4.x userland in a jail. Almost. Unfortunately, FreeBSD doesn’t make the kernel ABI for control interfaces stable, so things like ifconfig won’t work. If you want a 4.x jail, you need to populate it with newer versions of a few tools. With Solaris, older-version zones come with newer versions of a bunch of libraries for the same reason.

                                                                                                                    Linux goes to the opposite extreme and guarantees backwards compatibility for all of the interfaces. This then has problems such as the ifconfig issue where the underlying interfaces weren’t expressive enough for the newer hardware and it was easier to write a completely new userspace tool. The APIs used by ifconfig still exist on Linux, they just aren’t useful.

                                                                                                                    It’s difficult to put these boundaries in the right place. For example, on Windows there isn’t even a kernel-mode 32-bit compat layer: 32-bit programs run as 64-bit programs that map shim libraries into the program’s address space that do trampoline calls into 64-bit DLLs for any kernel functionality. This can include device drivers. You typically don’t do ioctl-like things directly in Windows, you get a kernel driver and its matching userspace DLL and no guarantees of anything if you try using the wrong version of the userspace DLL, so there’s no guarantee that any given device driver has a stable ABI at the kernel boundary (though the userspace DLLs typically expose versioned COM interfaces for backwards compatibility).

                                                                                                                    1. 1

                                                                                                                      Thank you for the super interesting reply! I did not know about where the boundary was drawn anywhere other than in Linux (syscall layer) and Windows (some dlls).

                                                                                                                      It’s difficult to put these boundaries in the right place.

                                                                                                                      Are there any particular disadvantages to setting the boundary towards the “some dlls” side? I’m thinking, okay, maybe you can’t have purely statically linked binaries any more, but really this just means you can statically link everything except the OS’s compatibility layer. (Which is roughly what you get (*) when compiling programs with GHC: it produces ELF files that have dependencies on libc and any C dependencies you listed, but everything that came from another GHC package got statically linked.)

                                                                                                                      (* I’m not sure if this is still true because AIUI there’s some support for dynamic linking now but I haven’t looked / paid attention in a few years.)

                                                                                                                      1. 3

                                                                                                                        There are some problems with foreign ABIs. The NT kernel makes a lot of assumptions about what’s in a process and these don’t apply to foreign processes. WSL needs a completely different process kind. It’s a bit easier on *NIX where there’s basically only one kind of upcall: signals. An ABI in FreeBSD defines a syscall table, signal layout, initial stack / memory mappings, binary format, and so on (Linux doesn’t have a coherent abstraction for this but implements all of these in different scattered places). With the kernel boundary as a public interface, it’s easy for FreeBSD to implement all of these. It would even be possible to implement a port of Solaris libc or macOS libSystem that use the native FreeBSD ABI and forwarded signals to the user’s ABI. This is roughly what Arm is doing for CHERI Linux at the moment: providing a shim layer in userspace that does the bounds checks and then forwards things.

                                                                                                                        Some of these compat layers are more difficult to do in userspace in POSIX than Windows. In Windows, all OS resources are HANDLEs, which are pointers (though not all HANDLEs are kernel resources: one of the nice things about the Win32 ABI is the ability to have both kernel-managed and userspace-managed resources behind a single interface). In POSIX, they’re file descriptors, which are ints. File descriptors have some quite exciting semantics, such as the requirement that a newly allocated FD is always the lowest-numbered unused integer value. On Windows, it’s easy to wrap things that create a new HANDLE with something that passes back a pointer to the consuming code that points to a data structure that tells you about the underlying type. On *NIX, this requires keeping a shadow FD table in userspace and maintaining all of the semantics of fun things like dup and dup2 in this table.
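A small demonstration of the POSIX rule in question, using Python's os module on a POSIX system: a new descriptor always takes the lowest unused integer, and dup2 lets callers demand a specific slot, both behaviours a userspace shadow FD table would have to reproduce exactly.

```python
# Demonstrates the lowest-unused-integer rule and dup2's pick-a-slot behaviour
# that a userspace compat layer would have to emulate (POSIX assumed).
import os
import tempfile

path = os.path.join(tempfile.gettempdir(), "fd-demo.txt")
a = os.open(path, os.O_CREAT | os.O_WRONLY)
b = os.open(path, os.O_WRONLY)
os.close(a)                                  # free the lower-numbered slot...
c = os.open(path, os.O_WRONLY)
print("reused lowest slot:", a == c)         # True: the kernel filled the gap
d = os.dup2(b, 100)                          # callers can also demand an exact slot
print("dup2 honoured the request:", d == 100)
for fd in (b, c, d):
    os.close(fd)
os.remove(path)
```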

                                                                                                                        You also bake bits of the linkage model into your userspace ABI. The FreeBSD kernel knows how to load ELF, a.out, and a few other formats of binaries but doesn’t know how to do any linking (for userspace, at least - it does some linking for kernel modules). It loads a dynamic linker from the path specified in an ELF header for dynamic binaries. On Linux, the VDSO is an ELF object and so programs that want to use it must be able to do ELF linking. If Linux ever wants to shift to a different binary format then this gets complex. For the small amount of stuff in the VDSO, it’s pretty trivial, but supporting all of the stuff Windows puts in DLLs that are mapped into every process requires processes to support a lot of complex PE/COFF stuff, so an ELF environment on Windows would require a huge amount of PE/COFF machinery as well as the ELF loader.

                                                                                                                        We’ve done some work on FreeBSD to pull a libsyscalls.so out of libc. This contains the C wrappers for the kernel syscalls and is largely motivated by sandboxing: if we pull it out of the bottom of libc, we can provide a different implementation and run FreeBSD libc on top of something that does CHERI ccalls or whatever to talk to a host environment without allowing sandboxes to talk directly to the kernel. Programs using the same libc.so can run natively or in a CHERI sandbox. In theory, we could even compile libc to wasm and provide a libsyscalls that sat on top of WASI.

                                                                                                                        1. 1

                                                                                                                          We’ve done some work on FreeBSD to pull a libsyscalls.so out of libc…

                                                                                                                          This whole thing sounds neat. :)

                                                                                                                  2. 3

                                                                                                                    I don’t have much to contribute, but this reminds me of the last time I saw someone ask what a new OS should have (https://groups.google.com/g/comp.os.minix/c/dlNtH7RRrGA/m/SwRavCzVE7gJ)

                                                                                                                    1. 3

                                                                                                                      Paging @akkartik on this one, but I think an OS with a deep level of support for test doubles related to IO and other process-external state would be a very nice thing to have. Being able to, for instance, automate a desktop program by having it render graphics to a null device and sending it keystrokes, or having good examples for how to fake out certain types of device drivers (virtual smart cards stand out in my memory from the last year).

                                                                                                                      1. 3

                                                                                                                        I have two wild ideas about OS’s that I never have time to really develop. One is for the graphical interface, the other for the OS structure in general:

• Combine graphical applications as if they were layers in Photoshop: one of the key limitations, to me, of graphical interfaces is that they are based around a desktop metaphor, where the screen is an office table and the windows are sheets of paper. This prevents applications from talking to one another and being combined with one another the way commands can be combined using UNIX pipes. (I think Alan Kay pointed this out in his famous OOPSLA talk.) But it is difficult to imagine graphics applications talking to one another if you see them as sheets of paper lying on top of each other. The only place I have ever seen one “functionality” being applied after another in a graphical environment is the layers of Photoshop, where a blur is applied on top of a color correction. This is straightforward in the case of graphics processing, but if it were applied to other applications, you could have an application that “corrects the spelling” and another one that “holds notes” and then apply one on top of the other, as if you were working on a light table instead of an office desk.

• Substitute “everything is a file” with “everything is a URL”: the original UNIX metaphor does not scale well outside of a single server, which is fine because we have learned to string servers together via something called the Internet, but what if we applied the larger scale to the smaller scale? Every command from the user is considered a request to be served, all system resources are identified by a unique resource location, and the role of the system is to match requests to resources. In the same metaphor, a filesystem based around database-like tables fits very well (I think BeOS introduced this idea), removing the hierarchical tree of directories. The manual pages can directly be Stack Overflow. That does not mean that everything is a remote request over the network: resources that are requested often can be cached locally, just like a browser does.
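A minimal sketch of the “everything is a URL” idea, with hypothetical handler names: every user action becomes a request, handlers are registered per scheme, and repeated requests are served from a local cache the way a browser would.

```python
# A minimal sketch of request-to-resource matching; schemes and handlers are
# purely illustrative.
from urllib.parse import urlparse

handlers, cache = {}, {}

def handler(scheme):
    def register(fn):
        handlers[scheme] = fn
        return fn
    return register

@handler("file")
def serve_file(url):
    return f"<contents of {urlparse(url).path}>"

@handler("man")
def serve_manual(url):
    return f"<answers about {urlparse(url).netloc}>"   # could be backed remotely

def request(url):
    if url not in cache:                 # cache often-requested resources locally
        cache[url] = handlers[urlparse(url).scheme](url)
    return cache[url]

print(request("file://localhost/home/notes.txt"))
print(request("man://cat"))   # a second identical request would hit the cache
```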

                                                                                                                        I hope those two ideas are interesting to anyone. I just wished I had the time to pursue them.

                                                                                                                        1. 3

                                                                                                                          I think one thing that is missing from modern OSes is a “community”-oriented aspect. Most of what people use computers for is to connect with other people. Unices are multi-user systems, but they’re rarely genuinely used that way anymore, and they’re not great when you do. My university offers a unix shell account but I don’t really use it much, because there’s not really anything there: there’s not a chat, there’s not a bbs, there’s no way to discover other users, etc. 9grid is pretty cool on that front, you can share pretty much anything to another user very quickly, share computing resources, all sorts of neat stuff.

                                                                                                                          the closest thing I can think of that does this sort of OS+social network idea is urbit, but I have some ideological issues with the platform that I don’t think can be solved.

                                                                                                                          1. 2

                                                                                                                            I think there’s no community aspect because there’s no inherent community group for the OS to build around - forums are basically just random members of public after all, whereas the original university computers were explicitly built around university members AFAICT (the TUI for useradd explicitly, hardcodedly prompts for the user’s room number IIRC).

I don’t see how you could make a sane OS+social network system without improving the underlying social reality the OS is intended to reflect - if there are no ties binding the community together other than mutual interests, then there’s nothing preventing anyone from selling out, or only joining in order to spam or push malware.

                                                                                                                          2. 2

The essence of the use of an OS is either to be able to run server-like services in a really stable and secure fashion, to let users consume content like games and video, or to let developers and other creators make whatever their heart desires. It should also provide enough low-level access so that the networking, measurement and benchmark people can get the insights and control they need.

There are guaranteed to be other use-cases as well, but these are the main five I can think of right now. A new OS should, IMO, focus on doing these five tasks really well: it starts with a bottom layer of excellent logging and debugging facilities (like valgrind, dtrace and dmesg, perhaps a virtual machine that can provide insights; it should be a complete system and feel like a whole), a way to run server applications in containers on top of that (like Docker images, small virtual machines, or server applications running in a VM), then a separate desktop system where users can consume to their hearts’ content (like iOS or Android), together with a keyboard-driven desktop environment for developers (like Sway) and a mouse-driven desktop environment for creating things (like BeOS, Amiga and Windows).

                                                                                                                            The user should then be able to switch between the five modes with ie. F1 to F5, at any time, if they have logged in and have the right permissions:

                                                                                                                            • F1 - Logs/stats/debugging/low-level control
                                                                                                                            • F2 - Run, monitor and manage containers and server applications
                                                                                                                            • F3 - Android/iOS like system
                                                                                                                            • F4 - Keyboard driven developer desktop
                                                                                                                            • F5 - Creative desktop

                                                                                                                            There should be a way to seamlessly send (or copy+paste) contexts between these five. Working on writing a program in F4, or an application crashed in F3? Send the context to F1 to debug it there. Browsing a web page in F3 and want to share it with F4? Press something like ctrl+F4 to send the context there. To return to a previous context, list the context history and select it.

                                                                                                                            Just brainstorming here. :)

                                                                                                                            1. 2

                                                                                                                              A lot of comments focus on technical details, many of which I agree with (better IPC/messaging than what we have right now is the big one). But, we use our OS to do things - I’m more interested in the operating environment or shell rather than the low-level details. I recently asked: How might a future OS help us navigate the world?.

                                                                                                                              Hopefully it will enhance our abilities to:

                                                                                                                              • Process incoming information: ingest, relate
                                                                                                                              • Understand information: query, navigate
                                                                                                                              • Act in the world: schedule, plan, react

                                                                                                                              When I’m using a computer to work (vs idly browsing), I typically have several different applications open and I am running through some workflow between them. These apps don’t usually know much about each other. Somehow my OS doesn’t even know which apps I usually open together and how I place them on the screen! I figure that’s the least it should do for me.

                                                                                                                              A new shell should help me organize my tasks and coordinate applications. This requires a supporting ecosystem and incentives for apps to be less black-box, more toolbox - which many of the comments here are addressing. But what does it look like for the less technical end user?

                                                                                                                              1. 1

                                                                                                                                You might be interested in enso (nee Luna) and/or arcan-fe.

                                                                                                                              2. 2

                                                                                                                                Here are some half-baked ideas:

• Every action that can be taken (e.g. clicking on a menu item, hovering over a link) is understood as a semantic event by the OS, which can be triggered programmatically, logged, recorded, sent over the network etc.
                                                                                                                                • No mouse only interactions
                                                                                                                                • No keyboard only interactions
                                                                                                                                • Voice assistant that does not require loads of background CPU at odd times or send private things over the network. Voice assistant does speaker recognition, so I have a recorded transcript of everything said and who said it and when in rooms hooked into the system (Iron Man Jarvis)
                                                                                                                                • Every action that the user does results in a visual acknowledgement as an extremely high priority. If the action is queued behind other work, that’s fine, but the UI should indicate that within milliseconds of me instructing it to take the action.
                                                                                                                                • Extreme reliability - no crashes, absolutely no forced reboots.
• A core concept within the OS is streams of events. Apps can provide streams, so I can assign most email to a ‘low priority, batched’ stream, email from certain people to a higher-priority stream, and all IMs from my different messaging apps to an ‘interactive’ stream. These streams can be separately configured for how often they refresh, how they notify, and how they are viewed. I use built-in stream management and visualisation rather than 6 different IM programs, a couple of different email systems, a todo program, two calendar programs and an RSS reader. (A sketch of what configuring such streams could look like follows after this list.)
                                                                                                                                • there are standard, sane formats for vector graphics, bitmap graphics, plain text documents, rich text documents, audio, video that all relevant apps on the platform support as a minimum, even if they provide their own formats too.
                                                                                                                                • extensive clipboard history
                                                                                                                                • UI should avoid overlapping panes metaphor. Personally I like tabs and panes everywhere, e.g. like eclipse or something like that. If the OS is providing tabs, then there’s no need for every single application to implement them in their own special way.
• Filesystem has sane semantics and is reliable, transparently growable, snapshotable, features automatic deduplication, and has built-in arbitrary metadata storage and editing, including editable metadata templates for different types of files. Like Calibre, or some of the music libraries, but nice to use and for all kinds of files. Again, so we don’t have to keep reimplementing the functionality for ebooks, pictures, music, office documents, etc. If the metadata system is good enough, the metadata provides its own browsable hierarchy.
                                                                                                                                • Search is fast and works with content of files of all kinds and doesn’t involve periodically pegging the CPU to index. Pluggable modules allow searching for text in images or audio speech.
                                                                                                                                • Radial context menus of the type that are actually discoverable gesture systems
                                                                                                                                • most user interactions should be declarative rather than imperative when it comes to system configuration, software installation, etc. I should be able to take the description of one system and instantiate it on a different system.
                                                                                                                                • all applications and systems operate within a sandbox and can only access the resources that are provided to them (e.g. files, storage, displays, devices). All resources are spoofable. Deleting an application does not leave anything behind.
                                                                                                                                • Rather than connecting to cloud systems by default, it should provide the equivalent of a cloud system for my other devices to connect to should I want to - in order to share file storage, cpu, network, desktop, memory, etc. Keys are stored across multiple devices including my phone, so I can approve high security things from a different device.
                                                                                                                                • Absolutely no built in default phoning home, cloud services, etc in the basic install. Such things should require user interaction to install. Standard interfaces can be available for storage, backup services, url safety checking systems, that apps can plug into.
                                                                                                                                • Apps do not install their own update managers. The OS is responsible for keeping key packages up to date based on the users preferences.
                                                                                                                                • The OS understands that I might have different configurations, e.g. for work or for gaming, or for working on Project X etc, and when I switch between them, everything switches.
                                                                                                                                • I may have different privacy levels for different displays - certain apps should never display UIs on certain external displays, or during presentations.
                                                                                                                                • Multiuser. Not just sequentially, but concurrently. If I plug in two keyboards, I should be able to choose to assign one keyboard to one user running in one pane, and the other keyboard to the other user in the other pane.
                                                                                                                                • Excellent and easy to use internal sound mixing.
                                                                                                                                • virtual displays that can be panels in my UI
                                                                                                                                • built in programming language
                                                                                                                                • Most apps present a live view of a document that is always persisted, but can also have named snapshots stored rather than having the transient/persistent divide most apps use today.
                                                                                                                                • insane levels of undo/redo supported as trees (not chains that discard part when new steps are taken) throughout all apps and the OS
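As promised above, a minimal sketch of the streams-of-events idea, all names hypothetical: apps publish into named streams, and priority, batching and notification are configured per stream rather than per app.

```python
# A minimal sketch of per-stream configuration; Stream and the stream names
# are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Stream:
    name: str
    priority: str = "normal"        # "low" | "normal" | "interactive"
    batch_minutes: int = 0          # 0 == deliver immediately
    notify: bool = True
    events: list = field(default_factory=list)

streams = {
    "email.bulk": Stream("email.bulk", priority="low", batch_minutes=120, notify=False),
    "email.vip":  Stream("email.vip", priority="interactive"),
    "im.all":     Stream("im.all", priority="interactive"),
}

def publish(stream: str, event: str) -> None:
    s = streams[stream]
    s.events.append(event)
    if s.notify and s.batch_minutes == 0:
        print(f"[{s.priority}] {stream}: {event}")

publish("email.vip", "Mail from your manager")
publish("email.bulk", "Newsletter #233")        # silently batched for later
```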
                                                                                                                                1. 2

                                                                                                                                  I’m bad with coming up with concrete things, but there’s definitely a bunch of things I’d put as my goals.

• it should definitely feel like UNIX, but in a good way. Leave out the historical stuff that just can’t be fixed; without being sure how to describe it, a lot of it just feels right, where Windows just feels wrong. Give me pipes or something like this concept (but maybe with structure/types like PowerShell, done in a nice way, unlike PowerShell). Make it modular (unlike systemd), have a first-class programming language tied in (like sh, but good).
                                                                                                                                  • compartmentalize like Qubes OS, without the downsides of Linux (see above)
• have the possibilities of X, but without the limitations of Wayland (I know, it’s getting there, but making screenshots IS important)
                                                                                                                                  • some better permission system than rwx, but innate and not tacked on like setattr
                                                                                                                                  • hardware drivers shouldn’t be able to crash the system so easily, at least whenever possible
• stuff shouldn’t depend on “the OS kernel”, so maybe a microkernel acting like a dom0, with every user running their workspace in one or more kernels. Like virtualization, basically.
                                                                                                                                  • I like the concept of window managers, they provide just enough customizability when tacked on the graphics system, there shouldn’t be just one default (looking at you, Windows and OS X)
                                                                                                                                  • it should be open like Linux, but maybe try avoiding the huge fragmentation this time
• the package manager should be good. Maybe stealing the ideas of NixOS and the like would make sense: use hardlinks to define sets of known-working groups of applications, with a rollback method that costs only disk space
                                                                                                                                  • maybe don’t try to shoehorn power management into the kernel so deeply. I don’t really get why my couch laptop needs to run the same kernel build (like modules and features, version can be the same) as my 24/7 server. I just want the same user-level software to run on it.
• security is important, but some things make using an OS nearly unusable. There’s a reason most people disable AppArmor. I really like the Android permissions model. Maybe try this, but with more fine-grained permissions that can be grouped (just an example: allow filesystem access to all, to $HOME, to a list of folders, or to patterns like $HOME/foo/*.txt). A sketch of that follows below.
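A minimal sketch of that last point, with hypothetical names: the user approves grouped path patterns per application, and the runtime checks each access against them.

```python
# A minimal sketch of grouped, fine-grained filesystem grants; the app name
# and patterns are illustrative only.
import fnmatch
import os

HOME = os.path.expanduser("~")

grants = {
    "notes-app": ["$HOME/notes/*", "$HOME/foo/*.txt"],   # approved by the user
}

def allowed(app: str, path: str) -> bool:
    path = os.path.abspath(path)
    for pattern in grants.get(app, []):
        if fnmatch.fnmatch(path, pattern.replace("$HOME", HOME)):
            return True
    return False

print(allowed("notes-app", f"{HOME}/foo/todo.txt"))      # True
print(allowed("notes-app", f"{HOME}/.ssh/id_ed25519"))   # False
```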
                                                                                                                                  1. 2

                                                                                                                                    State and file systems are horrid.

                                                                                                                                    I have long wished directories and directory trees would go away and be replaced by a RDBMS like sqlite that you can query.

But state still flummoxes me.

                                                                                                                                    I really like the ideas behind datomic. Rich Hickey has talked extensively about it.
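A minimal sketch of the queryable-store idea using Python's sqlite3 (the schema is purely illustrative): instead of walking a directory tree, you ask the store questions.

```python
# A minimal sketch of "directories replaced by a queryable store".
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE files (name TEXT, kind TEXT, project TEXT, modified INT)")
db.executemany("INSERT INTO files VALUES (?, ?, ?, ?)", [
    ("budget", "spreadsheet", "home", 1700000000),
    ("draft", "document", "novel", 1700100000),
    ("cover", "image", "novel", 1700200000),
])

# "Everything in the novel project, newest first" - no paths involved.
for row in db.execute(
        "SELECT name, kind FROM files WHERE project = ? ORDER BY modified DESC",
        ("novel",)):
    print(row)
```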

                                                                                                                                    1. 2

                                                                                                                                      A few thoughts:

• Syscalls should be atomic and “just work”; the caller should not need to check that all bytes were actually written and, if not, try again (see the sketch after this list). There might be room for super-mega-low-level syscalls that behave like the current UNIX ones, but they should not be the defaults.
                                                                                                                                      • Filenames should be case-insensitive and allow only alphanumeric characters, dots, and dashes. They certainly do not need #, &, ;, spaces, and (maybe worst of all) newlines.
                                                                                                                                      • Choose between environment and arguments; they are too redundant to need both. Either way, from the shell they could use something like the current argument syntax, coming after the program name.
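Here is the sketch referred to in the first point, with a hypothetical write_all name: this is the retry loop that callers own today and that the proposal would push down into the default syscall, leaving the partial-write variant as the low-level escape hatch.

```python
# A minimal sketch of what "syscalls just work" would hide from callers
# (POSIX semantics assumed).
import os

def write_all(fd: int, data: bytes) -> None:
    view = memoryview(data)
    while view:
        written = os.write(fd, view)   # may write fewer bytes than asked
        view = view[written:]

fd = os.open("demo.txt", os.O_CREAT | os.O_WRONLY | os.O_TRUNC)
write_all(fd, b"x" * 1_000_000)
os.close(fd)
```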
                                                                                                                                      1. 4

                                                                                                                                        File names need spaces; they’re intended for humans, not regexes.

                                                                                                                                        1. 2

                                                                                                                                          Syscalls should be atomic and “just work”; the caller should not need to check that all bytes were actually written, and if not, try again. There might be room for super-mega-low-level syscalls that behave like the current UNIX ones, but they should not be the defaults.

ITS did it - the famed example from “worse is better” (the PC loser-ing discussion).

                                                                                                                                          1. 1
• Syscalls: How about just-in-time custom system calls? The whole concept is mindblowing, and it ended up running SunOS programs faster than SunOS on the same hardware. Too bad it was never released as source code.
                                                                                                                                            • Filename: Back in college, some friends and I were in the process of designing our dream operating system, and the filesystem I came up with not only would allow filenames to be arbitrary text, but anything—like an image, a sound, even itself (although that would probably be A Bad Thing). These days, I would probably just disallow control characters in a file name, and if we’re doing a whole new operating system, that assumption can be built in and not hacked about.
                                                                                                                                            • Environment vs arguments: I think environment variables have a place. They’re useful when you need to specify some value, but don’t want to do it via arguments, or it would be inconvenient each time. If a program wants to display some text on the console, it can use $PAGER to get the program to use. Or if you want to edit something, it can launch $EDITOR. And it works when your preferred shell might not support aliases or functions.
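A minimal sketch of that convention as programs use it today (helper names are hypothetical): the environment carries the user's standing preference, arguments carry the per-invocation data.

```python
# A minimal sketch of $PAGER / $EDITOR fallbacks.
import os
import shlex
import subprocess

def open_in_pager(path: str) -> None:
    pager = os.environ.get("PAGER", "less")        # user preference, with a fallback
    subprocess.run([*shlex.split(pager), path])    # $PAGER may carry its own flags

def edit(path: str) -> None:
    editor = os.environ.get("VISUAL") or os.environ.get("EDITOR", "vi")
    subprocess.run([*shlex.split(editor), path])
```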
                                                                                                                                            1. 1

                                                                                                                                              Data belongs to the user. Therefore, the user should be allowed to name their data whatever they like. The mundane nonsense of a \0 terminated string bubbling up and interfering with a user’s ownership of their own information is an insulting failure of abstraction.

                                                                                                                                            2. 2
                                                                                                                                              • Capabilities
                                                                                                                                              • Inspectable structures instead of blobs everywhere
                                                                                                                                              • Transactional storage
                                                                                                                                              • Anything relying on a filesystem (hierarchical path space) can be given a virtualised filesystem
                                                                                                                                              • Anything relying on a filesystem can perform watches without penalties
                                                                                                                                              • A protocol for terminal-like behaviour, but with typed data instead of plain text mixed with escape codes and the like
                                                                                                                                              • A fully realised graphics stack where commands and buffers can be moved across the network, so you can move GPUs closer to displays for multi-display bandwidth, and also have better network-transparency in the UI layer
                                                                                                                                              • Owner-control-oriented: absolute refusal to implement anything that allows media companies to disable functionality on the local device. if that means you can’t watch netflix, who gives a fuck? use another device for that.
                                                                                                                                              1. 2

                                                                                                                                                A clean standard framework for roles:

                                                                                                                                                • device owner
                                                                                                                                                • scoped device administrators
                                                                                                                                                • desktop user
                                                                                                                                                • services users

                                                                                                                                                The owner gets to delegate admin rights to commercial services providing support; the owner might be you, as the desktop user, or it might be a corporation, making sure that support desks get to do things.

                                                                                                                                                If hardware belongs to me, I get total control over what gets to happen with it. If hardware doesn’t belong to me, but I’m using it on behalf of someone else, then I need to play by their rules. There are a number of sociological power structure implications here; rather than bury our heads in the sand and pretend that there are no consequences, it would be good to build frameworks which are easy to comprehend, where people can know and acknowledge or refuse the trade-offs they accept.

                                                                                                                                                Once you have these separations of concern, it becomes easier to say that Provider A gets to issue updates to web-browsers and other stuff, but doesn’t get to pull arbitrary data from outside of the apps’ constrained storage spaces. Provider B gets to manage system configuration, so that I always get up-to-date copies of Unicode, timezone databases, all the other foundational data which most people never think of. Provider C runs encrypted backups, diffing across the wire. The file-system allows for efficient delta tracking of multiple point-in-time providers: like snapshots, but tracking on a provider basis where they last saw things, so that plumbing in full backups is easy. The provider C should only be able to request “give me the encrypted chunks of data since the last checkpoint”. Provider D, which might be your estate lawyer, gets to provide offsite credential storage, with Shamir-Secret-Sharing access to decryption keys so that in the event of your passing, or your house burning down, or whatever, the relevant people can get access to your access keys for decrypting your backups, authenticating to various services, etc. Providers A, B, and E all publish feeds of current PKIX trust anchors.

                                                                                                                                                All package management for software is built around a framework for provable history and authenticated updates, where the framework mostly provides the record of file checksums, and descriptions of vulnerable old versions. Both this tree, and subscribeable services providing data feeds of known vulnerabilities, can use a common language for describing that versions of product X before 2.13 are vulnerable to remote compromise over the network, unauthenticated, and so are too dangerous to run.

                                                                                                                                                The device owner then gets to say which subscription services have authority to do what, perhaps in combination when certain of them agree. Some services might be free, some might be provided at the national level with governments making various threats against people not at least taking the data, some might be from the company’s security team, some might be from your local computer repair shop with whom you have a maintenance contract.

                                                                                                                                                So when the image viewer has vulnerabilities, your data feeds let the system impose a cut-off date; once a system service (“render JPEG”) has a cut-off timestamp, then as long as you track the system ingress timestamp of data, it’s a cheap comparison to refuse to pass that data: you can still see your desktop, as much stuff as possible keeps working, but you lose ability to view new JPEGs until you install the newer version.
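A minimal sketch of that comparison, with hypothetical names: the subscribed vulnerability feeds set a cut-off per service, and the gate is a single check of the data's ingress timestamp against it.

```python
# A minimal sketch of the cut-off rule; service names and timestamps are
# illustrative only.
cutoffs = {"render-jpeg": 1_700_000_000}    # set from subscribed vulnerability feeds

def may_render(service: str, ingress_timestamp: int) -> bool:
    cutoff = cutoffs.get(service)
    return cutoff is None or ingress_timestamp < cutoff

print(may_render("render-jpeg", 1_699_000_000))   # data already on the system: True
print(may_render("render-jpeg", 1_700_000_500))   # arrived after the cut-off: False
```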

The providers intersect with the package trees because Mistakes Happen, and those providers help with recovery from them. When two different providers, in their data feeds, are telling your system “vendor A lost their release signing key, this is the new signing key, it’s inline”, that’s data for local policy to evaluate. Some providers will regurgitate crap, with malicious rumors spreading easily. Others will be high-value curation. You probably only trust the equivalent of today’s OS vendors with statements about changes in release signing keys, for instance.

                                                                                                                                                As well as package trees being frameworks, “local data distribution” should be an orthogonal framework. Content-addressable storage (git, etc) and strong checksums, combined with bittorrent, should make it much easier to have data cached and shared on the local network. I should not be retrieving phone/tablet updates 5 times over the wire, when the packages can be pulled once and shared locally, between devices directly. A TV device which is always on can then take part in that. By making this extensible enough, we ensure that stuff like OpenStreetMap data can then just be pulled once, at a particular revision, and still shared. With mobile devices, you then open the opportunity for person A who has good Internet at home to pull all the data for various services to their tablet, and then as they wander around other places they’ve designated to be allowed to pull data, the data can be shared to be freely available to others. The social center, church, youth group, whatever, all have a box on the local wifi which never pulls data over a paid thin Internet pipe, but provide for being local decentralized caches of lots of data, provided that there’s a signed tree over the data proving that this is worth keeping and giving some assurance that they’re not unwittingly hosting toxic data.

                                                                                                                                                Those providers from earlier? One data feed they might provide is “list of data feeds worth tracking, and their signing keys”. So the director of the youth group doesn’t need to know or care about Open Street Map, they just know that all the poor kids are safe exchanging data and they’ve taken all reasonable steps to choose to limit it to be not porn, not toxic, not pirated commercial software.

                                                                                                                                                These providers? Some will be community-based and free; some will be charging $2/year/household for just doing basic services. Some will be corporate-internal (security team, IT team). There are ways here to build sustainable local businesses.

There are more details around identity and access control, and how they trade off protection against your every movement being tracked versus sharing public data freely; these are important details, but this comment should be a sufficient overview. I haven’t thought through all of those details, but I certainly have thoughts on some of them.

Network effects from doing all the above well will help a new OS spread. Details of the OS, such as filesystems supporting pluggable point-in-time references for backups, for virus scanners, for content-indexing services (local device search), for automated home NAS duplication (not backups, just current-view replication for resiliency), etc., all support the higher-level goals.

And please, all configuration of the system should go through a config snapshot system, which implicitly makes every change a DVCS commit. Change a checkbox for a system feature? Okay, it might only be a single commit on a branch which only gets rolled up and pushed remotely at the end of the day, but we should always be able to make changes freely and roll forward and backward. Packages installed? Those changes are a commit too. Person clicks “remember this config state”? That’s a tag (the contents of the tag come from the GUI’s “notes about this state” field), and it gets pushed to whichever remotes matter. etckeeper is good, but it’s an after-the-fact band-aid when all configuration should be going through this to start with.
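
A rough sketch of that choke point, using plain git as the DVCS (the repository location and function names are invented for illustration):

```python
# Hypothetical sketch: every configuration change goes through one choke
# point that records it as a commit; "remember this config state" becomes a tag.
import subprocess
from pathlib import Path

CONFIG_REPO = Path("/config")  # invented location for the system config tree

def _git(*args: str) -> None:
    subprocess.run(["git", "-C", str(CONFIG_REPO), *args], check=True)

def apply_setting(relative_path: str, contents: str, message: str) -> None:
    """Write a config file and commit the change immediately."""
    (CONFIG_REPO / relative_path).write_text(contents)
    _git("add", relative_path)
    _git("commit", "-m", message)

def remember_state(tag_name: str, notes: str) -> None:
    """The GUI's 'remember this config state' button: an annotated tag."""
    _git("tag", "-a", tag_name, "-m", notes)
```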

                                                                                                                                                Turkey is served, I should stop here.

                                                                                                                                                1. 2

Datatypes. This was an amazing Amiga feature. Need to support a file format? Create or install a datatype for it. If your application supports datatypes and the format is relevant, it can now potentially import and export that format. This is how nearly 40-year-old machines support WebP and other modern formats.
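
In miniature, a datatype system might just be a registry keyed by format name, so installing a decoder teaches every cooperating application the format. A sketch with invented names, not the actual Amiga API:

```python
# Hypothetical sketch of a datatype registry: installing a new datatype
# (a decoder function) teaches every cooperating application the format.
from typing import Callable

# format name -> function turning raw bytes into some common in-memory form
DATATYPES: dict[str, Callable[[bytes], object]] = {}

def install_datatype(format_name: str, decoder: Callable[[bytes], object]) -> None:
    DATATYPES[format_name] = decoder

def load(format_name: str, raw: bytes) -> object:
    try:
        return DATATYPES[format_name](raw)
    except KeyError:
        raise ValueError(f"no datatype installed for {format_name!r}")

# A newly installed "webp" datatype would immediately be usable by any
# application that loads images through this registry.
install_datatype("plaintext", lambda raw: raw.decode("utf-8"))
print(load("plaintext", b"hello"))
```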

I’d like to do away with the desktop metaphor. The desktop was a creation of Xerox PARC, which aimed everything at doing what a secretary did. We have other visual interfaces for other form factors (such as tablets) that are less desktop-y, but I’d like to see experiments in interactive interfaces. Specifically, I’d love to see the return of knobs, switches, and twiddly bits with haptic feedback. I’d like to see some exploration of what a keyboard could be, rather than this monoculture of frozen typewriter-esque stuff we see today.

Finally, I’d like to see longevity over planned obsolescence. We build ourselves into traps supporting backwards compatibility. I want to see forward compatibility engineered in. I want to see signs of thought around how an OS version can be used 20, 30, 40, even 100 years from now. I want to see heirloom operating systems - and I don’t mean heirloom Unix, I mean proper, usable operating systems that can be handed down. I think the first of these is possibly CP/M: no longer developed by the original authors, but still in use. I also think AmigaOS falls into this category, as does Atari TOS (but less so thanks to the wonderful work on MiNT).

                                                                                                                                                  1. 2

I’d like an OS that can run a container or a VM (running itself or any other OS) as a simple kernel module. Actually, that idea is not new; L4 implements something like it. Of course, if a module crashes, the OS should handle it gracefully.

                                                                                                                                                    I don’t know if it’s feasible (outside academic research), but the idea seems elegant.

It’d also be nice to provide native primitives towards Rob Pike’s dream:

                                                                                                                                                    I want no local storage anywhere near me other than maybe caches. No disks, no state, my world entirely in the network. Storage needs to be backed up and maintained, which should be someone else’s problem, one I’m happy to pay to have them solve. Also, storage on one machine means that machine is different from another machine.

                                                                                                                                                    1. 2

Some kind of Forth-ish OS with UI, GPU, TCP/IP, etc. stacks… but with a word that spawns another copy of the OS in a new context (which, in the UI environment, could be rendered inside a window), allowing all kinds of Forth applications to effectively run simultaneously on the same hardware while seemingly avoiding issues of threading and vocabulary pollution altogether. The lower-level, initial OS should provide a system for executing the higher levels transparently, doing the computation and passing the result back up, avoiding any kind of emulation; more of a passthrough. This would be fun and cool in some way.
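
A toy illustration of the “new context” idea, with Python standing in for Forth (none of this is real Forth, just the shape of the isolation):

```python
# Hypothetical sketch (Python standing in for Forth): each "copy of the OS"
# is just a fresh context with its own stack and word dictionary, so
# applications never pollute each other's vocabulary.
from __future__ import annotations

from typing import Callable

class Context:
    def __init__(self) -> None:
        self.stack: list[int] = []
        self.words: dict[str, Callable[[Context], None]] = {}

    def define(self, name: str, word: Callable[[Context], None]) -> None:
        self.words[name] = word

    def run(self, name: str) -> None:
        self.words[name](self)

def spawn(parent: Context) -> Context:
    """The word that 'spawns another copy of the OS': a brand-new context.
    Nothing is shared with the parent, so there is no vocabulary pollution."""
    return Context()

root = Context()
app_a = spawn(root)
app_b = spawn(root)
app_a.define("double", lambda ctx: ctx.stack.append(ctx.stack.pop() * 2))
app_a.stack.append(21)
app_a.run("double")   # app_a's stack is now [42]
# app_b.run("double") would raise KeyError: "double" lives only in app_a.
```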

                                                                                                                                                      1. 1
• A concept for quick automated post-install setup, either an FS overlay or an imperative approach. As much as I wish a purely declarative way would do the trick, I don’t think it really does. For some time I have been wondering if something inspired by database migrations would work (see the sketch after this list).
                                                                                                                                                        • Something like dtrace, if you want to see what’s going on.
• Every userland application using some form of sandboxing, like pledge/unveil (or similar), from the ground up.
• A sane, thought-through concept for audio. People spend way too much time fighting with audio that only “sometimes” works in video conferences.
• A single, obvious way of configuring everything operating-system related (e.g. not /proc, sysctls AND kernel arguments). The more central, and the fewer mechanisms, the better.
• Maybe a common metrics interface. It feels like this gets re-invented at every step. There are reasons for it not working the same everywhere, but maybe a good common interface could be found.
                                                                                                                                                        • Maybe static linking everywhere
                                                                                                                                                        • A good way to look up documentation (man style?)
• One dedicated tool for each job, not five tools for basically the same thing
• A standard for common output (something that plays nicely with awk?)
• Slim namespacing, so I can easily say “I want to use my browser/application/setup/…, but everything is temporary and gone when I close it”. Maybe this could be used in a similar way to the pledge-alike above?
                                                                                                                                                        • Something solving the same itch as Boot Environments in FreeBSD or Solaris. For ease of mind.
• An FS hierarchy (it doesn’t strictly have to be hierarchical) that allows for “data only” backups (e.g. no binaries and assets).
  • This would benefit from some form of point-in-time (PIT) snapshot feature in the FS, especially with things like databases.
• Clear separation for software that isn’t managed (by the OS/package manager)
                                                                                                                                                        • Working development tools out of the box
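
On the migration-inspired setup idea from the first bullet: a rough sketch of what that might look like, with invented paths and placeholder actions:

```python
# Hypothetical sketch of migration-style post-install setup: numbered steps
# are applied in order and the highest applied number is recorded, so
# re-running the tool only applies what is new (roughly how database
# migrations behave).
from pathlib import Path

STATE_FILE = Path("/var/lib/setup/last_applied")  # invented location

MIGRATIONS = [  # (number, description, action) - actions are placeholders
    (1, "create admin user",     lambda: print("useradd ...")),
    (2, "enable ssh service",    lambda: print("service enable sshd")),
    (3, "install base packages", lambda: print("pkg install ...")),
]

def run_pending() -> None:
    last = int(STATE_FILE.read_text()) if STATE_FILE.exists() else 0
    for number, description, action in MIGRATIONS:
        if number > last:
            print(f"applying {number}: {description}")
            action()
            STATE_FILE.parent.mkdir(parents=True, exist_ok=True)
            STATE_FILE.write_text(str(number))

run_pending()
```
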
                                                                                                                                                        1. 1

I think the OS should be ISC licensed. It must have features like reproducible builds and rump kernels; preferred languages: Ada, Nim.

But most of the things I like about an OS are already in NetBSD, so I want similar features.

                                                                                                                                                          1. 1

                                                                                                                                                            I don’t have a cohesive whole-OS design, but some ideas I’d like to see explored:

                                                                                                                                                            • a unified memory/caching graph, from L1 cache through RAM, disk, and all the way to offline deep-freeze storage
• a filesystem API that removes the restriction that some names represent bytestreams and some names represent a set of child nodes: give every node a set of children, where one child might be “last modified time” and another might be “resource fork” (a small sketch follows this list)
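
A small sketch of the second idea: if every node is just a bag of named children, file contents and metadata stop being special cases (names and types here are invented):

```python
# Hypothetical sketch of a filesystem where every name is a node with
# children: file contents, "last modified time", and "resource fork" are
# all just children rather than special-cased metadata.
from __future__ import annotations

class Node:
    def __init__(self, data: bytes = b"") -> None:
        self.data = data                     # optional leaf payload
        self.children: dict[str, Node] = {}  # every node may have children

    def child(self, name: str) -> Node:
        return self.children.setdefault(name, Node())

root = Node()
photo = root.child("photos").child("cat.jpg")
photo.child("contents").data = b"\xff\xd8..."         # the bytestream itself
photo.child("last modified time").data = b"2024-05-01T12:00:00Z"
photo.child("resource fork").data = b"thumbnail bytes"
# Directory listing and metadata access are now the same operation:
print(sorted(photo.children))  # ['contents', 'last modified time', 'resource fork']
```
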
                                                                                                                                                            1. 1

                                                                                                                                                              A microkernel.

                                                                                                                                                              1. 1

                                                                                                                                                                Multiplayer by design. Multiple people with multiple keyboards and mice working together in all programs. I am tired of being alone.