1. 44

  2. 23

    Kinda late on UNIX bashing bandwagon :)

    Also, Windows owes more of its legacy to VMS.

    1. 10

      It does, but users don’t use any of the VMSy goodness in Windows: To them it’s just another shitty UNIX clone, with everything being a file or a program (which is also a file). I think that’s the point.

      Programmers rarely even use the VMSy goodness, especially if they also want their stuff to work on Mac. They treat Windows as a kind of retarded UNIX cousin (which is a shame because the API is better; IOCP et al)

      Sysadmins often struggle with Windows because of all the things underneath that aren’t files.

      Message/Object operating systems are interesting, but for the most part (OS/2, BeOS, QNX) they degraded into this “everything is a file” nonsense…

      Until they got rid of the shared filesystem: iOS finally required messaging for applications to communicate on their own, and while it’s been rocky, it’s starting to paint a picture to the next generation who will finally make an operating system without files.

      1. 10

        If we’re talking user experience, it’s more a CP/M clone than anything. Generations later, Windows still smells of COMMAND.COM.

        1. 6

          yes, the bowels are VMS, the visible stuff going out is CP/M

          1. 4

            Bowels is a good metaphor. There’s good stuff in Windows, but you’ve got to put on a shoulder length glove and grab a vat of crisco before you can find any of it.

        2. 10

          I think you’re being a little bit harsh. End-users definitely don’t grok the VMSy goodness; I agree. And maybe the majority of developers don’t, either (though I doubt the majority of Linux devs grok journald v. syslogs, really understand how to use /proc, grok Linux namespaces, etc.). But I’ve worked with enough Windows shops to promise you that a reasonable number of Windows developers do get the difference.

          That said, I have a half-finished book from a couple years ago, tentatively called Windows Is Not Linux, which dove into a lot of the, “okay, I know you want to do $x because that’s how you did it on Linux, and doing $x on Windows stinks, so you think Windows stinks, but let me walk you through $y and explain to you why it’s at least as good as the Linux way even though it’s different,” specifically because I got fed up with devs saying Windows was awful when they didn’t get how to use it. Things in that bucket included not remoting in to do syswork (use WMI/WinRM), not doing raw text munging unless you actually have to (COM from VBScript/PowerShell are your friends), adapting to the UAC model v. the sudo model, etc. The Windows way can actually be very nice, but untraining habits is indeed hard.

          1. 6

            I don’t disagree with any of that (except maybe that I’m being harsh), but if you parse what I’m saying as “Windows is awful”, it’s because you’ve read my indelicate tone instead of my words.

            The point of the article is that those differences are superficial, and mean so very little to the mental model of use and implementation as to make no difference: IOCP is just threads and epoll, and epoll is just IOCP and fifos. Yes, IOCP is better, but I desperately want to see something new in how I use an operating system.
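
            To make the equivalence concrete, here’s a minimal sketch of both loops (my illustration, not the article’s; the handles and error handling are elided):

                /* Sketch: readiness (epoll) vs completion (IOCP) event loops. */

                #ifdef __linux__
                #include <sys/epoll.h>

                void unix_loop(int sock) {
                    int ep = epoll_create1(0);
                    struct epoll_event ev = { .events = EPOLLIN, .data.fd = sock };
                    epoll_ctl(ep, EPOLL_CTL_ADD, sock, &ev);
                    for (;;) {
                        struct epoll_event ready;
                        epoll_wait(ep, &ready, 1, -1);  /* "a fd is ready"... */
                        /* ...now do the read()/write() yourself */
                    }
                }
                #else
                #include <windows.h>

                void windows_loop(HANDLE file) {
                    HANDLE iocp = CreateIoCompletionPort(file, NULL, 0, 0);
                    for (;;) {
                        DWORD n; ULONG_PTR key; OVERLAPPED *ov;
                        GetQueuedCompletionStatus(iocp, &n, &key, &ov, INFINITE);
                        /* "an async read/write you issued earlier finished" */
                    }
                }
                #endif

            Same shape, different tense: epoll tells you what you may now do, IOCP tells you what has been done.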

            I’ve been doing things roughly the same way for nearly four decades, despite the fact that I’ve done Microsoft/IBM for a decade, Linux since Slackware 1.1 (Unix since tapes of SCO), Common Lisp (of all things) for a decade, and OSX for nearly that long. They’re all the same, and that point is painfully clear to anyone who has actually used these things at a high level: I edit files, I copy files, I run programs. Huzzah.

            But: it’s also obvious to me, having gone into the bowels of these systems as well: I wrote winback, which was for a long time the only tool for doing online Windows backups of standalone Exchange servers and domain controllers; I’m the author of (perhaps) the fastest Linux webserver; I wrote ml, a Linux emulator for OSX; I worked on ECL, principally adding CL exceptions to streams and the Slime implementation. And so on.

            So: I understand what you mean when you say Windows is not Linux, but I also understand what the author means when he says they’re the same.

            1. 2

              That actually makes a ton of sense. Can I ask what would qualify as meaningfully different for you? Oberon, maybe? Or a version of Windows where WinRT was front-and-center from the kernel level upwards?

              1. 2

                I didn’t use the term “meaningfully different”, so I might be interpreting your question too broadly.

                When I used VMS, I never “made a backup” before I changed a file. That’s really quite powerful.

                The Canon Cat had “pages” you would scroll through. Like other Forth environments, if you named any of your blocks/documents it was so you could search [leap] for them, not because you had a hierarchy.

                I also think containers are very interesting. The encapsulation of the application seems to massively change the way we use them. Like the iOS example, they don’t seem to need “files” since the files live inside the container/app. This poses some risk for data portability. There are other problems.

                I never used Oberon or WinRT enough to feel as comfortable commenting about them as I do about some of these other examples.

            2. 2

              If it’s any motivation I would love to read this book.

              Do you know of any books or posts I could read in the meantime? I’m very open to the idea that Windows is nice if you know which tools and mental models to use, but kind of by definition I’m not sure what to Google to find them :)

              1. 4

                I’ve just been hesitant because I worked in management for two years after I started the book (meaning my information atrophied), and now I don’t work with Windows very much. So, unfortunately, I don’t immediately have a great suggestion for you. Yeah, you could read Windows Internals 6, which is what I did when I was working on the book, but that’s 2000+ pages, and most of it honestly isn’t relevant for a normal developer.

                That said, if you’ve got specific questions, I’d love to hear them. Maybe there’s a tl;dr blog post hiding in them, where I could salvage some of my work without completing the entire book.

              2. 1

                I, for one, would pay for your “Windows is not Linux” book. I’ve been developing for Windows for about 15 years, but I’m sure there are still things I could learn from such a book.

              3. 7

                but users don’t use any of the VMSy goodness in Windows: To them it’s just another shitty UNIX clone, with everything being a file or a program (which is also a file). I think that’s the point.

                Most users don’t know anything about UNIX and can’t use it. On the UI side, pre-NT Windows was a Mac knockoff mixed with MSDOS, which was based on a DOS they got from a third party. Microsoft even developed software for Apple in that time. Microsoft’s own users had previously learned the MSDOS menus and some commands. Then, they had a nifty UI like Apple’s running on MSDOS. Then, Microsoft worked with IBM to make a new OS/2 with its philosophy. Then, Microsoft hired the core OpenVMS team, made a new kernel, and built a new GUI with wizard-based configuration of services, vs the command line, text, and pipes as in UNIX.

                So historically, internally, layperson-facing, and administration-wise, Windows is a totally different thing than UNIX. Hence the difficulty moving Windows users to UNIX when it’s a terminal OS with X Windows, vs something Windows-style like Gnome or KDE.

                You’re also overstating “everything is a file” by conflating OSes that merely store programs and data in files with those, like UNIX or Plan 9, that use the file metaphor for about everything. It’s a false equivalence: from what I remember, you don’t get your running processes in Windows by reading the filesystem, since Windows doesn’t use that metaphor or API. It’s object-based, with API calls specific to different categories. A different philosophy.
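
                To illustrate with a sketch of my own (error handling elided): listing processes on Windows means asking the kernel for a snapshot object via the Toolhelp API and walking it, not reading a /proc directory.

                    /* Sketch: process enumeration as an object/API affair.
                       Assumes an ANSI build, where szExeFile is a char[]. */
                    #include <windows.h>
                    #include <tlhelp32.h>
                    #include <stdio.h>

                    int main(void) {
                        HANDLE snap = CreateToolhelp32Snapshot(TH32CS_SNAPPROCESS, 0);
                        PROCESSENTRY32 pe = { .dwSize = sizeof pe };
                        if (Process32First(snap, &pe)) {
                            do {
                                printf("%5lu %s\n", pe.th32ProcessID, pe.szExeFile);
                            } while (Process32Next(snap, &pe));
                        }
                        CloseHandle(snap);
                        return 0;
                    }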

                1. 3

                  Bitsavers has some internal emails from DEC at the time of David Cutler’s departure.

                  I have linked to a few of them.

                  David Cutler’s team at DECwest was working on Mica (an operating system) for PRISM (a RISC CPU architecture). PRISM was canceled in June of 1988. Cutler resigned in August of 1988, and 8 other DECwest alumni followed him to Microsoft.

              4. 5

                I have my paper copy of The Unix Hater’s Handbook always close at hand (although I’m missing the barf bag, sad to say).

                1. 5

                  I always wanted to ask the author of The Unix Hater’s Handbook if he’s using Mac OS X

                  8~)

                  1. 6

                    It was edited by Simson Garfinkel, who co-wrote Building Cocoa Applications: a step-by-step guide. Which was sort of a “port” of Nextstep Programming Step One: object-oriented applications

                    Or, in other words, “yes” :)

                    1. 2

                      Add me to the list of those curious about what they ended up using. The hoaxers behind UNIX admitted they’d been coding in Pascal on Macs. Maybe that’s what the rest were using, if not Common LISP on Macs.

                  2. 7

                    Beat me to it. The author is full of it when saying Windows is built on UNIX. Microsoft stealing, cloning, and improving OpenVMS into Windows NT is described here. This makes the Linux zealots’ parodies about a VMS desktop funnier, given that one destroyed Linux in the desktop market. So we have VMS and UNIX family trees running in parallel, with the UNIX tree having more branches.

                    1. 4

                      The author doesn’t say Windows is built on Unix.

                      1. 5

                        “we are forced to choose from: Windows, Apple, Other (which I shall refer to as “Linux” despite it technically being more specific). All of these are built around the same foundational concepts, those of Unix.”

                        Says it’s built on the foundational concepts of UNIX. It’s built on a combo of DOS, OS/2, OpenVMS, and Microsoft concepts they called the NT kernel. The only thing UNIX-like was the networking stack they got from Spider Systems. They’ve since rewritten their networking stack from what I heard.

                        1. 4

                          Says it’s built on the foundational concepts of UNIX.

                          I don’t see any reason to disagree with that.

                          The only thing UNIX-like …

                          I don’t think that’s a helpful definition of “unix-like”.

                          It’s got files. Everything is a file. Windows might even be a better UNIX than Linux (since UNC)

                          Cutler might not have liked UNIX very much, but Windows NT ended up UNIX anyway because none of that VMS-goodness (Versions, types, streams, clusters) ended up in the hands of Users.

                          1. 10

                            It’s got files. Everything is a file.

                            Windows is object-based. It does have files, which are just another object. The files come from MULTICS, which UNIX also copied in some ways; even the name was a play on it: UNICS. I think Titan invented the access permissions. The internal model, with its subsystems, was more like microkernel designs running OS emulators as processes. They did their own thing for most of the rest with the Win32 API and registry. Again, not quite how a UNIX programming guide teaches you to do things. They got clustering later, too, with them and Oracle using the distributed lock approach from OpenVMS.

                            Windows and UNIX are very different in their approach to architecture. They’re different in how a developer is expected to build individual apps and compose them. It wasn’t even developed on UNIX: they used OS/2 workstations for that. There’s no reason to say Windows is grounded in the UNIX philosophy. It’s a lie.

                            “Windows NT ended up UNIX anyway because none of that VMS-goodness (Versions, types, streams, clusters) ended up in the hands of Users.”

                            I don’t know what you’re saying here. Neither VMS nor Windows teams intended to do anything for UNIX users. They took their own path except for networking for obvious reasons. UNIX users actively resisted Microsoft tech, too. Especially BSD and Linux users that often hated them. They’d reflexively do the opposite of Microsoft except when making knockoffs of their key products like Office to get desktop users.

                            1. 3

                              Windows is object-based.

                              Consider what methods of that “object” a program like Microsoft Word must be calling besides “ReadFile” and “WriteFile”.

                              That the kernel supports more methods is completely pointless. Users don’t interact with it. Programmers avoid it. Sysadmins don’t understand it and get it wrong.

                              I don’t know what you’re saying here.

                              That is clear, and yet you’re insisting I’m wrong.

                              1. 3

                                Except, that’s completely wrong.

                                I just started Word and dumped a summary of its open handles by object type:

                                C:\WINDOWS\system32>handle -s -p WinWord.exe
                                
                                Nthandle v4.11 - Handle viewer
                                Copyright (C) 1997-2017 Mark Russinovich
                                Sysinternals - www.sysinternals.com
                                
                                Handle type summary:
                                  ALPC Port       : 33
                                  Desktop         : 1
                                  Directory       : 3
                                  DxgkSharedResource: 2
                                  DxgkSharedSyncObject: 1
                                  EtwRegistration : 324
                                  Event           : 431
                                  File            : 75
                                  IoCompletion    : 66
                                  IoCompletionReserve: 1
                                  IRTimer         : 8
                                  Key             : 171
                                  KeyedEvent      : 24
                                  Mutant          : 32
                                  Process         : 2
                                  Section         : 67
                                  Semaphore       : 108
                                  Thread          : 138
                                  Timer           : 7
                                  Token           : 3
                                  TpWorkerFactory : 4
                                  WaitCompletionPacket: 36
                                  WindowStation   : 2
                                Total handles: 1539
                                

                                Each of these types is a distinct kernel object with its own characteristics and semantics. And yes, you do create and interact with them from user-space. Some of those will be abstracted by lower-level APIs, but many are directly created and managed by the application. You’ll note the number of open “files” is a very small minority of the total number of open handles.

                                Simple examples of non-file object types commonly manipulated from user-land include Mutants (CreateMutex) and Semaphores (CreateSemaphore). Perhaps the most prominent example is manipulating the Windows Registry; this entails opening “Key” objects, which per above are entirely distinct from regular files. See the MSDN Registry Functions reference.
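
                                For example, a minimal sketch (the object names are made up) of creating and using those two object types from user-land:

                                    /* Sketch: two non-file kernel objects, created and
                                       used directly from user-space. Names illustrative. */
                                    #include <windows.h>

                                    int main(void) {
                                        HANDLE mtx = CreateMutexW(NULL, FALSE, L"Local\\DemoMutex");
                                        HANDLE sem = CreateSemaphoreW(NULL, 0, 10, L"Local\\DemoSemaphore");

                                        WaitForSingleObject(mtx, INFINITE);  /* acquire the Mutant */
                                        ReleaseMutex(mtx);                   /* release it */
                                        ReleaseSemaphore(sem, 1, NULL);      /* signal the Semaphore */

                                        CloseHandle(sem);
                                        CloseHandle(mtx);
                                        return 0;
                                    }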

                                1. 0

                                  None of these objects can exist on a disk; they cannot persist beyond shutdown, and do not have any representation beyond their instantaneous in-memory instance. When someone wants an “EtwRegistration” they’re creating it again and again.

                                  Did you even read the article? Or are you trolling?

                                  1. 3

                                    None of these objects can exist on a disk; they cannot persist beyond shutdown, and do not have any representation beyond their instantaneous in-memory instance. When someone wants an “EtwRegistration” they’re creating it again and again.

                                    Key objects do typically exist on disk. Granted, the underlying datastore for the Registry is a series of files, but you never directly manipulate those files. In the same sense that you may ask for C:\whatever.txt, you may ask for HKLM:\whatever. We need to somehow isolate the different persisted data streams, and that isolation mechanism is a file. That doesn’t mean you have to directly manipulate those files if the operating system provides higher-level abstractions. What exactly are you after?

                                    From the article:

                                    But in Unix land, this is a taboo. Binary files are opaque, say the Unix ideologues. They are hard to read and write. Instead, we use Text Files, for it is surely the path of true righteousness we have taken.

                                    The Windows Registry, which is a core part of the operating system, is completely counter to this. It’s a bunch of large binary files, precisely because Microsoft recognised storing all that configuration data in plain text files would be completely impractical. So you don’t open a text file and write to it, you open a Registry key, and store data in it using one of many predefined data types (REG_DWORD, etc…).
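
                                    A minimal sketch of that flow (the key path and value name here are hypothetical):

                                        /* Sketch: reading a typed (REG_DWORD) value from the
                                           Registry. Key path and value name are made up. */
                                        #include <windows.h>
                                        #include <stdio.h>

                                        int main(void) {
                                            HKEY key;
                                            DWORD value = 0, size = sizeof value, type = 0;
                                            if (RegOpenKeyExW(HKEY_LOCAL_MACHINE,
                                                              L"SOFTWARE\\ExampleVendor\\ExampleApp",
                                                              0, KEY_READ, &key) == ERROR_SUCCESS) {
                                                if (RegQueryValueExW(key, L"ExampleSetting", NULL, &type,
                                                                     (BYTE *)&value, &size) == ERROR_SUCCESS
                                                    && type == REG_DWORD)
                                                    printf("setting = %lu\n", value);
                                                RegCloseKey(key);
                                            }
                                            return 0;
                                        }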

                                    Did you even read the article? Or are you trolling?

                                    It sounds like you’re not interested in a constructive and respectful dialogue. If you are, you should work on your approach.

                                    1. -3

                                      What exactly are you after?

                                      Just go read the article.

                                      It’s about whether basing our entire interactions with a computer on a specific reduction of verbs (read and write) is really exploring what the operating system can do for us.

                                      That is a very interesting subject to me.

                                      Some idiot bought into the idea that Windows is basically “built on Unix”, then back-pedalled it to be about whether it was based on the same “foundational” concepts, then chose to narrowly and uniquely interpret “foundational” in a very different way than the article.

                                      Yes, Windows has domains and registries and lots of directory services, but they all have the exact same “file” semantics.

                                      But now you’re responding to this strange interpretation of “foundational” because you didn’t read the article either. Or you’re a troll. I’m not sure which yet.

                                      Read the article. It’s not well written but it’s a very interesting idea.

                                      Each of these types is a distinct kernel object with its own characteristics and semantics

                                      Why do you bring this up in response to whether Windows is basically the same as Unix? Unix has lots of different kernel “types” all backed by “handles”. Some operations and semantics are shared by handles of different types, but some are distinct.

                                      I don’t understand why you think this is important at all.

                                      It sounds like you’re not interested in a constructive and respectful dialogue. If you are, you should work on your approach.

                                      Do you often jump into the middle of a conversation with “Except, that’s completely wrong?”

                                      Or are you only an asshole on the Internet?

                                      1. 4

                                        Or are you only an asshole on the Internet?

                                        I’m not in the habit of calling people “asshole” anywhere, Internet or otherwise. You’d honestly be more persuasive if you just made your points without the nasty attacks. I’ll leave it at that.

                              2. 2

                                networking for obvious reasons

                                Them being what? Is the BSD socket API really the ultimate networking abstraction?

                                1. 7

                                  The TCP/IP protocols were part of a UNIX. AT&T gave UNIX away for free. They spread together with early applications being built on UNIX. Anyone reusing the protocols or code will inherit some of what UNIX folks were doing. They were also the most mature networking stacks for that reason. It’s why re-using BSD stacks was popular among proprietary vendors. On top of the licensing.

                                  Edit: Tried to Google you a source talking about this. I found one that mentions it.

                    2. 20

                      I was in the middle of a long-winded comment, but I don’t have the time to point out all the problems. Quite simply, the article doesn’t make any sense. There is a term for this: it’s not even wrong.

                      In order to be “wrong”, arguments have to be based in some sort of reality, but just reach the incorrect conclusion. None of the author’s arguments make any sense, so the article cannot be judged as being “right” or “wrong”, because it’s not based in reality.

                      1. 7

                        I somewhat agree with this but maybe not for the same reasons. Many of the points ignore obvious reasons that are still valid today. Such as:

                        Regarding print statements:

                        Oh, wait a minute, this is still the only way we can get anything done in the world of programming!

                        But what about debuggers? Tcpdump-like tools? These are much better than debugging via printing.

                        Our ancestors bequeathed to us cd, ps, mkdir, chown, grep, awk, fsck, nroff, malloc, stdio, creat, ioctl, errno, EBADF, ESRCH, ENXIO, SIGSEGV, et al.

                        What technical field is not besieged by jargon whose meaning is not obvious to the unfamiliar? Likewise, all of these can be aliased to more obvious terms, anything you desire, for only the cost of character space in your code or configs. So why don’t we see more of this? The author suggests it is because keyboards were physically painful to type on in the past.

                        You know what hurts? RSI. Silly point aside, the time savings are absolutely still relevant today.

                        The big problem with text-as-ASCII (or Unicode, or whatever) is that it only has structure for the humans who read it, not the computer.

                        Is this really a problem? Haven’t we solved inefficiencies relating to this with various types of preprocessing such as, well, code preprocessing? Not to mention compilation? Code aside, is the minimal overhead of parsing a config file when your daemon happens to restart that big? Maybe I’m being pedantic but aren’t scripting languages generally “slow” to execute yet quick to write, and the inverse for binary programs?

                        I feel that the author ignores obvious reasoning (obvious in that the alternative, while not sugar-coated, is certainly more modern than 70s Unix). This seems to fit every point in this article and, at the risk of ad hominem, paints a picture of a naive computer enthusiast.

                      2. 11

                        Recall what happened: we decided to represent characters of text, including punctuation and whitespace, using fixed numbers (a practical, though rather dubious, decision; again, a product of early computing)

                        Around here is where he lost me: how else are we supposed to do it? As he admits, computers just manipulate bits; if you want to represent anything else, you’ll need a mapping from those bits to your character set.

                        1. 2

                          There’s no term for it, but the alternative is structured data: a “binary” AST with data at the leaves, and if that data is text it has to come with an explicit mention of the character table.
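
                          As a sketch of what I mean (my own shape, not any standard):

                              /* Sketch: structured data instead of flat text. */
                              typedef enum { NODE_LIST, NODE_INT, NODE_TEXT } NodeKind;

                              typedef struct Node {
                                  NodeKind kind;
                                  union {
                                      struct { struct Node **items; int count; } list;
                                      long integer;
                                      struct {
                                          const unsigned char *bytes;
                                          int charset;  /* explicit character table */
                                      } text;
                                  } as;
                              } Node;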

                          1. 1

                            Everything just becomes a single node containing a blob of text again.

                            Messaging actually shows promise, as does networking, but I think the former is more likely than the latter (despite much more energy being put into the latter by researchers)

                            1. 2

                              Why would everything become a single node with a blob of text? This is evidently true when it would have to work with systems that were primarily made to deal with unstructured text files, but that’s the circular argument addressed in the article.

                              Messaging and networking are different solutions for problems not mentioned in the article. Not sure how it would help with the “build and destroy the world” issue.

                              1. 6

                                We get blobs of text because of serialisation, something we need to do to stream out to spinning platters made of rust, or to beam waves over a wire.

                                We often prefer to choose the serialisation format ourselves, because a special-purpose format is always faster than the general-purpose one you might try for baking down that “AST”: remember, even if we have no other opinion we can choose from S-expressions and XML and JSON and ASN.1 and PkZip and so on, each with different disadvantages.

                                And once you serialise, you might as well freeze them someplace. Maybe a hierarchy. This thing is called a file system, and those frozen blobs are called files.

                                Messaging and networking are a way to build a platform that doesn’t have a filesystem of files. They aren’t mentioned in the article, but then: no solutions are really offered by the article.

                                1. 1

                                  Messaging and networking are a way to build a platform that doesn’t have a filesystem of files. They aren’t mentioned in the article, but then: no solutions are really offered by the article.

                                  This is interesting, what do you mean by using messaging as a substitute for a filesystem, what would that look like?

                                  1. 2

                                    iOS does something like this (awkwardly; through a blessed but ad hoc mechanism). You send a message to another app and ask it to send a message back to you.

                                    One obvious use is storing things that we used to store in files, like photos and preferences and music, but we can also use it to authenticate (who are you), to authorise (do you allow this), to purchase, and perhaps for other things.

                                    Urbit is exploring some of these themes on a much grander scale, but it is at this point so much less a “complete” operating environment that it can’t yet teach us what computing will be like in this way.

                                    HDCP/HDMI has another (limited) use of this where you can play a (protected) video at an x/y/w/h without revealing the bits.

                                    The Mother of All Demos hinted at some of this with their collaborative single user super computer.

                                    And so on.

                        2. 11

                          This is nonsense: there are basically two popular OS families (Windows and UNIX-like), and both are able to run software for the other (wine and WSL).

                          Nor is macOS/iOS written in Objective-C as claimed; it’s written mostly in C with a little asm (like all UNIX-like systems), with the driver subsystem IOKit written in C++.

                          Just because something dates from the 70s and is “old” doesn’t mean it’s “obsolete”. It can also be basically “done right”. I don’t think there has been much progress in OS design recently, although things like capabilities and micro-kernels are interesting and there have been developments in new file-systems.

                          1. 2

                            Good point about the main two having compatibility layers. As far as OS design goes, there’s always progress happening in CompSci and FOSS. There’s just rarely any attempt to port whatever they come up with into mainstream OSes or to make it a mainstream OS. It can take a lot of work to do that. GenodeOS and MirageOS are possibly the only ones I’ve seen with a really-different architecture that got deployed, developed a serious community, and are still developed.

                          2. 10

                            It turns out that vaporware is so much better than any actual program! But this reminds me of Olin Shivers’ brilliant contribution to the UNIX-haters list.

                            1. 8

                              This one stood out.

                              Even though, if we really wanted, we could represent ASTs as text, this does not seem to have caught on or even be considered.

                              Oh really? How about Lisp?

                              I do like the complaint about the little languages problem a couple paragraphs later. The same complaint was made in the scsh paper more than 20 years ago, and I don’t think things have improved since then. Unix does have way too many idiosyncratic little languages.

                              1. 3

                                Specifically, this one, which I think UNIX OSes still can’t beat on all features. Even more detail is given by this unbiased journalist. Namely, the integration and live coding/fixing/upgrading of the entire system, from OS to apps, in one IDE. Also, pausing an app with the problematic source code loaded up in the IDE vs crashing with one using low-level debuggers. Also, the safety and productivity benefits of LISP in general at the system level.

                              2. 7

                                The tone of the article makes it read a bit like a rant, but reading between the lines, it seems that the author is pointing at reworking the core concepts that are the foundation of widely used OSes. Some of the examples mentioned, like print or the teletype, are spot-on: we can (and probably should) move past these metaphors from another era. But then there are some core principles that still work well (everything is a file, text over binary, composable commands). I’d be curious to see what concepts/abstractions would be ideal for a present-day OS designed from the ground up.

                                1. 6

                                  I changed my mind about how old UNIX is after reading The Art of Unix Programming by Eric Raymond. I don’t want structured binary file formats. Strings, please.

                                  1. 6

                                    I agree with the author that many of the software abstractions all common operating systems use are decades-old relics of a time when designers made decisions around hardware constraints that are for the most part no longer relevant, and it would be good to re-think the relevance of some of these abstractions.

                                    The devil’s in the details though. What specific abstraction should replace plain text? If I call my plaintext file “config.json” and check that it is valid JSON, does that effectively solve the problem I wanted to solve by making another abstraction ubiquitous (is parsing text really all that hard on modern hardware?)

                                    Are the specific terse conventions of UNIX shell commands (ls, cp, etc.) so bad that it’s worth the transition cost? After all, nothing is stopping anyone from writing their own shell where you type whatever you want to do those things, that really is just a product of human inertia - just like the English spelling system, one might point out.

                                    Is “But the division of work into processes seems very coarse-grained; one widely replicated practice is having each end-user application, on average, correspond to a process” still true, even on desktop computers? Of course the UNIX process abstraction still exists, but plenty of software makes use of multiple threads of execution (IIRC the basic unit of a thread of execution in Linux is the task, which can be configured to share arbitrary resources with other tasks, and if you have one or more tasks that happen to share a memory address space you call that a process). And this is ignoring fairly-common computing environments like, say, offloading computation to GPUs to play a video game or do some hardcore math.
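
                                    As a sketch of that task model (hedged: trimmed down to the two sharing flags that matter here):

                                        /* Sketch: a Linux "thread" is just a task created with
                                           clone(), sharing memory and fds with its parent. */
                                        #define _GNU_SOURCE
                                        #include <sched.h>
                                        #include <signal.h>
                                        #include <stdio.h>
                                        #include <stdlib.h>
                                        #include <sys/wait.h>

                                        static int worker(void *arg) {
                                            printf("task says: %s\n", (const char *)arg);
                                            return 0;
                                        }

                                        int main(void) {
                                            char *stack = malloc(64 * 1024);
                                            /* CLONE_VM | CLONE_FILES: share the address space and
                                               fd table, i.e. what we usually call a thread. */
                                            int tid = clone(worker, stack + 64 * 1024,
                                                            CLONE_VM | CLONE_FILES | SIGCHLD, "hello");
                                            waitpid(tid, NULL, 0);
                                            free(stack);
                                            return 0;
                                        }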

                                    I don’t want to seem overly critical here. I don’t think that the Linux process abstraction is necessarily the best possible abstraction for running a computer program. Maybe there’s a better abstraction out there that would make multithreading and talking to the GPU much easier, and computing as a whole is losing out because most mainstream software developers have to shoehorn these tasks into an inferior abstraction. But this is below the level of abstraction that I personally normally work in as a programmer, so it’s hard for me to judge whether or not this is actually true without a counterexample of a potentially-better abstraction, which the author doesn’t provide.

                                    The Urbit project also seems relevant to this discussion. They are trying to build a clean-slate computing environment (right now implemented as an ordinary UNIX process however) that has fundamentally different abstractions from UNIX, created for the purpose of making it easy for people to host personal servers. It uses a custom functional programming language (interpreted by a very minimalist interpreter running on actual hardware) as the OS implementation language/application software language, rather than C; it builds a network namespace into the design of the system at a much more fundamental level than any kind of UNIX networking, and has a model of running programs that addresses a lot of the points made in section 5 of this article.

                                    1. 1

                                      Urbit is interesting in many of its choices. But it is also a long way from stable and polished. I say this as someone who has spent a lot of time getting to understand it and who also possesses a galaxy on the network.

                                      It will be interesting to see if they can stabilize it and make it more ergonomic.

                                      1. 1

                                        Urbit is interesting in many of its choices. But it is also a long way from stable and polished. I say this as someone who has spent a lot of time getting to understand it and who also possesses a galaxy on the network.

                                        Oh, absolutely, it’s still very much unstable software. I bring it up because it’s an example of a recent OS that actually is substantially different from UNIX/Windows and uses different abstractions and conceptual metaphors for its computing, that you can use (to some limited extent) right now today.

                                    2. 5

                                      Funny to mention nested functions. GCC does allow them, and people passionately hate their existence because they require making the stack executable, which is highly detrimental to security.
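
                                      A minimal sketch of why (gcc only; clang rejects nested functions): taking the address of a nested function makes GCC synthesise a trampoline on the stack, and calling through it requires that stack page to be executable.

                                          #include <stdio.h>

                                          static void apply(void (*fn)(int), int x) { fn(x); }

                                          int main(void) {
                                              int captured = 42;
                                              void show(int x) {  /* nested function (GCC extension) */
                                                  printf("%d %d\n", x, captured);
                                              }
                                              apply(show, 7);  /* &show forces a stack trampoline */
                                              return 0;
                                          }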

                                      1. 1

                                        Doesn’t this mention of feature vs security loop back to the author’s original point?

                                      2. 5

                                        Too bad TempleOS is so weird it doesn’t even count, but it’s something genuinely interesting and different.

                                        1. 3

                                          Yes! Thanks for bringing that up. I don’t think I noticed that.

                                          HolyC is genuinely interesting because the bits have colour and the files are three dimensional. I think Racket has experimented with this, but HolyC gives us a taste of what the world looks like when the entire operating system is “in on it”

                                        2. 5

                                          At the risk of sounding like a “Unix ideologue” (which I am not), I do think one has to point out how limited the author’s understanding of the strengths of *nix seems to be.

                                          “A stream of bytes”? Really? What sort of a computing abstraction is “everything is a stream of bytes”??

                                          One, as most of the time, has two other options: either a higher or a lower abstraction; in this case, either bits or higher-level concepts, like lines (as strings), lists or other conceptual data structures. Bytes, as can be seen, have turned out to be the best compromise. A bit has too little information (simply one bit), while other concepts either turn out to be limiting through complexity or too inefficient – the latter has always been a problem with Lisp/Emacs.

                                          And as the author points out, bytes are just a more “optimized” version of bits, since they contain more information per unit. Information, in turn, can be understood as the “pure potentiality” of computers. The capacity to store, interpret, re-use and manipulate information (or “data”, if one prefers that term) is the raison d’être of these devices. And here one can see that if the “base unit” is either too simple or too complex, and especially if it is incapable of being composed and mapped onto more complex structures, the system is inhibited.

                                          The only other useful aspect ought to be made explicit in the phrase: “Everything is a named stream of bits”. Not only do we have bitstreams, but instead of being identified by numbers (well, except sometimes in the shell…) we can use (comparatively) human-readable names!

                                          The horror! Identifying bitstreams with strings??

                                          This is just plain reductive, since it simply ignores filesystems. AFAIK, one of the major benefits of UNIX was the way it handled files within its filesystem. A simple touch file.name creates a new file – echo content >> file.name appends data. I’m currently listening to lectures on computer architecture and system programming, and reflecting on how simple these things turn out to be in practice, it seems a miracle in a sense.

                                          And then again, “filesystems” by themselves offer a structure the author seems to be missing. Behind every “string” (or what I would rather refer to as a symbol) is some stream of bytes. This can be an actual file, a named pipe, a symbolic link, a directory, etc. But all this implementation complexity is delegated to common tools, such as the kernel and the shell. The end product is that tools have the ability to operate with a reduced set of system calls, if they want to. This in turn simplifies and generalizes tools, thus making them “composable”!
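
                                          A sketch of that reduced vocabulary: the same loop serves a regular file, a pipe from another tool, or a terminal alike.

                                              /* Sketch: copy any byte stream to stdout; read/write
                                                 don't care what kind of stream fd names. */
                                              #include <unistd.h>

                                              void cat_fd(int fd) {
                                                  char buf[4096];
                                                  ssize_t n;
                                                  while ((n = read(fd, buf, sizeof buf)) > 0)
                                                      write(STDOUT_FILENO, buf, (size_t)n);
                                              }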

                                          And of course, this isn’t perfect. But the fact that people who can choose do end up choosing this mode of operation says to me that it might not have been “obsolete for decades”. One can think about it this way: the more limited a system is, the more “dirty hacks” it requires to be kept alive when it has to face changes.

                                          These will come either from users, vendors or programmers. Imagine, for example, that one had to write do in front of every command in a shell. The author claims

                                          programming is full of arcane abbreviations and two-letter shell commands

                                          yet I would bet that the existence of such a shell would immediately provoke the creation of a “do-less” shell, i.e. something like what we’ve got today. And due to its open and flexible nature, its users were able to extend their abilities in an organic way (ed -> grep -> sed -> awk -> …).

                                          And it is this whole sum of sane and convenient features, not just some particular ability or set of abilities, that makes *nix valuable, and why it should be preserved!

                                          1. 2

                                            I don’t get the impression the author doesn’t understand unix. I think they are trying to communicate a strange (and interesting) idea, and are using this “let’s take a journey” communication style that makes it easy to get lost on the way to the point.

                                            It’s too bad, because there’s a lot of replies on the journey in the comments here instead of an attempt to understand and respond to the point.

                                            1. -1

                                              I don’t think this is an issue of a lack of understanding of unix by itself, but rather of misunderstanding the proper usage, i.e. how to use it with the least resistance. Maybe.

                                              But in the end, I do believe that this point of view could be better communicated.

                                          2. 3

                                              I feel like most of these criticisms are superfluous and don’t really matter. Why does it matter that we call writing “printing”, etc.?

                                              1. 1

                                                The horror! Identifying bitstreams with strings?? But won’t that be terribly inefficient, for such a common operation? You know what, I’m actually glad this is a cornerstone of the Unix philosophy, otherwise it might have been optimised away.

                                                  – Guess what: files are actually identified by inode numbers; it’s just that the virtual file system hides this fact.

                                                1. 1

                                                  This is a rather… uh… factually-challenged ideological piece, and isn’t very interesting because of that.