1. 11
    1. 5

      The correct capitalization is GNUstep, not GNUStep.

      Does anyone have a transcript? I’m still maintaining the GNUstep Objective-C runtime, but I haven’t used GNUstep for several years. Objective-C isn’t really a language that I’d choose anymore (it makes some trade-offs that were good for the 1990s and early 2000s, but both hardware and requirements have shifted since then), and Swift doesn’t seem to be going in a direction that I care about.

      1. 2

        Can you tell us a bit more about Swift? I am curious.

        1. 1

          Are you all in on modern C++ now? What do you think about Rust?

          1. 4

            Modern C++ is a perfectly respectable systems language, where I define a systems language as one that must allow you to break type safety because it’s the thing that you’ll use to implement abstractions for higher-level languages. I don’t see Rust helping me very much here because most of the code that I would normally write in C++ requires things like cyclic data structures that can’t be expressed without unsafe in Rust (sometimes hidden in the standard library or some other third-party code). It’s an interesting experiment though, and it might be that explicitly focusing code review on the dangerous parts leads to better software overall.
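
            For illustration, here is a minimal sketch of the kind of structure I mean: a doubly linked node, where the back-pointer makes the shape cyclic. Safe references can’t express this directly, so the links become raw pointers and every operation on them goes through unsafe (std::collections::LinkedList hides much the same code behind a safe API).

            ```rust
            // Minimal sketch, not production code: a doubly linked node, the classic
            // cyclic shape. The links are raw pointers, so building and walking the
            // structure is unavoidably `unsafe`.
            struct Node {
                value: i32,
                prev: *mut Node,
                next: *mut Node,
            }

            fn new_node(value: i32) -> *mut Node {
                Box::into_raw(Box::new(Node {
                    value,
                    prev: std::ptr::null_mut(),
                    next: std::ptr::null_mut(),
                }))
            }

            fn main() {
                // Two heap nodes linked to each other: a two-element cycle.
                let a = new_node(1);
                let b = new_node(2);
                unsafe {
                    (*a).next = b;
                    (*a).prev = b;
                    (*b).next = a;
                    (*b).prev = a;

                    // Walk the cycle: 1 -> 2 -> 1.
                    println!("{} -> {} -> {}",
                             (*a).value, (*(*a).next).value, (*(*(*a).next).next).value);

                    // Reclaim the allocations; the compiler cannot check this for us.
                    drop(Box::from_raw(a));
                    drop(Box::from_raw(b));
                }
            }
            ```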

            Objective-C is a different animal. It is an application programming language. It isn’t unsafe by design, it’s unsafe because Smalltalk was too slow and because it needs to tightly interoperate with C for performance. The NeXT systems where Objective-C grew up had 8 MiB of RAM. They needed things like pool allocators that opted out of the normal reference counting mechanisms so that they could rapidly reclaim memory during drawing. The whole NSCell infrastructure exists because an NSView instance was over 100 bytes of memory and having one of these per (visible) cell in a table view would have exhausted system memory on a NeXT computer.
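
            Roughly, the reuse pattern looks like this (a minimal sketch with made-up names, not the actual AppKit API): one lightweight cell is reconfigured and drawn for each visible row, instead of allocating a heavyweight view object per cell.

            ```rust
            // Illustrative sketch of the cell-reuse pattern (made-up names, not the
            // AppKit API): one small, reusable Cell is reconfigured and drawn for each
            // visible row, instead of keeping a heavyweight view object per cell.
            struct Cell {
                text: String, // transient state, overwritten for every row
            }

            impl Cell {
                fn configure(&mut self, row_value: &str) {
                    self.text = row_value.to_string();
                }
                fn draw(&self, row: usize) {
                    println!("row {row}: {}", self.text);
                }
            }

            struct TableColumn {
                cell: Cell,        // one cell for the whole column, not one per row
                rows: Vec<String>, // the data is cheap; the drawing machinery is not
            }

            impl TableColumn {
                fn draw_visible(&mut self, visible: std::ops::Range<usize>) {
                    for row in visible {
                        self.cell.configure(&self.rows[row]);
                        self.cell.draw(row);
                    }
                }
            }

            fn main() {
                let mut column = TableColumn {
                    cell: Cell { text: String::new() },
                    rows: (0..10_000).map(|i| format!("item {i}")).collect(),
                };
                // Only the rows currently on screen are drawn, all through the same cell.
                column.draw_visible(0..20);
            }
            ```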

            The constraints that I have for modern application programming are very different to the ones I had 20 years ago:

            • I want safe interoperability with existing C/C++ code. A memory-safety bug in my C code shouldn’t impact my application-programming language.
            • I want safe concurrency. This means that I need either an isolation and immutability type system, or multiple single-threaded environments with something like an actor-model abstraction on top (a rough sketch of the latter follows this list).
            • I want good tooling that lets me reason about my code.
            • I want a type system that lets me express high-level abstractions.
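
            To make the second point concrete, here is a minimal sketch of the ‘multiple single-threaded environments plus message passing’ shape, using plain Rust threads and channels (not Verona, and not the web-worker setup I mention below):

            ```rust
            use std::sync::mpsc;
            use std::thread;

            // Messages are the only way to affect a worker, so there is no shared
            // mutable state between the single-threaded environments.
            enum Msg {
                Add(i64),
                Report,
            }

            fn spawn_worker(name: &'static str) -> (mpsc::Sender<Msg>, thread::JoinHandle<()>) {
                let (tx, rx) = mpsc::channel();
                let handle = thread::spawn(move || {
                    let mut total = 0i64; // state lives only inside this worker
                    for msg in rx {
                        match msg {
                            Msg::Add(n) => total += n,
                            Msg::Report => println!("{name}: total = {total}"),
                        }
                    }
                    // The loop ends once every Sender has been dropped.
                });
                (tx, handle)
            }

            fn main() {
                let (a, ha) = spawn_worker("worker-a");
                let (b, hb) = spawn_worker("worker-b");
                for i in 1..=10 {
                    a.send(Msg::Add(i)).unwrap();
                    b.send(Msg::Add(i * 100)).unwrap();
                }
                a.send(Msg::Report).unwrap();
                b.send(Msg::Report).unwrap();
                drop(a);
                drop(b);
                ha.join().unwrap();
                hb.join().unwrap();
            }
            ```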

            The closest thing that I have to this at the moment is TypeScript, with multiple web workers. It isn’t ideal. We’re working on the first of these as a core language principle in Verona. Verona is targeting the gap between these two extremes, which we’re calling ‘infrastructure programming’: things where you need fine-grained control over allocation policy and object lifetimes, but where you don’t need to implement things like memory allocators or schedulers that would require you to step outside of a safe abstract machine.

            Verona’s foreign code interop model is being built on top of sandboxing, with abstractions that should let you multiply instantiate C libraries and isolate crashes in different instances of them. That’s less important for application programming (where you want to save your data often and the simplest way of recovering from a C library crash is often to restart the program), but critical for infrastructure programming, where you want to ensure that you can keep handling requests from one user when a request from another has crashed a component.

            1. 1

              Yeah, I think TypeScript is kind of a sweet spot for 10-100 kLOC (or 1-10 dev) application projects.

              How would your answer change for a new kernel and related glue? It is interesting that there is a lot of hype around Rust for new kernels, but the results are not spectacular. Meanwhile, the dominant commercial OSes, Windows and OS X, use C++ extensively.

              1. 3

                How would your answer change for a new kernel and related glue? It is interesting that there is a lot of hype around Rust for new kernels, but the results are not spectacular. Meanwhile, the dominant commercial OSes, Windows and OS X, use C++ extensively.

                I’d probably pick C++ now, but I’d expect that to change in the next 5 years. Most of the new kernel projects that have actually delivered something over the last 20 years have been C++ (various L4 derivatives, Fuchsia / Zircon). There are currently three big advantages for C++ over Rust:

                • Stable language. C++ is evolving but with much greater backwards-compatibility guarantees than Rust.
                • Better tooling for unsafe. Most Rust tooling focuses on the safe subset of the language, because that’s what you should be using most of the time. Kernels are intrinsically required to do unsafe things (see the sketch after this list).
                • Greater ecosystem maturity. C++ has been around for a while now. We find that it’s easier to hire experienced C++ programmers than C programmers for systems roles, and both are easier than finding experienced Rust programmers.
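
                To make the unsafe point concrete, this is the sort of thing a kernel can’t avoid: storing a byte into a memory-mapped device register through a raw pointer. The address below is an assumed placeholder purely for illustration; no safe-subset tooling can tell you whether it is right, so the compiler has to take your word for it.

                ```rust
                use core::ptr;

                fn uart_putc(byte: u8) {
                    // Assumed MMIO address, purely for illustration; on real hardware it
                    // comes from the SoC manual or the device tree, and nothing in the
                    // type system can check that it is right.
                    let uart_data_reg = 0x0900_0000usize as *mut u32;
                    unsafe {
                        // write_volatile stops the compiler from reordering or eliding
                        // the store, which a normal write through a reference could allow.
                        ptr::write_volatile(uart_data_reg, byte as u32);
                    }
                }

                fn main() {
                    // In a real kernel this runs with the MMIO region mapped; as an
                    // ordinary user process it would fault, so treat it as a sketch only.
                    uart_putc(b'!');
                }
                ```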

                None of these are really problems with Rust, they’re problems with a young language. Rust will either fail catastrophically (unlikely) or grow out of these limitations.

                All of that said, it’s easy to lump ‘kernel programming’ into one bucket, but it really isn’t. There are a lot of different aspects to a kernel. Some things are latency or throughput sensitive. These typically can be implemented in a type-safe language and so Rust is great here. Some things are core to implementing the abstract machine (e.g. context switch code, virtual memory subsystem) and so need to be written in a systems language. C++ is a better fit here than Rust because you don’t gain any benefit from Rust’s safety guarantees. Some things are entirely control plane. NetBSD has shown that a lot of these can be implemented in interpreted Lua and still be fast enough.

                If I were to write an OS from scratch today, I’d probably pick:

                • C++ and assembly for a nanokernel (in the Symbian EKA2 style), which handled context switching and page-table manipulation.
                • Rust for the microkernel that provided a scheduler and virtual memory abstractions.
                • TypeScript running on something like QuickJS or Duktape for a load of control-plane things, isolated from the microkernel.
                • Rust, C++, or other languages for other OS services.

                This is all with the caveat that any OS that I wrote today from scratch probably wouldn’t look much like Windows or Linux. The big gap in the OS space today is for a cloud OS, which would look a lot more like a hybrid of a distributed exokernel and a mainframe OS than a conventional minicomputer OS.

                1. 1

                  I consider the “Rust for Linux” project to be a spectacular result. New kernels in Rust haven’t been, but the obvious explanation is that they are new and it is hard for new kernels to make an impact.

                  1. 1

                    New kernels in Rust haven’t been

                    You might want to check that. There’s a GitHub account that tracks new kernel projects in Rust. They’re all fairly small now, but there were about a dozen that booted and ran some programs last time I looked (including one with a GUI and a moderately decent Linux ABI compat layer).
