1. 43

  2. 8

    I don’t get the hype about seL4. All I see are its claims about its security and speed, but I can’t find anything about its usability. The communication on its page aggressively attacks other operating systems (e.g. “If someone else tells you they can make such guarantees, ask them to make them in public so Gernot can call out their bullshit” in the FAQ). The performance page doesn’t have any comparisons to other OSes, yet the FAQ claims it is the fastest by the metric presented there. In general, the few times I’ve seen somebody bring up seL4, the proponents were very aggressive toward other OSes. Doesn’t really look good, does it?

    1. 17

      The rhetoric from the seL4 cheerleaders can indeed be cringeworthy at times. That being said, the L4 family is an interesting look into how you can start with a really minimal set of OS features and get to something useful, and seL4 is one of a very few OS kernels to be subject to rigorous formal verification. How much you value that probably tracks very closely to how much you value formal verification in general.

      It isn’t particularly useful to compare seL4 to a general-purpose OS like Linux or Windows since they’re intended for such different use cases. seL4 might be a useful building block for, say, a hardened security appliance that handles signing keys or runtime monitoring on behalf of some other general-purpose OS, or a high-value industrial control system (power plants, medical devices, voting machines, etc.)

      The focus on performance is AFAICT aimed mainly at the historical critique of microkernels as painfully slow for real-world workloads. That in turn largely stems from the behavior of poorly optimized Mach-backed syscalls on commodity PCs when they were being put up against monolithic designs back in the 90s. (Mac OS still seems to carry some of this debt, as XNU is a pretty direct descendant of Mach.)

      1. 3

        Is there a blog post about this? I want to know more!

        1. 4

          It’s not just Mach; it was also Windows NT, Minix, and others. It took the L3 and L4 family of kernels a long time to get this nailed down. Just dig around Wikipedia for microkernels, and this paper for the history of L4.

      2. 4

        It outperformed other microkernels. It would probably host an OS like Linux in a VM alongside bare-metal or runtime-supported components. A secure middleware lets the pieces communicate. The architecture is often called Multiple Independent Levels of Security (MILS), with the microkernels doing it called “separation” kernels. Overall performance depends on the overheads of context switching and message passing, which lead to tradeoffs in how many pieces you break the system into.

        This PDF on a similar system (Nizza) shows how building blocks like seL4 were meant to be used. INTEGRITY-178B was the first one built, certified, and deployed, in Dell OptiPlexes. The certification data at the bottom right shows what was required, but watch out for their marketing ;). LynxSecure is used by the Navy. Due to funding, complexity, and OK Labs’ focus on mobile, the seL4 team switched focus to embedded use cases like military and IoT.

        @Shapr, tagging you in since the Nizza paper might help you out.

        1. 1

          I thought versions of L4 hosting Linux outperformed Linux?

          1. 4

            It did. The benchmark might be meaningless, though. A real system would extract more and more of the Linux TCB into isolated partitions. There would be more message passing. It could also cause more accidental cache flushes, on top of the clearing of registers and caches that already happens in separation kernels on a security context switch. We don’t know what the performance hit would be.

            An example would be a web server where the kernel, ethernet, networking, filesystem, firewall, TLS, and user-facing server are all in separate partitions. Things that are mostly jumps in memory of one process become IPC across them. That could add up.
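            To put a rough feel on that, here’s a hypothetical microbenchmark (plain Python on a Unix box; a pipe round trip between two processes stands in for cross-partition IPC, and all names and numbers are mine, purely illustrative):

```python
# Hypothetical sketch: compare an in-process call with a pipe round trip
# between two processes, as a crude stand-in for turning local jumps
# into cross-partition IPC. Absolute numbers are machine-dependent.
import os
import time

def local_call(x):
    return x + 1

def bench_local(n):
    """Time n plain in-process function calls."""
    t0 = time.perf_counter()
    acc = 0
    for _ in range(n):
        acc = local_call(acc)
    return time.perf_counter() - t0

def bench_pipe(n):
    """Time n one-byte round trips to a child process over pipes."""
    p2c_r, p2c_w = os.pipe()   # parent -> child
    c2p_r, c2p_w = os.pipe()   # child -> parent
    pid = os.fork()
    if pid == 0:               # child: echo each byte straight back
        os.close(p2c_w)
        os.close(c2p_r)
        for _ in range(n):
            os.write(c2p_w, os.read(p2c_r, 1))
        os._exit(0)
    os.close(p2c_r)
    os.close(c2p_w)
    t0 = time.perf_counter()
    for _ in range(n):
        os.write(p2c_w, b"x")  # "request" to the other partition
        os.read(c2p_r, 1)      # wait for the "reply"
    dt = time.perf_counter() - t0
    os.waitpid(pid, 0)
    return dt

if __name__ == "__main__":
    n = 10_000
    print(f"local call: {bench_local(n) / n * 1e9:.0f} ns each")
    print(f"pipe IPC:   {bench_pipe(n) / n * 1e9:.0f} ns per round trip")
```

            On a typical Linux desktop the pipe round trip comes out orders of magnitude slower than the local call. seL4’s IPC fastpath is far cheaper than a Unix pipe, but the direction of the gap is the point: every partition boundary turns a near-free jump into a priced message.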

      3. 6

        “With an open-source implementation, you see what you get”

        Just wanted to note this is not true at all for hardware. The synthesis tools, usually two in combination, convert the high-level form into the low-level pieces that actually run. They’re kind of like Legos for logic. Like a compiler, they might transform the design a lot to optimize it. They use standard cells that are usually secret. Then there’s analog and RF functionality, which might have errors or subversions, with fewer experts who know anything about it. Finally, there’s the supply chain from masks to fab to packaging to you.

        With hardware, you have no idea what you actually got unless you tear it down. If it’s deep sub-micron, you have to involve one or more other companies in the tear-down process. This excludes the possibility that they can make components look like other components in a tear-down. I don’t know if that’s possible, but I figured I should mention it.

        When I looked at that problem, my solution was that the core, or at least a checker/monitor, had to be at 350nm or above so a random sample could be torn down for visual inspection. The core would be designed like VAMP, with strong verification. Then, synthesis (e.g. Baranov’s) to a lower-level form with verified transforms, followed by equivalence checks (formal and/or testing). The cells, analog, and RF would be verified by mutually suspicious experts. Then, there were some methods that can profile the analog/RF effects of onboard hardware to tell if they swap it out at some point. Anyway, this is the start, with open (or vetted + NDA) cells, analog, and RF showing up over time, too. Some already are.

        1. 7
          1. 2

            I’m not a big fan of making critiques based on stuff that is explicitly outside of their security model. From my understanding, even the formal verification of side channels for RISC-V wouldn’t catch Spectre-style attacks: researchers implemented Spectre-like vulnerabilities in RISC-V designs that still conformed to the specification.

            Yes, you can backdoor compilers, microcode, and hardware. But that’s not far from the generic critique of formal methods based on Gödel’s incompleteness theorem. seL4 is the only operating system that makes it worth our time to finally start hardening the supply chain against those types of attacks.

            1. 3

              I normally agree. However, they were pushing seL4 on ARM as a secure solution. You can’t secure things on the ARM offerings currently on the market. So it’s a false claim. The honest one is that it gives isolation except for hardware attacks and/or faults. For many, that immediately precludes using it. I’d rather they advertise honestly.

              A side effect is that it might increase demand in secure hardware.

          2. 2

            Should be tagged formalmethods.

            1. 3

              Feel free to suggest that tag!