1. 58
  1.  

  2. 8

    I was lucky enough to see this talk in person. Most of the questions we asked were less “questions” and more “panicked screaming”.

    1. 3

      And rightly so. This is a bad place we’ve found ourselves in.

    2. 5

      Bears repeating that unless we have some link / tie to / from the source to the blob (probably meaning reproducible builds, hashes, actual access to the blob / source), we have nothing. But it’s good to see this being talked about in the open! Need more of this!

      1. 0

        “link / tie to / from the source to blob”

        Yes! Gotta verify the binary does exactly what the source says.

        “reproducible builds, hashes, actual access to the blob / source”

        No… those are the fads. You need verified or rigorous compilation first, with full traceability, to avoid backdoors being introduced, security checks being removed, leaks being added, etc., especially by optimizations. Those are the kinds of compiler-introduced problems that cause the vast majority of compiler-related vulnerabilities. The stuff in that quote ensures everyone might have the same vulnerable code that nobody easily MITM’d, but not that the binary does what the source says. Totally different problems.

        1. 6

          These are different concerns. Reproducible builds allow saying “the binary delivered by X that they claim comes from building tree state Y really comes from Y”. You can even have people sign the same binary independently. It’s a rather low hanging fruit, but even that took some effort to get to https://tests.reproducible-builds.org/coreboot/coreboot.html, and it’s far from industry standard.
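
          To make that concrete: the check really is just “same tree state in, bit-identical binary out”, done by independent parties. A rough sketch in Python; the file paths are placeholders for two independently produced builds, nothing coreboot-specific:

          ```python
          # Hash two independently built artifacts; reproducibility means the
          # digests match bit-for-bit (placeholder paths, illustrative only).
          import hashlib
          import sys

          def sha256(path):
              h = hashlib.sha256()
              with open(path, "rb") as f:
                  for chunk in iter(lambda: f.read(1 << 20), b""):
                      h.update(chunk)
              return h.hexdigest()

          a, b = sys.argv[1], sys.argv[2]
          digest_a, digest_b = sha256(a), sha256(b)
          print(a, digest_a)
          print(b, digest_b)
          sys.exit(0 if digest_a == digest_b else 1)
          ```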

          Of course, knowing that Y leads to X (with the toolchain Z) doesn’t mean that there are no bugs in Y or Z (that affect X). That’s a harder problem, and one I’m thinking about a lot. The main problem is that most approaches to enable trust in absence of such bugs have a rather large TCB themselves. We have some bits of SPARK-verifiable code in coreboot, but gnatprove relies on alt-ergo, cvc4, and z3, which are rather huge and opaque, so while it’s an improvement to state assumptions and have them checked, the way that is done invokes lots of “trust me” moments, too.

          1. 1

            “These are different concerns.”

            Most people pushing it cite the paper on the Thompson attack at some point. That was about subversion. If you’re countering subversion, you need a verified compiler. Almost nobody writing GPL’d software who brings that up uses CompCert to compile it or works on easier-to-understand compilers. One of the reasons David A. Wheeler did his diverse double-compilation is that he didn’t think a verified compiler would go mainstream. People had ignored his high-assurance FLOSS page for the most part. Same for SCM security, which is where you solve most of the problems people are worrying about. So, he had to get something done that took lower effort while knocking out some of the risk, especially MITM.

            You might have different concerns. A different solution might work for those concerns. Most people talking about this topic bring up that Thompson paper at some point, so I know they’re worried about compiler subversion. They just skip past any solution for the compilers themselves for some reason. Additionally, a secure tunnel between the developers’ repo and the user already addresses MITM risk. Run it on OpenBSD or a security-focused OS. If they can’t trust the developers, they have bigger problems, like intentional vulnerabilities in the code. I’m convinced it’s mostly a fad with some anti-MITM and debugging benefits. If it weren’t a fad, they’d be ultra-focused on mitigating the main places attacks come from with any free and open tool available. They’re not.

            1. 1

              What’s wrong with diverse double-compilation?

              1. 1

                “What’s wrong with diverse double-compilation?”

                The risk is that the low-quality and/or subverted compilers produce the same malicious executable. This is why CompCert is on Wheeler’s page on high-assurance FLOSS: he knew how important verified compilation was. He also knew hardly anyone, even folks with access to verifying compilers, would actually mitigate the compiler risk. So, he came up with another trick that might help with adoption. It doesn’t help if almost everyone uses one or two compilers whose sources are also buggy, though.
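
                For anyone unfamiliar with the trick being critiqued, here’s roughly what DDC does. The compiler paths and the build driver below are hypothetical, and real DDC additionally needs deterministic builds and a trusted environment:

                ```python
                # Rough sketch of diverse double-compilation (hypothetical paths/driver).
                import filecmp
                import subprocess

                COMPILER_SOURCE = "compiler-src/"          # hypothetical source tree
                COMPILER_UNDER_TEST = "/usr/bin/cc-test"   # hypothetical, possibly subverted
                DIVERSE_TRUSTED = "/usr/bin/cc-diverse"    # hypothetical, independently built

                def build(compiler, source, output):
                    # Hypothetical build driver that compiles `source` with `compiler`.
                    subprocess.run(["./build.sh", f"--cc={compiler}", source, "-o", output],
                                   check=True)

                # Stage 1: build the compiler's own source with each compiler.
                build(COMPILER_UNDER_TEST, COMPILER_SOURCE, "stage1-test")
                build(DIVERSE_TRUSTED, COMPILER_SOURCE, "stage1-diverse")

                # Stage 2: each stage-1 result rebuilds the same source again.
                build("./stage1-test", COMPILER_SOURCE, "stage2-a")
                build("./stage1-diverse", COMPILER_SOURCE, "stage2-b")

                # Bit-identical stage-2 binaries mean the compiler under test matches its
                # source, unless both compilers carry the same subversion, which is the
                # failure mode described above.
                same = filecmp.cmp("stage2-a", "stage2-b", shallow=False)
                print("consistent with source" if same
                      else "mismatch: subversion or nondeterminism")
                ```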

      2. 4

        “The UEFI Kernel is extremely complex. It has millions of lines of code. UEFI applications are active after boot.”

        There’s no kernel to speak of in EDK2 (it does not implement anything like multitasking, it’s more like a bare-metal program, a unikernel if you will :D), and UEFI applications (sorry for being pedantic) cannot be active after boot.

        Runtime Services can be used after boot, the key word being “used”. Used by your OS deliberately because it wants to access the real-time clock, set a variable, or apply a firmware update. (That’s about all the services usually available.)

        Runtime Services are just code lying around. When your OS is running, they can’t somehow wake up on their own and push the OS away. Runtime Services are not SMM mode ;)
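
        To illustrate the “used deliberately” part: on Linux, the kernel only performs the GetVariable/SetVariable runtime calls when you ask it to, e.g. through efivarfs. A small sketch, assuming a Linux box with efivarfs mounted; the SecureBoot name/GUID is the standard one from the UEFI spec:

        ```python
        # Read a UEFI variable via efivarfs; the kernel performs the GetVariable
        # runtime service call on our behalf. File layout: a 4-byte attribute
        # word, then the variable data.
        import struct

        PATH = "/sys/firmware/efi/efivars/SecureBoot-8be4df61-93ca-11d2-aa0d-00e098032b8c"

        with open(PATH, "rb") as f:
            raw = f.read()

        attributes = struct.unpack_from("<I", raw)[0]
        print(f"attributes=0x{attributes:08x} value={raw[4:].hex()}")
        ```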

        And the millions of lines thing… who cares. Your favorite kernel also has millions of lines, most of them in drivers for devices you don’t have. Few of the millions of lines are actually used in your boot path.

        1. 4

          “Few of the millions of lines are actually used in your boot path.”

          Right, the bare minimum was about 120kloc when I stripped down Tianocore as an experiment. The same experiment got me to 20kloc with coreboot (including libpayload and FILO) for the same task: boot a Linux kernel that is loaded from SATA.

          So there are still ~100kloc of extraneous code in the UEFI reference implementation that provide such indispensable features as a not-quite-but-almost COM environment in firmware (OpenProtocol and friends).

          “Runtime Services are not SMM mode”

          Except that some Runtime Services just call into SMM code. For example, that “set a variable” feature nowadays supports Authenticated Variables for UEFI Secure Boot. For those to be secure, the code writing them must be inaccessible from the kernel, so the strong recommendation (and standard implementation) is to have the Runtime Service call into SMM, where the authentication is handled before the variable is written to flash (which is write-protected outside SMM).
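
          Continuing the sketch from above: the Secure Boot key stores carry the time-based authenticated-write attribute, which is exactly the write path that gets routed through SMM. Assuming the same Linux/efivarfs setup (PK lives under the global-variable GUID, db under the image-security-database GUID, both per the UEFI spec):

          ```python
          # Check whether PK/db carry the time-based authenticated write attribute
          # (bit 0x20 of the attribute word that efivarfs prepends to each variable).
          EFI_VARIABLE_TIME_BASED_AUTHENTICATED_WRITE_ACCESS = 0x00000020

          VARS = {
              "PK": "/sys/firmware/efi/efivars/PK-8be4df61-93ca-11d2-aa0d-00e098032b8c",
              "db": "/sys/firmware/efi/efivars/db-d719b2cb-3d3a-4596-a3bc-dad00e67656f",
          }

          for name, path in VARS.items():
              try:
                  with open(path, "rb") as f:
                      attrs = int.from_bytes(f.read(4), "little")
              except FileNotFoundError:
                  print(f"{name}: not present")
                  continue
              authed = bool(attrs & EFI_VARIABLE_TIME_BASED_AUTHENTICATED_WRITE_ACCESS)
              print(f"{name}: attributes=0x{attrs:08x} authenticated-write={authed}")
          ```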

          1. 1

            “boot a Linux kernel that is loaded from SATA”

            That’s precisely my problem with most of these solutions like Heads and Linuxboot. I want to boot FreeBSD (often from NVMe). Any boot system that gives special privileges to Linux (e.g. keeping some state via kexec – heck, just using kexec) gets an instant dislike from me.

            “some Runtime Services just call into SMM code”

            My point was just that Runtime Services cannot wake up by themselves.

            1. 1

              The point of that exercise was to see how low the code count could go, so there was a very specific and minimal target to aim for.

              The main issue with the BSDs tends to be that they rely on either PCBIOS or UEFI services (not just in the bootloader, often in a bunch of platform drivers as well). Given how hideous those implementations usually are, I don’t think that’s a good idea (there is a reason why Linux tends to eschew firmware services as much as it can), and it paints you into a corner when trying to do booting differently.

              The good news about that is that booting BSD with coreboot+SeaBIOS generally works, it just wasn’t part of the assessment.

              1. 1

                No, FreeBSD doesn’t need much from the EFI implementation — it works with U-Boot’s tiny one. As mentioned above, there’s not that many services to use. The real-time clock is supported as a clock source (usually not used since HPET and/or TSC exist), variables only got support recently.

        2. 4

          Nice article. A few comments:

          “Rings 1 & 2 - Device Drivers: drivers for devices, the name pretty much describes itself.”

          STOP and GEMSOS did use four rings. The evaluators griped that UNIX didn’t. Microkernel proponents kept pushing mainstream OSes to move drivers from kernel mode to another mode. I thought the drivers on *nixes were in kernel mode (Ring 0), with Xen maybe doing something weird with its use of protected mode. Then some things moved to user mode later on, like FUSE. Was I wrong?

          “Each of these kernels have their own networking stacks and web servers. The code can also modify itself and persist across power cycles and re-installs. We have very little visibility into what the code in these rings is actually doing”

          Which is why I put money down that the backdoors the NSA paid for, in direct money and/or defense contracts, would be in management systems, and that we’d definitely find services with 0-days in there. Sure enough…

          “Linux is already quite vetted and has a lot of eyes on it since it is used quite extensively.”

          That’s total nonsense. Empirical evidence here (pdf) has been consistent over long periods of time. If anything, using Linux guarantees you vulnerabilities if attackers can call anything in the kernel. If it’s a subset or just one function, maybe it’s OK; that takes careful, case-by-case analysis. We’d be better off with something clean-slate for this purpose that can reuse Linux drivers where necessary. Then we’d check the drivers and the interfaces.

          It is true that it’s better than the closed-source stuff they’re using, has better tooling, folks understand it better, and so on. All true.

          “We need open source firmware for the network interface controller (NIC), solid state drives (SSD), and base management controller (BMC).”

          The problem with this and with Intel/AMD internals is that they’re kept secret partly to avoid patent suits and new competition. You’re not getting this stuff opened. Not easily, at least. It might be better to literally do a closed-source product for them vetted by multiple parties. Otherwise, get the actual specs under NDA to build the open-source code against, in a way that doesn’t leak the specs much. Alternatively, you’ve gotta build your own hardware yourself with whatever the I.P. vendors give you. I mean, good luck to the reverse-engineering efforts, but those usually lag behind.

          “We need to have all open source firmware to have all the visibility into the stack but also to actually verify the state of software on a machine.”

          You actually need open, secure hardware for that, since attackers are now hitting hardware. I kept telling people this would happen. Just wait till they do analog and RF more. What she’s actually saying here is “verify the state of the machine, assuming the hardware works, is honest, and doesn’t do anything malicious between verifications.”

          “… is the same code running on hardware for all the various places we have firmware. We could then verify that a machine was in a correct state without a doubt of it being vulnerable or with a backdoor.”

          Case in point: I put a secret coprocessor on the machine for “diagnostic purposes”; it can read the state of the system, it can leak over RF or the network, and we leak stuff out of that signed, crypto code. Good thing no major vendors are including hidden or undocumented coprocessors on their chips. ;)

          “Chromebooks are a great example of this, as well as Purism computers. You can ask your providers what they are doing for open source firmware or ensuring hardware security with roots of trust.”

          She ends with some good advice: buy stuff that’s more open and secure to get more of it; market demand incentivizes suppliers. That could solve a lot of these problems if enough people do it.

          1. 2

            I’ve seen open-source Ethernet in the FPGA community; it’s not cheap, but it’s fast and completely open. Are there other expensive-but-open hardware replacements?

            1. 1

              I am very interested in this. Could you post your open source Ethernet link?

              1. 2

                I know Andrew Zonenberg’s starshipraider has Ethernet and is fully open-source software and hardware.

                It’s not yet usable, but you can find the source here: https://github.com/azonenberg/starshipraider

                Here’s a link to the 100BaseT code https://github.com/azonenberg/scopehal/blob/master/scopeprotocols/Ethernet100BaseTDecoder.cpp?ts=4#L66