1. 21
  1.  

    1. 2

      I really like free firmware, but it’s important to know that modern hardware can require that firmware be signed by a key burned into ROM (or a key further down the chain of trust) in order to boot. On such systems you can still read, modify, and distribute the firmware, but you can’t run your modifications on your own hardware. You lose many of the benefits of FOSS, but it’s better than nothing.

      1. 4

        In that case, I think it’s precisely equivalent to getting nothing. The only thing having the source does is give both black-hat and white-hat hackers an easier job breaking things.

        You don’t get any benefits unless you’re getting paid to sell vulnerabilities.

        The only way that source code for firmware is helpful is if I can run a custom build of it.

        1. 2

          Having the source is useful for verifying what the hardware is running (as long as you can reproducibly build it and read it back).

          In theory, a vendor’s signing key could be used to sign malicious firmware with or without their knowledge. Being able to check what’s on there is useful.
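          As a rough illustration of that check (a minimal sketch, assuming a reproducibly built image and a raw dump already read back from the device; the file names are made up):

          ```python
          # Compare a reproducibly built firmware image against a dump read
          # back from the device. If the build is truly reproducible, the
          # digests should match.
          import hashlib
          import sys

          def sha256_of(path: str) -> str:
              h = hashlib.sha256()
              with open(path, "rb") as f:
                  for chunk in iter(lambda: f.read(65536), b""):
                      h.update(chunk)
              return h.hexdigest()

          built = sha256_of("firmware-built-from-source.bin")       # hypothetical path
          dumped = sha256_of("firmware-read-back-from-device.bin")  # hypothetical path

          print("built :", built)
          print("dumped:", dumped)
          sys.exit(0 if built == dumped else 1)
          ```

          In practice you’d usually have to mask out signature blocks, serial numbers, or padding before comparing, but the idea is the same.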

          1. 2

            In practice, you usually read the firmware back via requests to the firmware itself (if you can do it at all); this means the potentially malicious firmware is the thing telling you what’s running.
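            To make that concern concrete, here’s a toy sketch (purely illustrative, not modeled on any real device) of a dump handler that lies: it stashes a pristine copy of the original image and serves that whenever the host asks to read flash back.

            ```python
            # Toy model of a compromised firmware's "read flash" command handler.
            # Purely illustrative: a real implant lives in the firmware itself,
            # not in Python, but the logic is the same.

            PRISTINE_IMAGE = bytearray(0x10000)   # stashed copy of the original, clean image
            ACTUAL_FLASH   = bytearray(0x10000)   # what is really installed, implant included

            def handle_read_flash(offset: int, length: int) -> bytes:
                # The host's dump tool only sees what this handler chooses to return.
                # A dishonest handler answers from the stashed clean copy, so the
                # dump verifies perfectly even though ACTUAL_FLASH is compromised.
                return bytes(PRISTINE_IMAGE[offset:offset + length])
            ```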

            1. 1

              You’re right, but that’s not always the case. There is often a mechanism for erasing and reuploading the whole mutable firmware region in case of corruption or fault.

              This is usually done directly through the hardware or mask ROM which are much less likely to be compromised.

              I’m sure there are devices out there which don’t provide this access and would rather be replaced on faults, as you say.

              1. 1

                If there’s a way to do that and fully wipe out all the code, what’s verifying the signatures?

                1. 3

                  The mask ROM, meaning firmware baked into the silicon. The project I’m most familiar with has the following boot chain:

                  1. Mask ROM boots within silicon. It validates a region of flash with a key from OTP (one-time-programmable) memory.

                  2. ROM extension boots from flash. It validates a separate region of flash against a key of its own (stored in flash). This key can be changed by the ROM extension for ownership transfer.

                  3. The signed payload boots from flash and does whatever real work the chip needs to do.
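                  A minimal sketch of that chain, assuming Ed25519 signatures and using the `cryptography` package purely for illustration; the real signature scheme, key storage, and image formats will differ:

                  ```python
                  # Sketch of the two-stage verified boot chain described above.
                  # Keys, image formats, and the signature scheme are illustrative only.
                  from cryptography.exceptions import InvalidSignature
                  from cryptography.hazmat.primitives.asymmetric.ed25519 import (
                      Ed25519PrivateKey,
                      Ed25519PublicKey,
                  )

                  def verify_or_halt(pubkey: Ed25519PublicKey, sig: bytes, image: bytes, stage: str) -> None:
                      try:
                          pubkey.verify(sig, image)
                      except InvalidSignature:
                          raise SystemExit(f"boot halted: bad signature on {stage}")

                  # Provisioning (normally done at manufacture / ownership transfer).
                  rom_ext_key = Ed25519PrivateKey.generate()  # public half burned into OTP
                  owner_key = Ed25519PrivateKey.generate()    # public half stored in flash, replaceable

                  rom_ext_image = b"ROM extension code"
                  payload_image = b"owner payload code"
                  rom_ext_sig = rom_ext_key.sign(rom_ext_image)
                  payload_sig = owner_key.sign(payload_image)

                  # Stage 1: mask ROM validates the ROM extension against the OTP key.
                  verify_or_halt(rom_ext_key.public_key(), rom_ext_sig, rom_ext_image, "ROM extension")

                  # Stage 2: ROM extension validates the owner payload against the owner
                  # key. Because that key lives in mutable flash, the ROM extension can
                  # replace it to transfer ownership of the device.
                  verify_or_halt(owner_key.public_key(), payload_sig, payload_image, "owner payload")

                  print("boot chain verified; jumping to payload")
                  ```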

                  1. 2

                    From GP:

                    > the whole mutable firmware region

                    > mask ROM

                    1. 1

                      So how do you verify and update this second layer of firmware that you’re saying now exists? What happens when someone manages to dump it and find vulnerabilities?

                      1. 2

                        > What happens when someone manages to dump it and find vulnerabilities?

                        Problems in mask ROM can be a big deal. For that reason it’s often kept small and treated as part of the hardware. Hardware goes through very thorough verification since you (mostly) can’t fix it after tapeout. This will include an independent lab testing it during hardware security certification (I believe).

                        1. 2

                          > this second layer of firmware that you’re saying now exists?

                          Oh man, wait til you find out about IME/PSP, you’re gonna be spewing. But yeah, much better that we just don’t know about anything or have any of its code if we can’t run our own builds!

                          1. 2

                            My point is that having the code doesn’t actually help the situation. It’s theater. Keeping it closed source is effectively equivalent. There’s no reason to care that a vendor has opened the code for their firmware, other than curiosity.

                            You can’t even fully prove anything about it, because the lower layers can still lie and be compromised, in theory.

                            So, why bother? With signed firmware, let the vendors keep it closed; it’s a waste of time to care.

                            1. 2

                              Sorry to keep pressing this point but I think it’s interesting:

                              > You can’t even fully prove anything about it, because the lower layers can still lie and be compromised, in theory.

                              The lowest layers (meaning immutable hardware / mask ROM) cannot be compromised in the same way as mutable firmware.

                              Mutable firmware can be compromised if a signing key is stolen (or leaked) at any time. The key can be used to sign a specific compromised firmware and target a specific victim. It can be cleaned up and hidden after an attack.

                              Compromising hardware usually requires compromising every instance of the hardware manufactured. This makes it much more likely to be discovered and it cannot be cleaned up later. When it is discovered, it shows intentional bad faith from the manufacturer since multiple engineers would have to be aware of the compromise to get it in. You also have to know well in advance that somebody you’ll want to target is going to use the hardware.

            2. 2

              A good point, because those are frankly the firmwares whose source code I’d most want, so they can be verified as safe/correct. That would make the chain of trust far more useful.

              1. 4

                It makes it both easier to break and easier to exploit – and if an exploit is found, you’re still stuck waiting for the vendor to patch.

                Firmware vendors are historically slow to patch, so in practice, this just means you spend more time aware that you’re vulnerable, aware of a fix, and unable to do anything.

                1. 2

                  Security by obscurity never works well. If people start finding more crap in a vendor’s firmware, the vendor is more likely to clean up their act and offer better update mechanisms, or lose sales.

                  …I know, I know, I’m an idealist.

                  1. 2

                    Security by ossification isn’t much better; offering update mechanisms that let users actually control the firmware is what matters here if you want to improve security.

                    1. 3

                      FWIW you both make good points. Security researchers today do inspect open source firmware projects and they do report problems upstream, and vendors do deploy fixes.

                      I agree it’s important that hardware owners can build and deploy their own firmware and fixes independently of the vendor, though. One reason this might not happen today is that it doesn’t fit the “secure boot” security model many devices use (and which we’re all generally confident in).

                      If I were personally[0] designing a RoT for measuring firmware today, I would use the technique described by @Loup-Vaillant here:

                      https://lobste.rs/s/5awvqj/fixing_tpm_hardware_security_modules

                      [0] meaning speaking only for myself, not on behalf of any project I happen to work on.