1. 32
  1. 15

    Thought experiment

    What if this signed firmware were not stored in P/ROM on the chip, but were instead equivalently implemented in actual logic gates (as the rest of the chip is)?

    Does that make things better (no firmware blob), do nothing at all (equivalent problems), or make things worse (can’t be replaced ever)?

    Now think the other way. What if we take some chip functionality that is currently implemented as logic gates, but turn it into signed firmware that other parts of the chip interpret. Is that better, the same, or worse?

    Big chips (modern MCUs & processors) have a lot of logic that is just as secret as firmware, or more so.

    Context: I’m an electronics enthusiast. I always find it interesting to see a focus on non-free firmware, perhaps because it’s an easier target. It’s cheaper to implement a lot of things as rewritable firmware because it’s easier to fix if you make mistakes, but the exact same results can always be had without needing something called firmware. This tradeoff is considered by the chip designers for a lot of parts and features.

    1. 7

      I think people largely object to signed firmware on ‘method’ grounds (it would be cheaper to build without requiring signing; the hardware was made more expensive in order to make it harder to use).

      1. 3

        I think so too.

        My other thought is that ‘method’ works backwards too: if a vendor wanted to make their devices more open then the easiest first step might be to open/unlock their firmware. I.e. firmware is the lower-hanging fruit, so it’s what privacy-minded people focus on.

        Firmware is a good start, but having free firmware does not make the hardware free. We need free hardware too.

      2. 5

        Yes! Very good point. The conceptual boundary between hardware and software is rather fuzzier than most software people realize. At one extreme we have completely free RISC-V processor RTL that can be run on a big-enough FPGA… but the big FPGA makers encrypt their formats to protect the IP that they also sell. You can contribute to a reverse-engineering effort if you have the right FPGA sitting around.

        1. 1

          Fuzzy is a good description.

          Running free software/designs (like a RISC-V core) on an opaque non-free system seems ironic, but I guess that’s what we already do on our desktop machines :)

          Edit addendum: it’s also worth mentioning that CPUs are only one Turing-complete logic family. FPGAs (as you mention) and DSPs are also good examples. All of these are capable of performing each other’s tasks and emulating each other (with varying degrees of performance). Firmware is just another feature that can be used in this mix.

        2. 3

          Great point. I’d never thought about that before; I really appreciate you raising it. My first thought is that opaque logic is as much of a problem as opaque firmware, but I don’t know where I’ll land after digesting this. I suppose the complexity of the task being accomplished is a factor to consider, but of course since there’s no visibility into what the task is, it’s hard to actually consider that.

          1. 3

            Let’s say a manufacturer produces hardware that does a particular thing, and all the functionality is implemented in actual logic gates rather than firmware. They can’t afford to do anything too user-hostile, or nobody would buy the thing in the first place. If a bug is discovered, and a new version of the hardware is released that fixes it, the bug may be a hassle, but upgrading is also a hassle, so the new version still can’t afford to do anything too user-hostile lest the user switch to a different brand.

            Let’s say a manufacturer produces hardware that does a particular thing, and requires an unsigned, freely modifiable firmware blob. They can’t afford to do anything too user hostile, or users will just patch it out. If a bug is discovered, and a firmware update is released that fixes it, applying the update is much less of a hassle than living with the bug or migrating to a whole other system, and if the firmware update adds anything user-hostile it will be patched out pretty quickly.

            Let’s say a manufacturer produces hardware that does a particular thing, and requires a signed, proprietary firmware blob. The manufacturer can release the first version with very pro-user features, and then when a bug is found, release a new version of the firmware that fixes the bug but removes the pro-user features, or adds user-hostile features. Living with the bug is annoying right now, switching to a different manufacturer is annoying right now, but applying the firmware update is easy right now, even if it’s a bad deal in the long term, so almost every human will do that.

            Firmware, as a technology, has a lot of useful and interesting engineering trade-offs, but it can also change the power dynamic between vendors and consumers. For thousands of years, it was impractical (or even illegal) for a vendor to mess with a thing you bought after you bought it, but now firmware and wireless Internet make it practical and even easy, for good and for ill. It’ll be a while before society’s expectations of “fairness” catch up to technology, and it’ll be interesting to see where it settles.

            1. 2

              It’s much harder to implement complex functionality in hardware, and much easier to do so in firmware, so it’s not very likely there will be malicious behavior encoded in hardware. Your machine ceases to be under your control if you permit opaque firmware to run without restriction.

              1. 8

                so it’s not very likely there will be malicious behavior encoded in hardware

                Not at all. Malicious behavior does not need to be complicated, and in fact I’d argue it is far easier to hide if it is kept simple.

                Take for example a gigabit NIC: let’s assume that received bits are shifted into a 128-bit shift register on their way in. If a certain (hard-coded) unique pattern appears in those 128 bits, trigger a payload. If there is no big input shift register then you could piggyback on the CRC checksum circuitry instead.
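                The trigger described above is simple enough to sketch in a few lines. Below is a toy software model of it; all names and the magic value are made up for illustration, and a real implementation would just be a handful of comparator gates on the receive path:

```python
# Hypothetical sketch of the trigger described above: received bits are
# shifted into a 128-bit register, and a payload fires when a hard-coded
# magic pattern appears. The magic value is illustrative; this is not any
# real NIC's design.

MAGIC = 0x0123_4567_89AB_CDEF_FEDC_BA98_7654_3210  # attacker-chosen 128-bit pattern
MASK = (1 << 128) - 1

def feed_bits(bits, on_trigger):
    """Shift bits in MSB-first; fire on_trigger when the register matches MAGIC."""
    reg = 0
    for b in bits:
        reg = ((reg << 1) | (b & 1)) & MASK
        if reg == MAGIC:
            on_trigger()

fired = []
noise = [0, 1] * 100  # ordinary-looking traffic
magic_bits = [(MAGIC >> i) & 1 for i in range(127, -1, -1)]  # MSB first
feed_bits(noise + magic_bits, lambda: fired.append(True))
print(fired)  # → [True]: fires exactly once, when the full pattern has shifted in
```

                In silicon this is essentially free: a wide AND/XOR against a fixed constant, invisible unless you decap the chip and trace the netlist.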

                The payload could be something simple, like shorting the NIC’s input lines to something related to the memory bus (or even the PCIe bus), whatever is most convenient. The NIC would then start transmitting a (non-compliant) raw bitstream. Depending on the NIC implementation it might be easy to ‘accidentally’ frame this data up into proper Ethernet-compliant signalling too, but this wouldn’t strictly be necessary.

                Result: a way of stealing disk encryption keys from memory. All you need to do is compromise the local router (these tend to have garbage updates & security models across various/most vendors), send the 128 magic bits and sift through the massive reply.

                Alternatively the payload could be something that exploits existing firmware in ways that people (even with access to the firmware’s source code) could never predict. E.g. trigger an execution of the LAN boot ROM (switch the processor into real mode, jump?) whilst the OS is still running. Now you can deliver any payload over the network. A small amount of the running computer’s memory will get trashed and you’ll probably have to fight the interrupt handler, but if it works then you again get complete remote control.

                Alternative: Wifi chipsets instead of gigabit NICs

                The attack method is a bit different here. Visit a corporate environment with lots of bandwidth & a single brand of routers to make things easier (e.g. companies, universities, McDonald’s).

                Scanning the massive memory dump being transmitted for key strings (informant names, email addresses, website URLs, etc) would be easier than hunting for passwords; but still potentially just as damaging if you are a journalist.

                Your machine ceases to be under your control if you permit opaque firmware to run without restriction.

                Does a machine cease to be under my control if I permit opaque silicon to run without restriction? What’s the difference?

                If anything, firmware can be changed; silicon cannot.

                1. 1

                  Since you’re already close: my idea was to do similar pattern matching in the PCI or DDR subsystems. With IOMMUs, it might need to be in the memory unit. Local escalation through JavaScript or email via a pattern the backdoored unit sees the moment it’s sent to the processor: immediately execute what follows in kernel mode.

                  1. 2

                    The iOS security papers assume the baseband can be compromised. A group published a successful hack of one a while back, but it took a team and six months vs. an afternoon with DMA. Not perfect, but it certainly raises the cost of attacks.

            2. 5

              I wonder if the signed HDMI/DP driver is due to DRM restrictions, to prevent copying of something or other?

              1. 8

                It’s almost certainly done to enforce HDCP/DRM-related requirements, though it’s a very boneheaded way of doing it.

                According to the manual, this chip also has hardware Widevine DRM support, and a restricted-access “Security” manual, which probably relates to its various DRM antifeatures.

                1. 3

                  though it’s a very boneheaded way of doing it.

                  It could be a contractual requirement, e.g. “must be implemented as non-optional signed firmware”. IIRC you need to agree to certain things if you want to use the HDMI name and logo (possibly more?).

                  edit: Not defending it :D I know first hand how much signed-firmware is a PITA.

              2. 3

                If you don’t have the HDMI/DP functionality, is there any feasible way of using the machine still?

                I guess “the machine cannot be deblobbed” sounds true in the context of using all the functionality but you could get away with removing that if you don’t want it? Kind of like what Fedora would use for the MP3 converter and whatnot

                1. 2

                  I would think this is true, yes, if you’re OK with a headless system.

                  1. 3

                    Why headless? MIPI DSI and LVDS shouldn’t be affected by disabling the HDMI block. And for the Librem phone, I guess they would use one of these connectors.

                    1. 3

                      This is a fair point.

                      Personally the fact that the boot ROM has this antifeature in it is still annoying to me, but it doesn’t seem like it’s necessarily a dealbreaker for a design which doesn’t rely on HDMI or DP.

                      Note that the MNT Reform, which has been getting a lot of attention as an i.MX8-based design recently, claims to attach the display via eDP, so it seems safe to say it won’t be deblobbable.

                      1. 25

                        Hello! This is false. We run the display from MIPI DSI via an eDP bridge. There is no HDMI blob required for Reform. The main problem is the DDR PHY blob, but it is not signed. We have a disassembly (it’s ARCompact code). We are looking for the Synopsys databooks to be able to analyze it.

                        1. 4

                          Nice. I stand corrected.

                2. 3

                  Wonder if it actually checks the crypto or just does a string match on the issuer. Could you maybe try that?
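                  For illustration, here’s a toy sketch of the difference between the two checks. This is purely hypothetical: a real boot ROM would verify an RSA/ECDSA signature over the image, HMAC just stands in for that here, and all the names are invented:

```python
# Toy contrast between a naive issuer-string match and a real cryptographic
# check. Hypothetical: HMAC stands in for the asymmetric signatures a real
# boot ROM would use, and all names are invented.
import hashlib
import hmac

ISSUER = b"ACME Semiconductor"
SECRET = b"vendor-signing-key"  # stands in for the vendor's private key

def sign(image: bytes) -> bytes:
    return ISSUER + hmac.new(SECRET, image, hashlib.sha256).digest()

def naive_check(image: bytes, sig: bytes) -> bool:
    # Only matches the issuer string -- trivially forgeable
    return sig.startswith(ISSUER)

def real_check(image: bytes, sig: bytes) -> bool:
    expected = hmac.new(SECRET, image, hashlib.sha256).digest()
    return sig.startswith(ISSUER) and hmac.compare_digest(sig[len(ISSUER):], expected)

forged = ISSUER + b"\x00" * 32  # attacker pastes the issuer name onto garbage

print(naive_check(b"patched firmware", forged))  # True: the forgery passes
print(real_check(b"patched firmware", forged))   # False: the crypto catches it
```

                  If the ROM really did only the string match, any image with the right issuer bytes prepended would boot.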

                  1. 4

                    I can’t see any chance they’re that stupid; they know what they’re doing when they implement something like this. This doesn’t eliminate the possibility of more subtle vulnerabilities, though.

                    I don’t have/intend to procure any i.MX8M devices, so I don’t possess a copy of its boot ROM. Anyone want to dump it?

                    1. 6

                      Stupider things have happened.

                      1. 6