1. 20
  1. 22

    This post and some of the comments there make for some… pretty weird reading.

    To paraphrase the greatest philosopher who never lived, Bender J. Rodriguez, this is one of those cases where being technically correct isn’t the best kind of correct. Because, on the one hand, everything that mjg points out is true: Pluton is basically just another (general-purpose) security processor, which Windows uses like any other security processor at the moment. Except it’s built by Microsoft so they can better integrate it with their OS – which, honestly, if this were a Linux vendor and not Microsoft, is an idea we’d be cheering for. So admittedly, if you can still boot any OS you want on machines that have TPMs now, it’s a no-brainer that you should currently be able to do so on Pluton as well.

    But on the other hand…

    1. The “currently” part isn’t really how threats work. If you possess the capability to do something, that is the threat. Like, say, nuclear weapons (or chemical weapons, or mass surveillance legislation, or any other threat): if someone has them, that’s a threat, no matter how categorically they insist they aren’t going to use them and they’re built and operated for entirely different reasons (deterrence, medical studies, fighting organised crime). It’s certainly true that some threats are more unlikely than others, but realistically now, on a scale from “water pistol” to “nuclear weapons”, a TPM is so close to “water pistol” that the comparison is not even funny, and it’s also not something too many people other than computer nerds care about. The “purely theoretical” dial can’t go too high on this one.

    2. The reason why #1 happens, and what this analysis doesn’t take into account, is the fact that Microsoft isn’t a small FOSS team run by nerds. Not only does it have the capability to take a sound technical solution and weaponize it against anyone who threatens its bottom line, including those damn users – that’s basically its modus operandi. Current intentions, even when announced in official marketing documents, are largely meaningless. Not because this is some grand deception operation mounted by Satya’s elite information warfare squad but because it’s just how large companies work. The people who came up with Pluton, the people who wrote the marketing docs, and the people who will come up with other ideas about what to do with this general-purpose security processor aren’t the same people; they won’t have to talk to each other much (or, after a while, at all!) and they don’t need each other’s approval for anything. Hell, with enough backroom dealing they don’t even have to work for Microsoft. If Microsoft does decide to kick it up a notch and restrict booting other systems on Pluton-enabled hardware, what exactly do you think is going to happen – the Pluton PM is going to walk into some exec’s room and say come on, man, Pluton was meant to be better than that, and they’ll revert their toxic decision in shame and despair?

    That’s why the fact that some things happen one way or another right now are largely irrelevant for this discussion. Yes, if Microsoft wanted to pull the plug on other systems, they could do so now. On the one hand maybe it’s just not something they want. On the other hand maybe they just think they’re not in a good enough position for that right now. Pointing out all the ways in which they’re not in a good enough position for that right now has no bearing on their future plans.

    1. It also – equally… I don’t want to say naively, because it’s really too harsh a word and I don’t think the author is naive, so let’s say in equally unwarranted good faith – assumes that a general-purpose security processor with firmware update capabilities will be used in the same manner for, say, consumer devices as for COTS equipment destined for enterprise, federal government and military use. All the folks in that long second category want some of the good parts (like remote device attestation) but also obviously don’t want to put Microsoft in charge of their hardware. What the folks in the former category want isn’t really relevant because they don’t really have any strings to pull. Nobody says Microsoft has to ship them the same firmware.

    2. I realize that’s probably what the author mostly cares about, so I don’t want to say they’re wrong to equate software freedom with being able to boot any OS they want. But it’s worth remembering – in the context of things like device attestation – that, when it comes to consumer devices, Linux, BSD & friends users account for so little revenue that even annoying them is probably not important enough to make them come up on any agenda. But the ability to deny users the possibility to, say, sign in to their Windows computers is real and easy to enforce. There are probably more people who ended up signing in with Microsoft accounts because Microsoft kept mucking with the “Use an offline account” button than there are Linux users in the world. Their freedom to use computers in any way they like (as long as they run Windows, which they’re presumably fine with) is also a thing, and while it’s not currently in a worse state than six months ago, it can now be more conveniently subverted through yet another technical method.

    (All this is from a mostly theoretical standpoint, I might add – I’m personally not opposed to locked-down devices (I don’t really want one, but I understand why some people, and some orgs, do want one). Also, I think the free software community does a good enough job of sabotaging free software on its own that it’s hardly worth Microsoft pouring money into hardware to threaten it. If Pluton is a threat to software freedom (and I think it is, albeit from other angles), that’s probably not its main goal. This isn’t the day after the Halloween documents – Microsoft, um, sorry, MICROS~1 may still be evil trolls, but they’re not the same evil trolls from 23 years ago, the way you do evil has changed, and I doubt they hold grudges to the point of dragging the same thing out for twenty bloody years.)

    1. 3

      Thanks for writing down my thoughts. I agree with 99% of this. I can’t remember who wrote it recently in a completely different context, but it matches here: Don’t build capabilities you don’t want to use, because your successor will.

      The ironic thing is that if Intel+MS try to lock down platforms, M1 Macs and Apple may become the open (as in not locked) alternative. And if that happens my brain will melt from the cognitive dissonance.

      Also, if the worst case really does come to pass, we’ll have learned that the last E step of EEE started with WSL.

      1. 5

        If Apple says “the Mac is a general purpose computer that will boot unsigned kernels”, why on earth would Microsoft ever consider trying to lock down the PC?

        Microsoft has already tried making an iPad-like locked-down platform on the side (“Windows RT”). It failed miserably and the current Windows-on-ARM has pretty much nothing to do with it (e.g. the Surface Pro X is also positioned as a general purpose computer and has the exact same Secure Boot policy as x86 PCs).

        1. 5

          I think one of the reasons why these things are so controversial is that we keep coming at it from the wrong angle, like someone’s trying to slap a TPM on a PC from the nineties in order to prevent the kind of threats Microsoft needed to prevent in the nineties.

          But I don’t think this is the right context to think about these things. I don’t think Microsoft is trying to be the biggest OS vendor for PC compatibles anymore – that hasn’t been a high-margin market in at least 10 years, if not more. They’ve long moved on to an array of other things, like providing cloud infrastructure and selling clients of various thicknesses for their (and others’) cloud-powered services.

          So – and I think this is the part that I should’ve spelled out more clearly in my previous post, but I kinda wrote it in a rush – IMHO, from a largely theoretical standpoint, this is a “threat”, the way the article puts it, but one whose target is already crossing into the “theoretical”. It’s a threat to a model of computing (or a model of using and managing computing devices?) that time has rendered barely relevant outside a few niches (like most software development) which any platform vendor worth their salt would be trying to facilitate anyway. So it’s not remote as in “it’s unlikely they’d ever do that”, it’s remote as in “it’s unlikely they’d ever need that”.

          Locking down PCs for OS exclusion purposes really doesn’t sound like the kind of stuff Microsoft would be throwing money at in 2021, especially not custom-hardware-development kind of money. What would it earn them, ensuring nobody could run Linux on those PCs? There’s barely anyone who wants to do that in the first place, and they’ve even got a good chunk of that crowd covered with WSL2. Maybe back in 1998 they would’ve been interested, but now, when “the year of Linux on the desktop” is a joke you have to explain to junior devs, I doubt it matters. The world has moved on, and the fact that Microsoft is not only not bankrupt but doing pretty damn well is good evidence that they’ve moved on as well.

          And this is only the charitable tip of the iceberg, revolving around common-sense arguments, the kind mjg makes in this article. Most of the “but muh software freedom” talk surrounding TPMs is very much FUD, as @david_chisnall called it here.

          There’s obviously all sorts of other applications that are worth considering here, like DRM and media content delivery. That’s a whole other story but I think a good chunk of the free software community is already so culturally estranged from these matters that there’s hardly anyone left there to debate it.

        2. 4

          The ironic thing is that if Intel+MS try to lock down platforms, M1 Macs and Apple may become the open (as in not locked) alternative.

          I don’t pretend to know all the details, but Ariadne Conill states that the M1 is already reasonably open:

          https://ariadne.space/2021/10/19/trustworthy-computing-in-2021/ (scroll down to the section titled “Apple MacBook Air M1”)

          1. 2

            It is reasonably open now. I meant that consumer Intel platforms could become more closed than M1.

      2. 19

        [ Disclaimer: I work for Microsoft Research, I am not involved in any of the Pluton-related product processes, though I am a hardware-security geek and so have been paying attention to some of these things internally ]

        There’s a phenomenal amount of FUD being written about Pluton in the last few weeks (SemiAccurate was one of the worst offenders here, which was a shame because they usually have a much higher standard of journalism). A lot of this seems to be coming from the fact that most of our public information is in the ‘Pluton is amazing and solves all of teh securities!’ category, with very little on what Pluton actually is. I’ve actually read the Pluton specs and talked to some of the folks that worked on it, so let’s see if I can do a bit better without sharing anything confidential:

        Pluton is a hardware root of trust. There are quite a few of these in the industry; you can license IP cores from a few vendors that fill this role in various different ways. Google has one called Titan, with an open-source version called OpenTitan (now maintained by the lowRISC Foundation). Apple’s Secure Enclave is one. Basically every commodity SoC except the ones that Windows runs on provides one.

        The first thing you want a hardware RoT for is secure boot. Secure boot is usually implemented with some variant of the DICE protocol. This is quite complex but the basic idea is fairly simple:

        One of the core building blocks is the Platform Configuration Register (PCR). These are registers that effectively compute running cryptographic hashes: you write data to them and you read back the hash of all of the data that has been written to them. This means that, unless you can break the underlying hash algorithm, you can’t get them into a given state without writing the same set of values to them in the same order.
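
        A minimal sketch of the extend operation (assuming SHA-256 – real TPMs support several hash banks, and the measurement values here are purely illustrative):

            import hashlib

            def pcr_extend(pcr: bytes, measurement: bytes) -> bytes:
                # Extend: new PCR value = H(old PCR value || measurement).
                return hashlib.sha256(pcr + measurement).digest()

            pcr = bytes(32)  # PCRs start zeroed at reset
            pcr = pcr_extend(pcr, hashlib.sha256(b"second-stage loader").digest())
            pcr = pcr_extend(pcr, hashlib.sha256(b"kernel image").digest())
            # Reproducing this exact value requires extending the same
            # measurements in the same order; a PCR cannot be set directly.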

        When you do secure boot, you have a chain of boot loaders, each of which adds a measurement of the next one to a PCR and then loads it. This means that even if you compromise a later bootloader, it can’t pretend to be a different one. For example, if UEFI boots GRUB and I exploit a GRUB vulnerability then I can’t fake the PCR values of NTLDR or Linux booting directly from UEFI, but I can fake the PCR values of GRUB loading anything (GRUB can load a malicious kernel and pretend that it’s loaded a trusted Linux build).

        The first stage of this process is therefore the most important. If you can compromise that then you can fake any later PCR state. A hardware RoT typically has a tiny bootloader in ROM that reads a second-stage loader from persistent storage, verifies its signature, and boots it. If the ROM version has a vulnerability then you have a big problem, and so it is kept trivially small (on the order of tens of instructions). The second-stage loader may be compromised (it’s also quite simple, but not that simple) and so the hardware also needs some anti-rollback support. If a vulnerability is found in the second-stage loader, you can issue an instruction to the hardware RoT that tells it to blow a fuse in a bank that provides a unary counter. After this, it will refuse to launch an older version (conceptually, there’s a version baked into the second-stage loader image; in practice, this is done with key-derivation functions and is sufficiently complex that if I tried to explain it from memory then I’d definitely get it wrong). Once you get to the second-stage bootloader, the equivalent anti-rollback protection for the third-stage loader (GRUB, NTLDR, whatever) can be stored in read-write non-volatile storage accessible to the loader, so you can prevent a compromised version of GRUB from being loaded without consuming any non-recoverable resources.
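
        A conceptual sketch of the unary counter (as noted above, the real scheme uses key derivation and is considerably more complex; everything below is illustrative):

            FUSE_BANK_SIZE = 64

            def min_allowed_version(fuse_bank: list[bool]) -> int:
                # Each irreversibly blown fuse raises the version floor by one.
                return sum(fuse_bank)

            def may_launch(image_version: int, fuse_bank: list[bool]) -> bool:
                # Too-old images are refused even if their signatures verify.
                return image_version >= min_allowed_version(fuse_bank)

            fuses = [True, True] + [False] * (FUSE_BANK_SIZE - 2)  # two revocations
            assert may_launch(2, fuses) and not may_launch(1, fuses)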

        Once you have this kind of mechanism, you can use the PCR states to drive ACLs for keys. Pluton has storage for a set of keys, hardware crypto engines for {de,en}crypting data, signing data, and deriving keys. Each key has an ACL with a fairly rich (by hardware standards) policy language. You can control whether keys can be extracted (for example, for disk encryption keys that you want to provide to the kernel, but only to a trusted kernel) and what operations they can be used for (including, if they are used as input to a KDF, what the derived keys can be used for).
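
        A hypothetical sketch of what such a PCR-gated key ACL amounts to (the field names are invented; Pluton’s actual policy language is richer than this):

            import hashlib

            KEY_STORE = {
                "disk-key": {
                    "secret": bytes(32),
                    "extractable": True,  # may be released to the kernel...
                    "policy_pcr": hashlib.sha256(b"trusted boot state").digest(),
                    "allowed_ops": {"export"},
                },
            }

            def release_key(key_id: str, current_pcr: bytes) -> bytes:
                entry = KEY_STORE[key_id]
                # ...but only when the PCRs prove the trusted kernel booted.
                if current_pcr != entry["policy_pcr"]:
                    raise PermissionError("PCR state does not satisfy key policy")
                if not entry["extractable"] or "export" not in entry["allowed_ops"]:
                    raise PermissionError("key may not be exported")
                return entry["secret"]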

        You can also use this to provide remote attestation by asking the hardware RoT to provide a cryptographic signature over the PCR state with some internal keys. Typically, each device has a random key burned into fuses on first boot (Pluton does this, I believe some other RoTs use PUFs). This can then be used with a key-derivation function, the public key exported, and then signed so that, for example, AMD can sign a certificate that verifies that this is a Pluton core in a real AMD chip. This then forms a signature chain where AMD says this is a real Pluton core, Pluton says that it’s loaded a specific version of the second-stage loader, the second-stage loader says that it’s loaded a specific GRUB instance, GRUB says that it’s loaded a specific build of Linux + an initial RAM disk, and between them all you know exactly what kernel your device is running and can authorise it to connect to your service based on this.
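
        A toy version of the quote at the bottom of that chain, using Ed25519 as a stand-in for whatever algorithms the hardware actually uses:

            from cryptography.hazmat.primitives.asymmetric.ed25519 import (
                Ed25519PrivateKey,
            )

            # Stands in for the per-device key derived from the fused secret;
            # its public half would be certified by (in this example) AMD.
            device_key = Ed25519PrivateKey.generate()

            def attest(pcr_state: bytes, nonce: bytes) -> bytes:
                # Sign the PCR state plus a verifier-chosen nonce; the nonce
                # stops an old quote from being replayed.
                return device_key.sign(pcr_state + nonce)

            sig = attest(bytes(32), b"fresh-nonce")
            # The verifier checks the signature with the certified public key.
            device_key.public_key().verify(sig, bytes(32) + b"fresh-nonce")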

        The remote attestation is most useful (or, at least, most interesting to me) in combination with cloud services and Confidential Computing hardware such as Intel TDX, AMD SEV-SNP, or Arm CCA. With these, you provide an encrypted (or, at least, signed) VM image to the cloud provider. It is loaded into isolated memory that the hypervisor can’t see, and the hardware RoT provides a signature chain that lets you, the remote customer, validate that it’s really running in an environment that the cloud provider can’t access or tamper with, before you provide the VM with any confidential data.

        In most PCs, this is roughly equivalent to a TPM and in the AMD version I believe Pluton just exposes a TPM-compatible interface. There are two common ways of implementing a TPM. The first is as a separate chip. This has an obvious problem: you can lie to it. There are wires going from the CPU to the TPM and if you want to set a PCR to a particular value then you just have to intercept the messages sent by a legitimate boot that sets it there and then replay them from a separate IC. If you’re using it for hard-disk encryption then an attacker first boots your PC and records the traces sent to the TPM as the OS boots. Then they reset the TPM and play back the same sequence. Now they have your disk encryption key. This requires specialised (though fairly cheap) hardware and a targeted attack, so it probably doesn’t matter for most people’s threat models. The other approach is to implement the TPM in firmware on the CPU. Unfortunately, these are very hard to defend against side channel attacks that can leak the keys.
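
        To illustrate the replay problem: a discrete TPM only ever sees the byte stream on the bus, so anything that can reproduce that stream reproduces the PCR state (minimal sketch, same extend operation as above):

            import hashlib

            def pcr_extend(pcr: bytes, measurement: bytes) -> bytes:
                return hashlib.sha256(pcr + measurement).digest()

            # The interposer records the measurements sent during one
            # legitimate boot...
            recorded = [hashlib.sha256(b"loader").digest(),
                        hashlib.sha256(b"kernel").digest()]

            # ...then resets the TPM and replays them from its own IC.
            replayed = bytes(32)
            for m in recorded:
                replayed = pcr_extend(replayed, m)
            # `replayed` now matches the legitimate boot, so any key sealed
            # to that PCR state will be released.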

        To support all of this, Pluton has some storage for keys, a cryptographically secure hardware random number generator and some crypto engines. Getting the random number generator right is, I’m told, one of the reasons why you want something well tested rather than rolling your own: EDA toolchains will optimise it away if you don’t provide exactly the right configuration. If you’re lucky, you’ll get something that always gives you ones or zeroes (which is definitely not random and so easy to detect). If you’re unlucky then you’ll get something that gives you the value of some other register or wire on the chip and won’t discover that an attacker can compromise the random numbers until much later.

        Having something like this in your computer is very important for end-to-end security for web things. If you’re doing WebAuthn, then you really want something that can hold keys, refuses to release them, and uses them only for signing. If an attacker compromises your OS, then they can mount online attacks against your remote services but they can’t exfiltrate your keys and use them from other systems. Most mobile phones and more modern Macs have hardware that can do this already. Windows uses a TPM if one is available, but this is vulnerable to any of the above attacks. If you’re using passwords, then a compromise of your OS kernel, your browser, your password manager, or anything else that has access to the passwords allows the attacker to copy the password and then log into the service later from a different machine. Note that doing this doesn’t require unbounded storage space in the TPM – you typically provide a single key that’s used for WebAuthn and then, for each site, use a KDF from that key and some site-specific information, rather than a separate key in the TPM for each site.
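
        A sketch of that per-site derivation, using HMAC as the KDF (a stand-in for whatever KDF the hardware actually implements; in the real design the root key never leaves the TPM):

            import hashlib
            import hmac

            ROOT_KEY = bytes(32)  # in reality held inside the TPM, never exported

            def site_signing_key(rp_id: str) -> bytes:
                # One stored root key plus site-specific info yields a distinct,
                # reproducible key per site – no per-site TPM storage needed.
                return hmac.new(ROOT_KEY, rp_id.encode(), hashlib.sha256).digest()

            assert site_signing_key("example.com") != site_signing_key("example.org")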

        Pluton is already widely deployed. Every Xbox since the Xbox One has had one. It has a load of countermeasures to protect against attackers with physical access (the exact ones that are present are, I believe, confidential). I believe these mitigations cover the attacks used a few months back to compromise the AMD Platform Security Processor (PSP), which made it possible to compromise AMD’s SEV-SNP (if you have physical access to an AMD Milan CPU, you could exfiltrate the keys that would let you run an emulator on a different machine that would provide attestations as if they came from that CPU). It’s also used in Azure Sphere. People with physical access have been trying to attack it for a while.

        Pluton in AMD chips is basically a good implementation of a TPM. It will probably come pre-loaded with a public key to verify Windows boot images, but I don’t believe there’s anything preventing users or manufacturers from loading additional keys (as I recall, the Secure Boot spec mandates that users must have a way of doing this). The key used to verify the integrity of your bootloader will affect the attestations that you get, but that’s a good thing: If you’re running Linux or FreeBSD on your machine, you want to ensure that the TPM won’t release your LUKS / GELI keys to Windows, just as if you’re running Windows then you want to ensure that the TPM won’t release your BitLocker keys to Linux / FreeBSD.

        1. 8

          Thanks for your detailed and factual reply. I agree with the need for trusted computing in general (and have worked on some of the very zero-trust problems you mention, so I have a deep appreciation for what this enables); I think the problem is that the “client” for this device is The Big Guys™ and not us small people, though. I think I speak for a group of people that form a majority on niche websites like this one but are otherwise a completely insignificant minority in the grand scheme of things so far as these big players are concerned: some of the things that TPM does in practice (not just can do in theory) are fundamentally at odds with “the hacker ethos” that makes us enjoy working with computers in the first place.

          For example:

          If you’re running Linux or FreeBSD on your machine, you want to ensure that the TPM won’t release your LUKS / GELI keys to Windows, just as if you’re running Windows then you want to ensure that the TPM won’t release your BitLocker keys to Linux / FreeBSD.

            Yes, but look at it from the perspective of a certain class of users: what if I want my FreeBSD to be able to access my otherwise encrypted Windows partition? What if I want to use the (now unsigned) graphics card drivers that I have personally modded to reduce the fan noise because I know that for my personal use cases the reduced cooling is sufficient? What if I don’t want real-time Windows Defender scanning enabled at all times because I’m a developer and it slows down compilation ten-fold, and I am willing to “attest” that I won’t run shady software so you don’t need to scan my device? These are all things that TPMs, in combination with a secure boot chain extending all the way to the OS and all loaded drivers, can and do prevent. It’s no longer my PC despite the fact that I built it myself from components I hand-picked (and have the heatsink scars to prove it); the PC is now hardware that I bought and paid for but can only use if the powers that be at Microsoft + Nvidia + AMD/Intel + Netflix/whomever agree that my use case is “sufficiently valid and worth supporting.”

          Yes, an obvious answer is “use a different OS” but then Nvidia doesn’t provide hardware accelerated drivers for that platform, or they do but there isn’t a strong enough DRM (or there’s now a lack of remote attestation guaranteeing the existence of that untampered DRM) for Netflix to allow me to watch the titles I pay a monthly access fee for because their licensors insist that they can’t stream a movie already freely available on TPB from before it was even released on Blu-Ray if my viewing environment isn’t “provably secure enough.”

          1. 7

            Yes, but look at it from the perspective of a certain class of users: what if I want my FreeBSD to be able to access my otherwise encrypted Windows partition?

            If you want two operating systems to be able to share the same key then you will need both to agree to permit it. I don’t know if Windows will have a mechanism for this. It’s very difficult to make secure because now Windows and FreeBSD are both in your TCB. Allowing your FreeBSD install but not a random FreeBSD boot disk to access your Windows partition is a difficult policy to write.

            What if I want to use the (now unsigned) graphics card drivers that I have personally modded to reduce the fan noise because I know that for my personal use cases the reduced cooling is sufficient?

            This is a fun one because it touches a load of different things. Will the kernel load your unsigned device drivers? Windows will (if you jump through enough hoops), but some DRM-related things are then disabled (because they rely on being able to trust all software running in ring 0 and you could trivially bypass them if you could run arbitrary code in ring-0).

            My personal belief here is that the right fix here is legislative, not technical. DRM, to me, is vigilante action. It enforces a tighter set of constraints than copyright law permits in an extra-legal fashion and in such a way that causes direct harm to individuals and society. If I were drafting laws, shipping a product with DRM would be classed in the same way as connecting your front-door handle to the mains while your shop is closed. There’s no technical fix that prevents people shipping these and, to date, all legislation related to DRM has taken the opposite view to mine. Hopefully a new generation of legislators who grew up understanding that digital data is not the same as a physical object will fix this.

            What if I don’t want real-time Windows Defender scanning enabled at all times because I’m a developer and it slows down compilation ten-fold, and I am willing to “attest” that I won’t run shady software so you don’t need to scan my device?

            Well, today you can exclude build directories from WD scanning (and it’s a really good idea to). The more fun thing here is when you couple it with attested and reproducible builds: if you run your compiler and linker in an isolated VM (ideally a confidential VM) that captures the git hash of the source and the hashes of all dependencies in the chain, then you can stamp this into the final binary and end up with a signed binary that attests to its provenance. You could then have WD policies that permit you to run anything you compile yourself without scanning, or things compiled by you from specific git trees, but with aggressive scanning for everything else.
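
            A hypothetical sketch of what such a provenance stamp might capture (the command and hash layout are invented for illustration):

                import hashlib
                import subprocess

                def provenance_stamp(dep_hashes: list[str]) -> str:
                    # Record the exact source tree plus every dependency so the
                    # final binary carries a verifiable build record.
                    git_hash = subprocess.check_output(
                        ["git", "rev-parse", "HEAD"], text=True).strip()
                    h = hashlib.sha256(git_hash.encode())
                    for dep in sorted(dep_hashes):  # canonical order
                        h.update(bytes.fromhex(dep))
                    return h.hexdigest()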

            These are all things that TPMs in combination with a secure boot chain extending all the way to the OS and all loaded drivers can and does prevent. It’s no longer my PC despite the fact that I built it myself from components I hand-picked (and have the heatsink scars to prove it); the PC is now hardware that I bought and paid for but can only use if the powers that be at Microsoft + Nvidia + AMD/Intel + Netflix/whomever agree that my use case is “sufficiently valid and worth supporting.”

            Again, I think this is a problem with a legislative, not technical, solution. In the cloud computing scenario, there is enormous value in the computer (or, at least, an isolated VM on the computer) being under the complete control of whoever is running the software on it, not whoever bought the hardware. In the client device scenario, there are some valid use cases for parts of the computer not being under the complete control of the software its owner deployed (because that software may actually be under the control of an attacker), such as preventing my web browser from knowing the signing key to my WebAuthn identity, preventing my GPU driver from knowing my disk encryption key, and so on. Unfortunately, it’s impossible to build a technical solution to the first two of these without it also being possible to implement the more dubious things.

            This makes any trusted computing system a dual-use technology. We have a lot of legislation already dealing with dual-use technologies in various different domains. Cars, drugs, even cleaning products and fertiliser can be used for a lot of positive purposes, but also as weapons. We frame laws to try to reduce the chance of people using them as weapons and to make sure that it’s easy to at least do post-facto attribution if they do.

            Yes, an obvious answer is “use a different OS” but then Nvidia doesn’t provide hardware accelerated drivers for that platform, or they do but there isn’t a strong enough DRM (or there’s now a lack of remote attestation guaranteeing the existence of that untampered DRM) for Netflix to allow me to watch the titles I pay a monthly access fee for because their licensors insist that they can’t stream a movie already freely available on TPB from before it was even released on Blu-Ray if my viewing environment isn’t “provably secure enough.”

            There’s a simple legislative fix possible here: Make copyright and DRM an either-or proposition. If you distribute your movie with DRM then you give up all protections under copyright law. If someone bypasses the DRM then the work enters the public domain. Enact that and I bet that DRM would disappear within a week.

            If that isn’t possible then antitrust investigations into DRM purveyors would be the next best thing. Apple killed DRM on music by forcing a market antitrust reaction. The iTunes Music Store was the only way of getting DRM’d music onto iPods. iPods had something like 70% of the portable music player market (and a large share of the market among people who had disposable income to spend on music). If you wanted to sell music to these people then you had the choice of either paying Apple a 30% cut of revenue or selling without DRM. The music industry decided that option 2 was preferable (and then discovered that DRM-free music actually increased sales overall). DRM on media distorts the market in client devices by preventing playback on non-authorised platforms. Existing antitrust laws cover this kind of market distortion, it just requires an FTC chairperson who is willing to enforce it. The current holder of that position is the first one in my lifetime who I’ve thought might actually be willing to do so.

            Corporations are very good at responding to liabilities. If distributing a system with DRM that restricts what an end user can do and infringes on fair use / fair dealings (e.g. limiting the playback platforms) risks antitrust investigation and significant damages, DRM will go away very quickly.

            The weird thing at the moment is that the companies insisting on DRM for movies are the copyright holders but the companies that benefit from DRM are the owners of the streaming platforms. I don’t think Universal, say, benefits from Netflix entrenching its market position, yet they’re happy to enforce contract requirements that do exactly that. If an antitrust investigation targets the movie studios and points out to them that they’re not actually the ones benefiting from the market manipulation that they’re engaging in then I wouldn’t be surprised if they reversed their conditions very quickly.

            1. 3

              I like your suggestions! The tech has definitely outpaced the legislature on this though, and it concerns me that our industry doesn’t really know how to take that into account (not just TPM, see also facial recognition and other malicious uses of AI). It’s one thing for rogue parties to be pushing this stuff, but for the leading tech companies to do the same is troubling. The average home user stands to lose much more than they gain (perhaps without their realization) from having all this tech on their PC in service of others.

        2. 6

          Edit: it’s been pointed out that I kind of gloss over the fact that remote attestation is a potential threat to free software, as it theoretically allows sites to block access based on which OS you’re running. There’s various reasons I don’t think this is realistic - one is that there’s just way too much variability in measurements for it to be practical to write a policy that’s strict enough to offer useful guarantees without also blocking a number of legitimate users, and the other is that you can just pass the request through to a machine that is running the appropriate software and have it attest for you.

          Netflix already prevents (not because they care but because their licensors do) their highest definition streams from being accessed on a “non-compliant” browser that lacks the DRM. They also don’t provide a compliant browser for all operating system makes/models/versions, and that’s without widespread remote attestation.

          As to the second point… Does the author not realize that remote attestation could (does?) prevent exactly the workaround he suggests?

          you can just pass the request through to a machine that is running the appropriate software and have it attest for you

          No, no you can’t. That’s the whole point of a “trusted computing environment,” a “verified software stack,” and “remote attestation” in the first place. A suitably locked down machine that passes remote attestation might be prevented from doing exactly that in combination with other software/hardware protections. E.g., DRM is supposed to prevent the user from decoding a Netflix stream and forwarding it to this other machine that wouldn’t pass remote attestation, because remote attestation could be used to require that a compliant, unpatched, locked-down browser render the hi-def stream directly onto a surface provided by the verified, version-locked drivers of your provably secure, signed GPU firmware, viewed directly on an HDCP-protected (again, attested) display. There are holes in how this works today, but universal TPM deployment, secure boot, and remote attestation, combined with the existing (and very much in use) hardware/software DRM schemes, would go a long way to making that impossible.

          1. 4

            This is a nice counter to some of the more panicked readings I’ve seen - thanks for posting.

            Until we know more, I’m inclined to agree with the author’s general position: Pluton at the moment looks like a different type of TPM, and TPMs haven’t killed free software yet despite similar concerns having been raised about them in the past.

            Honestly, I’m more concerned that Pluton will take off, but be proprietary and locked down so that nobody who doesn’t run Windows will see the benefits. That would be a shame.

            Just to dilute the optimistic tone: I’m also looking forward to seeing the remote update mechanism get compromised for some really exciting, persistent attacks. Taking bets on whether researchers publish an attack before a nation state gets caught exploiting it in the wild.

            1. 3

              I remember back when Pluton was called Palladium…

              1. 3

                It’s not even really “a different type” at the moment: the current production firmware offers the exact same TPM 2.0 interface as the other stuff. Unless the transport protocol is different (HI GOOGLE), it should Just Work™ for your SSH needs or whatever.

                compromised

                Heh, yeah, one fun worry is that with AMD, Intel and Qualcomm all using Microsoft’s firmware, one compromise would affect all three vendors, so buying devices from different vendors is no longer meaningful.

              2. 4

                My humble take is: the problem is not Pluton the processor, but Pluton’s firmware.

                If the firmware were FLOSS and people were allowed to load (using hw-specific keys held by the owner of the device) their own version of the firmware to the embedded security processor, all these worries would be dispelled.

                Yes, remote attestation would not work anymore for DRM, but that never was the intent, right?

                1. 3

                  The author found some marketing slides, and thinks they know what this thing is? Oh please.

                  The whole reason I’m alarmed by this move by microsoft is that it’s a black box, and no amount of marketing slides is going to tell us what this thing actually is, and what it can/can’t actually do, and so on. I also haven’t seen anything in the last ~30 years that convinces me that I should trust microsoft completely based on their word alone, which is literally all the information we’ll get about this pluton thing… (unless useful details are leaked, or forced out of microsoft by a court (lol))

                  1. 21

                    The author found some marketing slides, and thinks they know what this thing is? Oh please.

                    This could apply to a random person, but Matthew Garrett is basically one of the people most heavily involved in Secure Boot, UEFI and a few other things in the Linux kernel. When he’s guessing things based on a marketing slide deck about security chips, I think it’s worth at least considering the implications of his guesses.

                    1. 0

                      hmm.. no.. it doesn’t really matter who it is, they have no more information about it than the rest of us in this case. Being a celebrity doesn’t change that fact.

                      1. 7

                        It’s not about celebrity status. It’s about having enough experience with tech that you can point out actually possible/likely tech behind an announcement. It’s about knowing the vocabulary used by previous similar announcements. It’s about being involved in development and seeing trends and what’s the likely next step planned / element missing in a given architecture.

                        He doesn’t even try to sell any information as certain. The whole post starts with: “that’s hard because Microsoft haven’t actually provided much in the way of technical detail. The best I’ve found is a discussion of Pluton in the context of Azure Sphere, Microsoft’s IoT security platform.”

                        1. 1

                          Yeah, that’s entirely my point. Matthew has no idea what this thing is, really. No one does, because microsoft is being vague on purpose. And yet, Matthew’s post title is fairly certain that it’s “not a threat”, which makes no sense.

                    2. 4

                      The whole reason I’m alarmed by this move by microsoft is that it’s a black box, and no amount of marketing slides is going to tell us what this thing actually is, and what it can/can’t actually do, and so on.

                      This already applies to your CPU as it is though, and if you don’t trust your CPU vendor with what they put in their CPUs, why were you implicitly doing that until something with Microsoft’s name on it came up? Do you think that AMD doesn’t know exactly what it can do?