1. 35
  1. 13

    Once again this comes back to the social problems of trust and hierarchy. I’m not against cryptographically secure hardware, or even necessarily some of the institutional structures the author speculates about. The encrypted document system he describes could work really well for something like confidential medical records, for example. But do I trust Microsoft, or for that matter any government that could potentially compel Microsoft to surveil their users, to be the stewards of that confidentiality? Absolutely not.

    This is a systemic problem that will require solutions with technical, economic, and political dimensions.

    1. 12

      Every new wave of security features causes a cry of IMPENDING DOOM like this from someone from the Linux community, and yet somehow we persevere.

      I don’t have any in-depth knowledge of Pluton, but is it far-fetched to think this will be more of the same?

      1. 37

        Or, depending on your perspective, every new warning sign causes a cry of IMPENDING DOOM like this, and we keep ignoring them.

        The goalposts have already moved somewhat: as I’m typing the top thread right here is a bunch of people wandering off-topic to talk about how DRM is good actually, while down below a Microsoft employee is lecturing us about how it’s a basic requirement for any modern computer to run only software signed by Microsoft (but don’t worry, Microsoft currently deigns to sign software for some Linux distributors). I think both of these opinions would have been considered laughable a decade ago.

        I think the reason for the gap between the reactions you’re seeing from commentators and the practical results you’re experiencing is optimism. Microsoft effectively controls what software many of us can run on our PCs now, and this step is likely to extend their reach. Yes, they currently give us permission to run quite a lot of software, but they could change their minds at any time. The permissiveness of today’s policy translates into hardware which lets us execute arbitrary code with a little effort, but auto-updating firmware seems like an effective way to close that loophole.

        So some people are angry because Microsoft is establishing more control of the platform, while your world keeps turning because they haven’t exercised that control in ways that interfere with you. Maybe they never will. I’m not that optimistic.

        1. 17

          at the risk of invoking me-too flags, thank you, I think you perfectly captured my sentiment with this comment. microsoft absolutely cannot be trusted to continue letting people use their[0] computers as they see fit in the future.

          1. “their”… who owns it? you paid for it, but I’d argue that microsoft owns it.
          1. [Comment removed by moderator pushcx: Please don't dismiss other people's opinions like this. You can repost without that, but please reduce your hyperbole overall in the thread.]

            1. 5

              For me the word “security” is essentially synonymous with “control” or “access control”. The reason this fight keeps happening is that the “security” referred to is usually predicated on a centralized root of trust being “secure”. However, that is a systemic risk, and gp is making a fair criticism. Everyone is just dead set on solving the wrong problem while the people who care are literally on the street.

              1. 6

                Everyone is just dead set on solving the wrong problem while the people who care are literally on the street.

                “My brand-new multi-thousand-dollar computing device is not quite as morally pure as my personal ethos of software freedom demands it to be” is not, so far as I am aware, a common complaint registered by people “on the street”.

                1. 10

                  “My brand-new multi-thousand-dollar computing device is not quite as morally pure as my personal ethos of software freedom demands it to be” is not, so far as I am aware, a common complaint registered by people “on the street”.

                  Well I have news for you. The cheap garbage that us peasants can afford tends to be some of the most locked-down. Take, for example, my Motorola G6 smartphone. It came from Amazon, bought as a Prime “exclusive”, new for a really good price. There were strings attached, but I didn’t realize it until it was too late.

                  The bootloader is locked. It stopped receiving security updates in 2020, two years after I bought it. I cannot unlock the bootloader through Motorola’s unlocking program, because the device was bought through Amazon. I cannot install an up-to-date ROM on the thing. My hands are tied.

                  This is what happens when manufacturers have nearly complete control over the devices that you bought and paid for. You are at their mercy. They can force planned obsolescence on you, or prevent you from maintaining a perfectly good and usable piece of hardware.

                  1. 3

                    So because you have a device that doesn’t let you end-run around the bootloader, a device that does let you work with its TPM to replace it with a system of your choice is bad?

                    Because that appears to be what you’re arguing.

                    Or if you’re arguing that the ability to do a bad thing with them means we have to forbid entire classes of technologies from existing, I would refer you to the precedent established by the movie industry’s argument that the ability to commit piracy with it meant we must outlaw the VCR.

                    1. 5

                      So because you have a device that doesn’t let you end-run around the bootloader, a device that does let you work with its TPM to replace it with a system of your choice is bad?

                      No.

                      I seem to recall it was stated upthread that the ability to unlock the device can be disabled by the OEM. So yes, this is another Android. My Motorola device can be unlocked: other people who have that model have unlocked theirs through Motorola. Mine cannot be unlocked, because the OEM prevents it.

                      So I’m not arguing for or against Pluton. I am arguing against manufacturers being able to control the things I purchase. I know from the history of Android that there are going to be OEMs selling locked-down garbage PCs. More likely than not, people will pay a premium for an unlockable device.

                      Or if you’re arguing that the ability to do a bad thing with them means we have to forbid entire classes of technologies from existing, I would refer you to the precedent established by the movie industry’s argument that the ability to commit piracy with it meant we must outlaw the VCR.

                      The answer might be some kind of right to repair law that prevents manufacturers from locking customers out of the things they purchase. I’m not holding my breath, because the government here in the US is of, by, and for the corporations.

                      1. 3

                        So because you have a device that doesn’t let you end-run around the bootloader, a device that does let you work with its TPM to replace it with a system of your choice is bad?

                        The problem, as mentioned in the comments and the article, is that you cannot replace this with a system of your choice. I mean, you can, but only if Microsoft thinks that’s good for ya.

                        1. 2

                          So, consider Apple, because there’s a much longer and easier-to-refer-to relevant history.

                          Apple is infamous for selling locked-down phones and locked-down tablets, which you cannot replace the operating system on (via any official/supported means – people who’ve managed it have done so by exploiting bugs).

                          And they’re not particularly new. The first iPhone was announced in 2007, the iPad in 2010, so both of them have over a decade of presence on the market.

                          Yet today, I still cannot go to Apple and buy a laptop or desktop system that is locked down like the iPhone or the iPad. Obviously Apple has the technological capability to build a locked-down device, as they’ve been demonstrating for 15 years now with the iPhone.

                          But they haven’t. This hasn’t stopped countless threads predicting that any minute now they will, but every one of those predictions has turned out to be wrong. Everything that has made the jump to the laptop/desktop line – like signing of executables – has been merely a default posture of enhanced security, which sufficiently motivated and technical users can bypass via documented and supported methods.

                          They’ve had 15 years to boil that frog. The public is as primed for it as they’ll ever be. Arguably less so now than they would have been a couple years ago, even. So what is Apple waiting for, if their goal really is to get to a situation where the laptops and desktops are as locked down as the iDevices?

                          And the answer is: they’re not trying to boil the frog. They’re not trying to be evil. They’re not locking things down out of malice or because they hate and want to stop “general-purpose computing”, or any of the other things that get trotted out in threads like this.

                          They’re doing it because they see a market demand for security, and they’re trying to figure out the tradeoffs that will work to satisfy that market demand.

                          Which is why you see phones and tablets treated differently than laptops and desktops. It’s not that Apple or Microsoft or others somehow lack the technical capability to lock down the latter categories or haven’t got round to it yet or are waiting for the right moment to flip the switch and end “general-purpose computing”. It’s that they know the level of locking down that’s done on phones and tablets is not a market winner on laptops and desktops. They know that the market perceives certain devices as “general purpose” (desktops, most laptop-form-factor devices) and expects them to work one way, and perceives certain other devices (phones, tablets, Chromebook-type devices) as not “general purpose” and generally are happy to have them work a different way.

                          So every single time there’s been a statement that “they haven’t locked it down like that yet” with explicit or implicit prediction of “but they will soon!” attached to it, that prediction has turned out to be factually wrong. The frog isn’t boiling, because that’s not and never was what was going on, and the explanation for what actually is going on is incredibly banal.

                          Yet every time some company works on trying to improve security within the framework of tradeoffs the market will actually accept, we get one of these threads, and another round of confident “Well, they’re not doing it yet” posts about how “general-purpose computing” is under attack and will soon be taken away from us. Despite them never ever having been correct.

                    2. 2

                      But currently the only possible outcome, if you refuse to bend to coercive systems, is to end up on the street. I can confirm that, as well as the indifference w.r.t. potential avenues for improving the current state of things.

                      Unless rewarded by current systems of evaluation, people will generally not change their behaviour, even if their actions contradict reality. Trusting an external assessment over the situation in front of you is the clearest example: in the education system, students will prepare for an exam rather than study the subject, and then feel good or bad about their level of skill relative to the grade they get rather than the level of understanding they feel they’ve obtained.

                      Edit: sorry for editing the example. there are numerous examples ofc, I just get lawyered a lot by people who think they are playing an adversarial game, so it’s usually frustrating to leave a hole or two in a non-critical part of the argument.

                      1. 6

                        If I want a device that’s much more difficult for a malicious actor to compromise, I don’t see it as “coercive” when a manufacturer makes and sells one to me.

                        I mean, there is an extent to which it’s “coercive” of me not to allow you to show up and run whatever you want on my devices, but I think you’ll find that most people do not adopt that view. Which is why I tend to frame it satirically in terms of people wanting to grant “Freedom Negative One” – the freedom of everyone else to run any software, for any purpose, on your hardware.

                        Figuring out how to prevent accidental grants of that “freedom” is a big part of computing security, and ordinary real everyday average people who are at least as much The People™ as you very much really do really want to avoid accidentally granting it.

                        1. 2

                          Look, I don’t want to defend a position that is against security. I want to build actual security and that means somehow we need a way to assess whether to apply an automatic update without relying on a central root of trust. That is exactly what I have been trying to figure out (datalisp.is).

                          The coercive part is the foundation of legitimacy in our current world. I believe we can have a more persuasive foundation, indeed I’d argue that science used to be persuasive rather than coercive.. but with the education system being little more than a certification pipeline… we’ve reached the point where bureaucracy is the source of truth even in science! which is clearly absurd!!

                          Basically, I disagree with the premise of our society and I don’t see why we need to repeat the mistake in the digital world.

                          1. 1

                            assess whether to apply an automatic update without relying on a central root of trust

                            Much better to have 51% of a group of random strangers decide that my device ought to apply and run the software they want it to run, you mean? Definitely nothing “coercive” there.

                            Basically, I disagree with the premise of our society and I don’t see why we need to repeat the mistake in the digital world.

                            Well, people certainly are usually extremely successful with schemes which “only” rely on fundamentally changing the way all humans everywhere behave toward each other.

                            1. 5

                              Much better to have 51% of a group of random strangers decide that my device ought to apply and run the software they want it to run, you mean? Definitely nothing “coercive” there.

                              It’s odd that you keep making these appeals to security. Do you really believe for one moment that manufacturers locking down devices makes you more secure? If so, I’d love to sell you a locked-down Moto G6. I’ll take $100.

                              1. 2

                                I believe that I worry much less about random emails and text messages and web pages than I used to, and that this is directly attributable to the increasing “locking down” that you appear to dislike so much.

                                And while you may believe that you have a devastating counterexample, I do not agree that you do. Some amount of failures will occur, whether my preferred approach is adopted or yours is. But I am the one whose argument naturally and explicitly frames itself in terms of tradeoffs and of understanding that everything which has a benefit also has a cost, while those I find myself arguing with in this thread do not strike me as likely to admit that the benefits of their proposed approach would also come with significant costs.

                              2. 2

                                ?? Should I even respond to this?

                                1. 1

                                  Well, the alternative to “central root of trust” is distributed/trustless, which generally means some type of consensus algorithm. Maybe 51% isn’t the exact number, but the cryptocurrency world has demonstrated quite clearly that a “decentralized” and “trustless” system stays that way for all of about five seconds before a variety of forces, inherent to human interactions, force some type of recentralization and thus trust of one or at best a handful of powerful parties.
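
                                To make the disagreement concrete, here is a toy sketch (my own illustration, not something anyone in this thread proposed) of the kind of majority gate a “trustless” update scheme implies: an update is applied only when some threshold of polled peers endorse it.

                                ```javascript
                                // Hypothetical majority gate for auto-updates: the "51% of
                                // random strangers" decision rule under discussion. `votes` is
                                // an array of booleans, one per polled peer.
                                function quorumApproves(votes, threshold = 0.51) {
                                  if (votes.length === 0) return false; // no peers polled: refuse the update
                                  const approvals = votes.filter(Boolean).length;
                                  return approvals / votes.length >= threshold;
                                }
                                ```

                                Whether such a rule is more or less coercive than a vendor root of trust is exactly what is in dispute here; the sketch only pins down what “51% decide” means mechanically.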

                                  And the history of people putting forward grand theories that depend on “we just need to change everything about how humans behave toward each other” is not, as I sarcastically meant to note, full of grand successes.

                                  1. 3

                                    … cryptocurrency people …

                                    So you just lump me in with them? A fundamental right is the right to choose who and what to trust.. I won’t take that from myself or anyone else when I design a secure system.

                                    … “grand theory” people …

                                    There are many examples of what we call “progress” happening after someone made an astute observation. All I do is regurgitate stuff I’ve learned elsewhere. You may think things will always stay the same, but people are fond of saying that “the only constant is change”.

                                    In all seriousness though, why not just respond to me and my arguments? Why run away to face some easy-to-beat strawmen?

                                    1. 2

                                      It is a simple and easily-verifiable empirical fact that “decentralized” and “trustless” systems tend to fail directly, or to fail indirectly by re-centralizing. Pointing to the many recent high-profile examples in the world of cryptocurrency is a mere helpful aid to your verification of this fact.

                                      The same history of failure is true – and once again this is an empirically-verifiable fact which I invite you to verify for yourself – of theories and philosophies which require complete reorganization of society or reorientation or modification of behavior of all humans.

                                      1. 5

                                        I routinely squash arguments based on the “if everyone … then …” kind of argument using similar rhetoric so I get where you are coming from but that is not the kind of argument I am employing.

                                        My goal is to offer better terms than currently available with an economic system that pays people for improving UX (or contributing to the commons) with the end result of out-competing the incumbents.

                                        I am not proposing a “trustless” system (wtf is that?). I am proposing a system where you get to choose who and what to trust, but since that is very cognitively demanding I am also making it easy for you to cooperate with others in making assessments, precisely so that non-technical users can relax without having to place all their trust in an opaque centralized behemoth whose existence gets progressively harder to justify as it adds layers of bureaucracy around the root of trust it is supposed to defend.

                                        1. 2

                                          I am also making it easy for you to cooperate with others in making assessments, precisely so that non-technical users can relax without having to place all their trust in an opaque centralized behemoth whose existence gets progressively harder to justify as it adds layers of bureaucracy around the root of trust it is supposed to defend.

                                          And I assert that all such things last at best a very short time prior to re-centralizing and re-growing the very “layers of bureaucracy” that the system was meant to get rid of, just this time in a slightly different form.

                                          This is inherent to the way humans interact with each other, and no amount of technology ever has or ever will “fix” it.

                                          1. 1

                                            Sure, that is acceptable as an emergent phenomenon, not as a baked-in assumption.

                                    2. 2

                                      Well, the alternative to “central root of trust” is distributed/trustless, which generally means some type of consensus algorithm.

                                      I don’t think that’s what anyone else is arguing for. I think people just want to be able to install a Linux distribution on their laptop and have the distribution maintainers be the root of trust (or create their own), rather than the laptop vendor deciding that Microsoft is the only acceptable root of trust. That’s possible today on most laptops (excluding Chromebooks), but only sporadically on smart phones (not all bootloaders are unlockable), and people are afraid of the laptop/desktop situation becoming as bad as the smart phone situation.
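
                                    As a minimal sketch of what “choosing your own root of trust” means here (my own toy example, not from the thread, and much simpler than real Secure Boot signature chains): pin a digest published by the maintainer you chose to trust, and refuse anything that doesn’t match it.

                                    ```javascript
                                    // Toy trust check: accept an OS image only if its SHA-256 digest
                                    // matches one pinned from a maintainer you chose to trust. Real
                                    // systems verify signatures over the image rather than raw digests.
                                    const crypto = require("crypto");

                                    function trustedByMaintainer(imageBytes, pinnedSha256Hex) {
                                      const digest = crypto.createHash("sha256").update(imageBytes).digest("hex");
                                      return digest === pinnedSha256Hex;
                                    }
                                    ```

                                    The point of contention is who gets to set the pinned value: the device owner and their chosen distribution, or the vendor.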

                                      1. 2

                                        The laptop vendor is probably just relieved someone is providing a standardized solution for something the market clearly does want. We were always going to end up with probably just a couple implementations of this functionality, and it was probably always going to be some big company serving as the “root of trust”. It’s being done in a way that still lets people run the operating system of their choice, and I’ve argued at length in other comments for why all the explicit or implied “well they haven’t taken that away from us yet” statements I’ve seen are misguided, as well as how the history and state of the market for both locked-down devices like tablets and phones, and for non-locked-down devices like desktops and laptops, is strong evidence “becoming as bad as the smart phone situation” is highly unlikely. Feel free to read through those comments for the details.

                    3. 3

                      Or, depending on your perspective, every new warning sign causes a cry of IMPENDING DOOM like this, and we keep ignoring them.

                      It is an easily-verifiable fact that the Free Software community has a history of significantly over-dramatizing security mechanisms in computing systems. Take, for example, Richard Stallman’s note that was once included in the documentation for GNU su, and which may easily be found online with one’s preferred search engine. It read:

                      Why GNU su does not support the `wheel’ group

                      (This section is by Richard Stallman.)

                      Sometimes a few of the users try to hold total power over all the rest. For example, in 1984, a few users at the MIT AI lab decided to seize power by changing the operator password on the Twenex system and keeping it secret from everyone else. (I was able to thwart this coup and give power back to the users by patching the kernel, but I wouldn’t know how to do that in Unix.)

                      However, occasionally the rulers do tell someone. Under the usual su mechanism, once someone learns the root password who sympathizes with the ordinary users, he or she can tell the rest. The “wheel group” feature would make this impossible, and thus cement the power of the rulers.

                      I’m on the side of the masses, not that of the rulers. If you are used to supporting the bosses and sysadmins in whatever they do, you might find this idea strange at first.

                      And this style of reaction has continued into the present day. The prose, and the claim to be standing for the oppressed many against the tyrannical oppressive few, is not too far off from contemporary samples which can be found in this very thread.

                      But the simple fact is that people are tired of being afraid of every email and text message they receive and what it might do to their computer. Tired of being afraid of what any random web page they visit might do to their computer. Tired of being afraid of just having a computing device turned on and internet-connected, lest it be taken over remotely, which remains a routine occurrence.

                      The only way to make malicious takeover of the system more difficult is by putting barriers in the way of anyone who would seek to take over the system, even if their purposes are not malicious. This has been the trend in many manufacturers’ computing devices and many operating system vendors’ software for many years now. So we see systems which have a “sealed” and cryptographically verified system volume. Or systems which default to sandboxing applications or restricting their access to the filesystem and sensitive APIs. Or systems which default to requiring cryptographic signatures from identified developers as a precondition of running an executable. And on and on – all of these have contributed to significant improvement in the average security of such systems in the hands of ordinary non-technical users.

                      And every one of them has also offered some type of toggle or other mechanism to allow a motivated and sufficiently competent user to override the default behavior.

                      Yet every one of them has been denounced by those who are of Stallman’s way of thinking. Every one of them, we have been assured, is, this time, the final step before the frog is boiled and the manufacturers finally stop providing a mechanism to get around the default behavior.

                      No sufficient explanation for why manufacturers would want to do this is ever provided. We are simply told that they are fighting some sort of “war” against some thing called “general-purpose computing”, and that the “users” must fight back. I find it extremely difficult to take such arguments seriously; they tend to require almost cartoonish levels of overt villainy on the part of manufacturers and vendors, and disregard factors like the known necessity of enticing developers to a platform in order to make it attractive to end users.

                      Nor is any explanation ever accepted of the tradeoffs inherent in providing a true general-use system which can serve the security needs of the vast majority of non-technical users while also not being too offensive to the sensibilities of the minority of extremely-technical users. Asking us to take on the burden of looking up how to turn off a tamper-proofing mechanism is a small thing compared to the pain and suffering that would be imposed on everyone else if all such mechanisms were done away with.

                      For this reason I do not accept and never will accept either the “impending doom” claims, nor the related claims of a “war” on “general-purpose computing”, and therefore I rebut any and all who advance such claims.

                      1. 5

                        I have rarely seen anyone rely as much on strawmen in a discussion.

                        We already have our thread so I responded there.. but I hope you can agree that it is a systemic risk to have a central point of failure? Even if it is just in the abstract. Sure there is no cartoon villain at the moment but why make space for one to appear? It is not necessary.

                    4. 11

                      I don’t think this is an overreaction, or something to become complacent about. microsoft has a history of anticompetitive behavior, e.g. trying to “nuke” existing installations of linux by replacing bootloaders on update, and IIRC in the early days of secure boot they originally didn’t want to provide any way of booting alternative OSes, either by disabling secure boot or by using alternate keys (but, again IIRC, they were kinda forced to, at least on x86).

                      1. 3

                        Fair enough.

                        So what would you suggest that people do? The only thing that comes to mind for me is to vote with your wallet and ONLY buy machines that use open standards.

                        That’s easier than it used to be; System76 sells coreboot-based PCs.

                        1. 6

                          Yeah, pretty much that, since government intervention is completely out of the question (at least in the US).

                          System76, Framework, Purism, HP(!), and Dell are just some of the ones I can think of off the top of my head that are selling systems that run Linux, though System76 and Purism are the only ones I know of that use coreboot…

                          1. 3

                            Is running Linux enough?

                            Like, what if MSFT cozied up to one of the distro owners like Ubuntu and bundled all the right magic bits so it booted on Pluton chips?

                            My point here is not to challenge what you’re saying at all, just that in my opinion vendor lock-in and user freedom are a sliding scale where everyone gets to choose their own comfort level.

                            I like fully open systems which is why I support System76 and have a Thelio desktop from them sitting next to me here.

                            However I also have a Lenovo laptop which doesn’t currently run Linux, not due to any boot level shenanigans but because of a bug in the wifi driver.

                            My wife has a thoroughly locked down M1 Macbook Air, because the computer is an appliance and she would literally drop into a coma if forced to deal with the details of installing Linux on any machine :)

                            1. 3

                              Is running Linux enough?

                              No, it’s not, but it at least (currently) demonstrates that microsoft hasn’t wrapped their tentacles around the OEM (yet).

                              My point here is not to challenge what you’re saying at all, just that in my opinion vendor lock-in and user freedom are a sliding scale where everyone gets to choose their own comfort level.

                              That’s a good point. I just don’t like to see companies like microsoft impose hard limits on how far you can slide on the scale. If folks want a microsoft appliance, then microsoft is already an OEM (the surface stuff); there’s no reason to impose restrictions on all OEMs that ship windows.

                      2. 2

                        Anyone remember Palladium?

                      3. 6

                        I was discussing with a fellow developer recently how much of a threat Microsoft will be in the future, and the discussion turned to their current push to acquire a lot of game development companies, and also some AR and VR tech. This article rings a lot of alarm bells for me as a gamer and as a game developer.

                        Here is how I see it going down:

                        Microsoft makes anti-cheat systems easier to build. Most AC companies adopt these standards, and possibly Microsoft releases an in-house AC system as well. Competitive e-sports organisations start requiring these systems as a condition of participation. In order to play competitive games you need a compliant PC running Windows (this is already the case; none of the AC systems I know of work on Linux). Because gamers have to have a compliant PC running Windows for many of the most important games they play, they tend to exclusively use Windows to play games. The commercial games market migrates back to Windows, halting the current trend of Linux becoming a viable gaming platform.

                        Obviously Cataclysm:DDA is not going anywhere and I am eternally grateful for the amazing devs that are making that, but the really profitable games will get sucked back into MS orbit, and gamers’ freedom to take control of their own machines and the software on them will be impacted.

                        1. 3

                          https://areweanticheatyet.com/

                          Several anti-cheat systems have added Linux support as Valve has pushed developers not to lock their players into a single OS.

                          This has very much seemed to be Valve’s long-term strategy: as Microsoft marches towards lock-in, with its store positioned as the only ‘safe’ place to get applications and Steam itself at risk of being banned, Valve has put effort into getting gaming on Linux into a good state. This started with the Steam Machines and surfacing Linux compatibility in the Steam store, and has finally arrived at dedicated hardware with the Steam Deck. It looks like a hedge against a Microsoft Store monopoly of the kind we see in mobile OSs, and since gamers already treat consoles as dedicated gaming boxes, they’d likely be okay with SteamOS.

                          Is Valve doing this out of the kindness of their heart? Of course not: they take a huge cut to distribute games on their Steam platform. But since they’re not publicly traded, they can do a bit of column A and column B, because profit doesn’t have to be the first and only goal. That 30% cut has hired many Linux engineers, developed Proton and its ease of use alongside the WINE community, and funded the R&D and scale to release a very cheap, portable PC that makes no attempt to hide that you can use the whole OS behind the Steam façade (compared to Microsoft, Nintendo, and Sony, who are very much anti-“ownership” and against doing what you want with a device).

                        2. 6

                          See also OpenTitan which I think has similar security goals but without Pluton’s Windows lock-in. I’m not familiar enough with either to know how much they’re similar/different.

                          1. 15

                            [ Disclaimer: I work for Microsoft and have collaborated with the Pluton team on some things, but am not involved in the push to put Pluton everywhere and only talk to the Windows team occasionally. ]

                            I am familiar with Pluton and have skimmed some of the OpenTitan docs and the docs of some hardware roots of trust from other vendors. They are all very similar: they provide some key management and hardware crypto functionality as fixed-function units and a general-purpose core that’s tightly coupled and provides some more user-friendly (or, in the case of the TPM, standard but pretty awful) APIs for the rest of the system. Pluton has a fairly rich policy language for keys (for example, allowing a key to be used only as input to a KDF and then imposing policies on what the derived key can be used for) and some neat mitigations against physical attackers that, apparently, I’m not allowed to talk about (any time you talk about a particular security defence publicly, it motivates a load of people like @saaramar and his friends to go and try to break it), but it’s not massively different from any of the alternative implementations.

                            A hardware RoT is basically table stakes for a modern computer now. Apple has one, Android phones have them. TPMs have been around for a while, but they generally fall into two categories of suckiness:

                            • Firmware TPMs are implemented as firmware on the main core. They run in a higher privilege level, but they share all of the execution resources with the main system. TPM operations trigger SMC calls or similar. These are often vulnerable to side channels that allow the keys to be exfiltrated. If the goal is to keep the user’s keys (for WebAuthn and so on) safe from a compromised OS, this is a big problem.
                            • Discrete TPMs are separate chips that are connected to the motherboard. A lot of these just plain suck from a reliability perspective. If a user encrypts their disk with BitLocker and the TPM dies, then they’re stuck if they haven’t properly backed up the recovery keys. Users complain about this a lot. The other problem with discrete TPMs is that they’re connected by wires on the motherboard and so they’re very easy to lie to. An attacker who steals a laptop can easily boot an OS that is allowed to access the disk encryption keys, record everything that the CPU says to the TPM, then boot another OS with the TPM disconnected, replay the messages from the CPU, and unlock the keys.
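                            Underlying both kinds of TPM is the same measured-boot primitive: a PCR can only ever be extended, never set directly, so the final value commits to every component measured during boot. A minimal sketch (SHA-256, component names illustrative) that also shows why the bus-replay attack above works:

```python
import hashlib

def extend(pcr: bytes, measurement: bytes) -> bytes:
    # TPM-style extend: new = SHA256(old_pcr || SHA256(component))
    return hashlib.sha256(pcr + hashlib.sha256(measurement).digest()).digest()

pcr = bytes(32)  # PCRs reset to all zeroes at power-on
for component in (b"firmware", b"bootloader", b"kernel"):
    pcr = extend(pcr, component)

# Replaying identical measurements reproduces the identical PCR value,
# which is why an attacker who can lie to a discrete TPM over the bus
# can recreate the "known good" state without running the good software.
replayed = bytes(32)
for component in (b"firmware", b"bootloader", b"kernel"):
    replayed = extend(replayed, component)
assert replayed == pcr

# A single changed component changes the final value irreversibly.
tampered = extend(extend(extend(bytes(32), b"firmware"),
                         b"evil-bootloader"), b"kernel")
assert tampered != pcr
```

                            An on-package root of trust doesn’t change this math; it just removes the exposed bus that makes the replay trivial.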

                            This means that, for security, you really want a separate core (so isolated from side channels) that’s on package (so hard to physically attack without destroying it). Apple and Google both know that, which is why they put such a thing on their devices. Both Google and Apple have a lot more control over their respective hardware ecosystems than Microsoft, so can do this much more easily.

                            I strongly suspect that if Intel and AMD had built decent (secure, reliable) on-package TPM implementations then there wouldn’t have been so much of a push for Pluton.

                            1. 14

                              Both Google and Apple have a lot more control over their respective hardware ecosystems than Microsoft, so can do this much more easily.

                              How about considering that it’s bad for any one company to have complete control over an ecosystem? It’s good that microsoft feels left out for not controlling the PC ecosystem. It’s bad that google and apple dictate what users can and cannot do with their devices.

                              1. 3

                                One of the things I enjoy about your posts is that you’re an ardent advocate for freedom and open-ness in computing but you seem to be reasonable about it, so here’s a question I hope you’ll read in the spirit it was meant rather than an attack:

                                What would your ideal solution look like in this space? Do you think it would be possible to implement solutions LIKE this in broad concept (a verifiable chain of trust from boot) but that were vendor independent?

                                1. 3

                                  heh, thanks :)

                                  This is a really good, and fair, question! I’ve thought about this a fair amount, but I’m definitely not an expert and am easily confused by the many acronyms e.g. from the article. Anyways, from what I can tell, I think having an extra chip, etc. is fine. An ideal solution in this space might be something similar to what they are pushing, but one that treats the 4 user freedoms[1] as first-class citizens. Like, I understand that you don’t want bad actors to be able to replace keys or whatever, but it shouldn’t be impossible to do that, and microsoft shouldn’t be the gatekeeper. I understand that you don’t want ‘tampered’ devices to join your network or play your game (because, omg, cheaters!!…), but the mechanism used to verify that should allow for exceptions where employees / students / users can use other sysadmin/IT/department-“approved” operating systems, not just whatever microsoft says is “trusted”.

                                  This pluton thing seems to run non-free firmware, with 0 chance of me or anyone else being able to build fw for it and use it. The drivers and whatever userspace components required for this thing also seem to be non-free, and windows-only. And if microsoft kills the 3rd party CA for secure boot, then it’s suddenly impossible (I think?) to boot anything else but windows. Pluton is 100% microsoft / windows centric, so if it works with anything outside of their products then it’s a bug / coincidence, basically.

                                  Maybe I’m being overly cynical, but this seems like start of the “Extinguish” phase of EEE… Microsoft: “you don’t need to install Linux, *BSD, whatever anymore, you can run the same userspace under Windows with WSL now! So no one should have a problem with these changes!” OEMs: “Yeah!”

                                  Anyways, I can’t really go into any specifics about how I’d come up with something better, since a lot of the technicalities are waaaay over my head. My main beef is that this is microsoft doing what microsoft has done for the last 30+ years, non-microsoft customers be damned. Thanks for the message though; I want to keep thinking about your question, because it’s spot on. This pluton stuff does attempt to address some real problems (though I’d argue that combating game cheaters by throwing away user freedoms is not a real problem/solution), and folks are not going to easily dismiss pluton if the alternative is to “do nothing” about those problems.

                                  1. https://www.gnu.org/philosophy/free-sw.en.html#four-freedoms
                                  1. 1

                                    Honest question: Other than the Pinebooks and the System76 machines, how many computers buyable by consumers on the market today actually meet these criteria?

                                    The Lenovo laptops many Linux fans prize have proprietary binary blobs all through them as far as I understand.

                                    I love the principles you’re citing here, I’m just curious how pragmatic it is for many people to actually live by them.

                                    1. 3

                                      I’m replying to you now on a Librem 14 laptop, which runs coreboot and has had the ME “neutered”. The CPU is an Intel Skylake variant (Coffee Lake, I think?), because I believe later CPU generations require even more non-free firmware and I don’t think Purism has figured out how to proceed there. There’s also the Framework laptop (and a recent announcement from HP), but those run more non-free blobs. And I think Dell is still selling their XPS 13 with Linux pre-installed. But as I mentioned in the Pluton article comments, being able to install Linux isn’t really helpful for promoting/realizing the 4 freedoms and such. On the bright side, more laptops are shipping with Linux today than I can ever remember in the past. On the other hand, this may be the peak of the “golden age” of having multiple choices for an out-of-box Linux system :(

                                      Ya, the situation now is becoming less and less ideal. And the “free software or bust” community isn’t big or strong enough to counter this movement. We need legislative action to help guide chip factories and OEMs, which (IIRC you’re in the US) isn’t going to happen here :P

                                      1. 2

                                        Exactly.

                                        I feel like the only REAL thing we can do other than shaking our fists and venting on the various forums is vote with our wallets and try to convince others to do the same.

                                        1. 2

                                          groans

                                          We can build software and compete with these clowns. The chips are coming (it takes a stupidly long time to go from idea to product in the chip world) and we’ve got to work on software distribution models that are democratic and can be trusted. I feel very ignored.

                                          1. 1

                                            Great! Sincerely, I would love love LOVE to see this happen!

                                            The problem I see is that the way we currently allocate resources in a capitalist society is to put dollars towards engineering hours.

                                            Volunteers can move mountains, but at the end of the day even the most virtuous free software advocate has to keep a roof over their heads and feed themselves.

                                            It’s a hard problem.

                                            1. 2

                                              Yeah, exactly. We need to pay people, and we can only do that by making up our own money… the only way people will accept this money is if it’s perceived as legit (i.e. it has to be persuasive, and that can only happen if enough people are defending the definition).

                                  2. 2

                                    HEADS already does that to some extent; you can buy a NitroPad today. (Yes, this goes further by putting the chip in the processor and hiding the keys better, but in principle it’s the same. Someone already mentioned [in this thread] how you could maintain a HEADS-y model with this kind of tech… personally I think there are other attack surfaces to think about before over-committing to this aspect.)

                                2. 13

                                  A hardware RoT is basically table stakes for a modern computer now.

                                  This is not a universally-held opinion, especially given the inability to independently verify the correctness of such hardware. TPM manufacturers have not been forthcoming with the community.

                                  1. 7

                                    Just because Apple and Google have monopolistic control over their devices, does that mean Microsoft does too? I agree with the OP that contributing to an open, libre platform would garner more trust and transparency and not leave Windows Update as the arbiter of changes to the unit.

                                    While the article reads a bit like a slippery-slope argument, the thought exercise is valuable for considering what could happen, and I don’t see a good reason why we should trust what the vendors are doing. I recently purchased a laptop, and while a coworker in the EU could buy his device without Windows, my region had no such option. If these features are in the future a requirement to ‘use’ the device, I sure as heck had better be able to opt out of Windows, and not just at checkout but after buying a used device as well. Just as I wish it were less of a hassle to set up a de-Googled custom ROM on Android, I want the laptop/desktop space to remain ‘hackable’ for the consumer.

                                    1. 4

                                      Just because Apple and Google have monopolistic control on their devices, does that mean Microsoft does too?

                                      No; rather, because a hardware root of trust is the absolute minimum security posture for all competing devices, Windows devices should provide one too.

                                      If these features are in the future a requirement to ‘use’ the device, I sure as heck better be able to opt out of Windows—and not just at a checkout, but after buying a used device as well. Just as I wish it were less of a hassle to set up a de-Googled custom ROM of Android, I want the laptop/desktop space to remain ‘hackable’ for the consumer.

                                      Pluton-enabled devices are far more friendly than Android devices in this regard. You can toggle a configuration switch to use the other root cert and then there’s a process that’s used by a load of Linux vendors to get their copies of GRUB signed and to boot Linux with a full secure boot chain. If you boot from a dm-verity root fs, then everything in the root filesystem is similarly protected. Pluton then exposes TPM 2.0 APIs to Linux, which can then use the secure boot attestation to authorise disclosure of keys for LUKS and mount a dm-crypt (+dm-integrity)-protected mutable filesystem.

                                      1. 4

                                        Secure according to which measure, though? I should be able to detach my storage and mount it on another machine to read and repair it if I know my keys. How do I get these TPM keys if they’re in a black box on the device I own (besides side-channel attacks)? Even if I could do this through LUKS or whatever, do I want to? LUKS or a filesystem’s encryption already provides me pretty good encryption, and I know who and what generated the keys and where they live, because I did it when I formatted my drive. Pluton’s “chip-to-cloud security vision” sounds like complexity in that pipeline that opens me up to a different vector of issues.

                                        When you couple Pluton with Smart App Manager (forgot the name) doesn’t this allow Microsoft to be the arbiter of what apps are good/bad and what it considers safe/compromised (like the issues Android users can have with SafetyNet if they want a custom ROM or root access to their purchased device)… and its store to be the eventual final ‘trusted’ space to get apps just like the Apple and Play Stores?

                                        I know this is just a flurry of questions and I don’t think it’s fair that you need to play spokesperson, but TPM was very unpopular and now it’s a requirement to upgrade. And Pluton is disabled by default by Lenovo and Dell; why, if it’s so safe? Who’s to say users want this? I can disagree but understand why businesses would want it; I don’t understand why personal and private users should just accept not getting the keys to their own device as a good thing. I can keep a paper backup in a fire safe for most other forms of encryption, but I can’t for TPM?

                                        1. 8

                                          I should be able to detach my storage and mount it on another machine to read and repair it if I know my keys. How do I get these TPM keys if it’s in the black box on the device I own (besides side channel attacks)?

                                          I’m not a LUKS expert, but I believe that it stores a block that contains the disk keys, encrypted with a key on the TPM. The TPM will decrypt this block and then the kernel does all of the crypto with keys that it knows for normal operations. It will also spit out a recovery key, which is the decrypted on-disk key and lets you mount the disk on another system.

                                          On Windows, for domain-connected machines, the BitLocker keys can be automatically stored in active directory, so that your IT folks can unlock the disk but thieves can’t (assuming no BitLocker vulnerabilities). I don’t know if Red Hat or Canonical provide something like this for LUKS, but it wouldn’t be too hard to build on an LDAP server.
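                                          That layout can be sketched roughly as below; the XOR keystream stands in for the real key-wrap cipher, and every name is illustrative rather than what LUKS actually calls things. (On Linux, `systemd-cryptenroll --tpm2-device=auto` is one real-world way to bind a LUKS slot to a TPM along these lines.)

```python
import hashlib
import secrets

def keystream(key: bytes, n: int) -> bytes:
    return hashlib.shake_256(key).digest(n)

def wrap(kek: bytes, data: bytes) -> bytes:
    # XOR with a key-derived stream; stand-in for a real AES key-wrap
    return bytes(a ^ b for a, b in zip(data, keystream(kek, len(data))))

unwrap = wrap  # XOR wrapping is its own inverse

disk_key = secrets.token_bytes(32)   # the key that actually encrypts your data
tpm_kek = secrets.token_bytes(32)    # in reality, never leaves the TPM

keyslot = wrap(tpm_kek, disk_key)    # what sits in the LUKS header on disk
recovery_key = disk_key              # shown once so the user can back it up

# Normal boot: ask the TPM to unwrap the slot.
assert unwrap(tpm_kek, keyslot) == disk_key
# Disaster recovery: the raw key mounts the disk on any other machine.
assert recovery_key == disk_key
```

                                          This is why a dead TPM only locks you out if the recovery key wasn’t backed up: the data key itself never depended on the TPM, only the wrapped copy in the header did.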

                                          LUKS or a filesystem’s entryprion already provides me pretty good encryption and I know who and what generated the keys and where they live because I did it when I formatted my drive

                                          And how do you enter them on boot? You need either an external key (stealable along with your laptop) or you remember a long pass phrase.

                                          How do you know that the kernel that you’re entering the passphrase into is really the Linux kernel that you trust? Without a secure boot chain, someone who briefly has physical access would be able to replace your kernel or GRUB (whatever is not on the encrypted disk, which you use to mount the encrypted FS) with one that will store the key somewhere insecure for when they steal the machine.
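                                          The mechanism that answers that question is sealing: the TPM releases a secret only when the PCRs match the values recorded at seal time, so a swapped GRUB or kernel simply never gets the disk key. A toy sketch (a real TPM performs this policy check internally and never exposes the sealed blob in the clear; all names illustrative):

```python
import hashlib
import secrets

def seal(secret: bytes, pcr_values: list[bytes]) -> dict:
    # Record the policy digest the PCRs must match at unseal time.
    policy = hashlib.sha256(b"".join(pcr_values)).digest()
    return {"secret": secret, "policy": policy}

def unseal(blob: dict, pcr_values: list[bytes]) -> bytes:
    policy = hashlib.sha256(b"".join(pcr_values)).digest()
    if policy != blob["policy"]:
        raise PermissionError("PCR mismatch: boot chain was modified")
    return blob["secret"]

good_pcrs = [hashlib.sha256(c).digest() for c in (b"grub", b"kernel")]
evil_pcrs = [hashlib.sha256(c).digest() for c in (b"grub", b"evil-kernel")]

disk_key = secrets.token_bytes(32)
blob = seal(disk_key, good_pcrs)

assert unseal(blob, good_pcrs) == disk_key  # unmodified boot: key released
try:
    unseal(blob, evil_pcrs)                 # tampered kernel: key stays locked
except PermissionError:
    pass
```

                                          The replaced kernel can still show you a convincing passphrase prompt, but it can never convince the TPM to hand over the sealed key.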

                                          When you couple Pluton with Smart App Manager (forgot the name) doesn’t this allow Microsoft to be the arbiter of what apps are good/bad and what it considers safe/compromised (like the issues Android users can have with SafetyNet if they want a custom ROM or root access to their purchased device)… and its store to be the eventual final ‘trusted’ space to get apps just like the Apple and Play Stores?

                                          You don’t need a TPM for Windows (or any other OS) to decide whether to run a signed binary or not. Pluton changes nothing here.

                                          Who’s to say users want this?

                                          At least for corporate customers, our market research data does. Home users also like things like Windows Hello, which requires a secure credential store to support WebAuthn and allow you to sign into web sites without needing a password. I, personally, like knowing that my GitHub login credentials (for example) can’t be exfiltrated by a kernel-level compromise on my machine. I like knowing that if someone steals my computer, they don’t get access to my files. And I really like that this is now becoming a baseline security standard and so I get the same guarantees from my Mac, my Windows machine, my Android phone and my iPad.

                                          1. 2

                                            IT folks can unlock the disk but thieves can’t

                                            I can’t say I trust IT folks or keys on any server that isn’t mine. At this point I don’t know that I could work for an employer where it’s not BYOD, so I’m unsure whether this applies to me. I could maybe understand no one having access to the private keys, but it sounds like someone does, and that someone isn’t me.

                                            And how do you enter them on boot?

                                            I have a long, arduous password, and I’m pretty fine with this. It’s written down in a safe place too. I’m not okay with this key being in a black box that connects to a server.

                                            With NixOS, though, encrypting much of the device is irrelevant and arguably even harmful, since the machine becomes so stateless that an attacker could work backwards to figure out the private keys, given that so many things are reproducible to an exact state (so things not in /home, /var, and similar aren’t encrypted). I’d be curious how well a generic attack would handle the Nix store needing to be in a certain state for the system to boot; I’m nowhere near an expert at this level of the machine.

                                            You don’t need a TPM [..] run a signed binary

                                            But Pluton can help act like SafetyNet, no? That’s how I ended up switching banks: they would no longer let me use MagiskHide, and my device should be mine; if I want root to install some privacy apps and kick out parts of Google, that’s not my bank’s business.

                                            Windows Hello … guarantees [..] from my Android phone

                                            These aren’t things I generally want or care about, nor do I want to trust some AI’s facial-recognition algorithm, or the internet connection and Microsoft account required for setup. Some passwords are in my head, but most things are behind FIDO2 or TOTP 2FA, both of which do a decent job with the password situation without involving that black box or creating a single point of failure. Even de-Googled, my phone often feels more like a kiosk than any other device I’ve had. If Linux support were just a little better, I’d drive that instead too.

                                            At least for corporate customers, our market research data does

                                            Meanwhile at The Register: https://www.theregister.com/2022/03/09/dell_pluton_microsoft/

                                            Dell won’t include Microsoft’s Pluton technology in most of its commercial PCs, telling The Register: “Pluton does not align with Dell’s approach to hardware security and our most secure commercial PC requirements.”

                                            That says Pluton seems off not just to the Linux crowd, but to OEMs too. Microsoft showing no concern about Dell and Lenovo opting out seems a bit odd.

                                            1. 3

                                              I think it’s odd that you speak in such absolutist terms about your “ownership” of your devices, and your refusal to let anyone else ever compromise that “ownership” by setting terms on what you can or can’t do, when every one of your examples actually consists of you demanding access to other people’s devices (well, their services, which is the same thing, because those services run on their devices) and demanding the right to dictate the terms on which you will receive that access. Do they not have the same rights of “ownership” over their things as you? Do they not have the same right to set terms of their choosing and tell you that you don’t “own” their devices?

                                              1. 5

                                                When the service they’re offering is access to something I own, I at least would agree that they have an obligation to let me shoot myself in the foot if that’s what I really want. Show a warning about installing on a rooted device, sure - but don’t go on to block access. For most of us, a bank isn’t really exposed to any meaningful risk if I install their app on a rooted device - only I am, because it’s my banking info that’s being exposed to other malicious apps on the device. I didn’t see any other examples of access to things that companies own in that post, unless you’re arguing that it is Google’s phone (which may be the case in practice but certainly shouldn’t be).

                                                EDIT: If Pluton had some support that allowed the user to control its decisions I think it would be a lot more comfortable. It wouldn’t have to go through the OS, since obviously that’d just regress to square one. It wouldn’t have to be convenient, either, since it should be a pretty rare case that you need to do it. It probably should be pretty cheap compared to processors themselves.

                                                I’d be happy to have some peripheral you plug the chip into during build, and need to enter a code that came with the chip’s manual from the manufacturer, at which point you have to change the code. Chip won’t boot unless the code’s changed. If you don’t care, you just do that and then plug it into the socket as normal; otherwise you can edit the roots of trust freely and be on your way until the next time you want a change, in which case you have to go through the ordeal of unseating the processor from your motherboard and doing it again.

                                                1. 1

                                                  I at least would agree that they have an obligation to let me shoot myself in the foot if that’s what I really want. Show a warning about installing on a rooted device, sure - but don’t go on to block access.

                                                  Again: they’re not obligated to give you access to their systems. Remembering that the example cited was a bank, the user has plenty of other options besides using their rooted mobile device where the bank can no longer trust that, for example, its own app is being run unmodified. They can almost certainly still access via a web browser (which is inherently a less trustworthy environment and thus one the bank is less likely to restrict as much as app access), or call, or go in person to a branch.

                                                  And by saying that they should still provide access to their systems you are still effectively claiming the right to dictate to them how they will use what they own. Which is what the poster I replied to was saying they would not allow anyone else to do to them. The position thus remains inconsistent.

                                        2. 3

                                          Pluton-enabled devices are far more friendly than Android devices in this regard. You can toggle a configuration switch to use the other root cert […]

                                          Currently, yeah. At least the Lenovo devices people were complaining about did. But I’m pretty sure this is up to the vendor, right? Just like whether the bootloader on an Android device is unlockable or not.

                                          1. 6

                                            No, supporting the alternative root is a requirement for certification from Microsoft.

                                            1. 8

                                              I am sure that that requirement is antitrust CYA, but I’m still happy y’all have it. :)

                                      2. 2

                                        Thank you very much for this response. It’s super refreshing to see someone address the actual technology aspect.

                                        Is there any reason Linux distros couldn’t run on Pluton machines if, for example, distros worked with MSFT in a similar way to what they already do to get their keys signed for SecureBoot?

                                        Pardon if it’s a silly question, my understanding of the technical details in this space is tenuous at best.

                                        1. 3

                                          Is there any reason Linux distros couldn’t run on Pluton machines if, for example, distros worked with MSFT in a similar way to what they already do to get their keys signed for SecureBoot?

                                          Absolutely none. By default, Pluton contains two public keys for SecureBoot: one used by Microsoft to sign Windows and one used to sign other bootloaders. Linux distros can use this GitHub repo to provide a GRUB shim to sign. That shim then includes the public key that they use for signing their kernel and so on, letting you establish a complete secure boot chain.

                                          The difficult thing at the moment is to provision a personal or per-org key. The thing I’d like is to be able to install my own public key so that the device will boot only a kernel that I, personally, sign and so I can compile my own custom kernel with my favourite configuration options, sign it, and boot it. A process for doing that securely is fairly hard.
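                                          The shim chain described above can be sketched as a list of verifiers, with HMAC standing in for the real Authenticode signatures (all key and binary names here are illustrative):

```python
import hashlib
import hmac

def sign(key: bytes, blob: bytes) -> bytes:
    return hmac.new(key, blob, hashlib.sha256).digest()

def verify(key: bytes, blob: bytes, sig: bytes) -> bool:
    return hmac.compare_digest(sign(key, blob), sig)

ms_third_party_ca = b"Microsoft 3rd-party UEFI CA"  # trusted by the firmware
distro_key = b"distro signing key"                  # embedded inside the shim

shim = b"shim binary carrying distro_key"
grub = b"grub binary"
kernel = b"distro kernel"

# Firmware verifies the shim against the Microsoft third-party CA...
assert verify(ms_third_party_ca, shim, sign(ms_third_party_ca, shim))
# ...then the shim verifies GRUB and the kernel against the distro key.
assert verify(distro_key, grub, sign(distro_key, grub))
assert verify(distro_key, kernel, sign(distro_key, kernel))

# A self-built kernel signed with a personal key fails at the shim stage,
# which is exactly the per-user enrollment gap described above.
my_kernel = b"custom kernel"
assert not verify(distro_key, my_kernel, sign(b"my personal key", my_kernel))
```

                                          The missing piece for personal keys is a trustworthy way to add your own entry to that verifier list without also letting malware do the same.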

                                    2. 5

                                      Can someone ELI5 me what these security changes are designed to protect against? What are the kinds of attacks and exploits that we need things like Pluton to stop?

                                      1. 9

                                        Here goes:

                                        “Bad people sometimes put bad things into other people’s computers that they don’t want. They can do that over the Internet, like when they trick you into downloading something bad, or by actually putting it into your computer while they’re using it. Right now, it’s hard to know if any bad things are inside your computer, because the bad people make the bad things smart and sneaky, so they can hide.

                                        With these new computers that have a special piece inside called Pluton, the bad things can’t hide as easily. It’s like the computer is a castle, and instead of keeping the drawbridge down so anyone can come in and out, now the bridge is up and the castle is closed. If you want to come inside, or bring things inside the castle out, you have to talk to a guard. That guard has a list of names and secret codes for people who are allowed into and out of the castle. If you aren’t on that list, they won’t lower the bridge for you!

                                        This is good sometimes, because there are sneaky people that want to get inside and steal things. It is also bad sometimes, because the guards can be confused, or the people who give them the list might not like some other people or want them inside the castle, even if what they want to do isn’t to steal things or harm anyone.”

                                        1. 6

                                          It’s also like buying a castle but not getting any say in whether or not the drawbridge is down, because it’s controlled by a for-profit gatekeeper who thinks they know better than you about what you should allow in/out of the castle.

                                          1. 4

                                            sighs okay I brought that on myself. Can someone ELI”mostly a generalist software developer with professional experience in webdev and modeling distributed systems, but doesn’t know anything about security” what these security changes are designed to protect against?

                                            1. 3

                                              These changes let you reliably detect whether anything in the boot chain was modified and, if it was, keep secrets away from the modified software. Putting this inside the CPU, with mitigations against hardware side channels and separation from the main CPU cores, makes bypassing it extremely hard even with prolonged physical access. What it doesn’t help against is kernel or BIOS bugs, but at least it can give you reliable information about which versions are running.
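                                              At its core this is a hash chain, the same idea behind TPM PCR registers: each boot stage “extends” a running digest before handing off control. Here’s a rough Python sketch of just the measurement idea (heavily simplified; real PCR banks, event logs, and sealing policies are more involved, and the stage names are made up):

```python
import hashlib

def extend(pcr: bytes, measurement: bytes) -> bytes:
    # TPM-style extend: new PCR = SHA-256(old PCR || SHA-256(measurement))
    return hashlib.sha256(pcr + hashlib.sha256(measurement).digest()).digest()

# PCRs start at a well-known all-zero value on reset.
good = bytes(32)
for stage in (b"firmware-v1", b"bootloader-v1", b"kernel-v1"):
    good = extend(good, stage)

# Re-measuring with a tampered bootloader yields a different final value,
# so secrets sealed to the known-good value would never be released.
tampered = bytes(32)
for stage in (b"firmware-v1", b"evil-bootloader", b"kernel-v1"):
    tampered = extend(tampered, stage)

print(good.hex() != tampered.hex())  # True
```

                                              The point of the construction is that both order and content matter, and you can’t “un-extend” a bad measurement, which is what makes the final value trustworthy evidence of what actually ran.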

                                          2. 2

                                            The biggest thing is attackers with physical access (stealing your laptop for example).

                                            1. 1

                                              @rcoder took the “LI5” bit REALLY seriously, for which I give him credit :)

                                              However here’s a, shall we say “Explain Like I’m 10” response.

                                              Bootblock and firmware malware.

                                            2. 7

                                              I could not follow why all this is bad. Does it make it impossible to install Linux on any computer with this chip?

                                              There were statements in the article that made it sound to me like the author considered it a problem that the chip makes it harder to pirate copyrighted works, i.e. to steal content from creators, which didn’t sound right to me.

                                              1. 6

                                                According to an MS employee commenting on this (@david_chisnall), it’s a certification requirement from MS to support alternative root certs, which allows Linux distros to be installed and booted with a full secure boot chain.

                                                I found the article to be fairly unclear on why Pluton is bad, as well; the only “bad” thing is that it theoretically will make it easier to prevent software piracy and cheating in online games. Which doesn’t seem… bad, to me?

                                                1. 4

                                                  Yup there were a number of red flags for me as well.

                                                  I’ll believe that Linux really is impossible on these chips when we see systems in the wild in the hands of skilled hackers :)

                                                  1. 4

                                                    As I understand it, Linux on Pluton PCs will be about as available as, say, LineageOS on Android devices. If the vendor doesn’t allow unlocking the bootloader, you’re probably out of luck.

                                                  2. 7

                                                    “Pirating copyrighted works” is an unavoidable side effect of general-purpose computing. I’d rather not throw out general-purpose computing in order to appease Disney and Time-Warner.

                                                    1. -2

                                                      While it may sound easier on the conscience to think of this as a Robin Hood situation (the English legend, not the trading platform), at its foundation it isn’t. It’s indie authors getting their books stolen, it’s actors getting lower revenue because of lost viewership, it’s singers not getting royalties.

                                                      If you are against big publishing/producing houses making money, support Indie artists. Making it easier to steal their works is not a solution.

                                                      1. 12

                                                        It’s singers not getting royalties.

                                                        The biggest thieves of royalties are arguably the music industry.

                                                        Making it easier to steal their works is not a solution.

                                                        You can’t “steal” a work. You can create and distribute reproductions without permission or attribution and fail to pay royalties, but my possession of a song doesn’t exclude your access to it. The “theft” framework is a meme perpetuated almost entirely to the benefit of publishers and rights-holders (not artists!) in order to legitimize incredibly shitty and abusive behavior.

                                                        support Indie artists

                                                        I do, and I even pay for my tracks! But a lot of those artists I’m only aware of due to running across their music under circumstances where perhaps the licensing wasn’t as well audited as it could be.

                                                        1. 2

                                                          I have bought more indie music on Bandcamp in the last two years than I bought any music in the previous twenty. But a lot of that is because they offer DRM-free tracks in your choice of formats, generally at a very reasonable price. That’s not something you can say of, for example, the movie industry.

                                                          1. 2

                                                            I’ve known quite a few musicians and they had huge pirate music collections.

                                                          2. 6

                                                            I don’t see how locking down computers to such a level helps artists, especially independent ones, and as a consumer I hate buying DRMed content and avoid it where I can.

                                                      2. 3

                                                        Gotta confess that a lot of this flew over my head a bit, but:

                                                        The system is tamper-resistant and constantly updated, meaning that should a strict MDM policy be in place, extracting documents from a system without authorization could be potentially extraordinarily difficult to impossible.

                                                        Am I missing something or is this kinda overlooking my ability to take pictures of the screen with a phone?

                                                        1. 2

                                                          How long would it take you to exfiltrate a 2 MB text file by taking pictures of the screen?

                                                          1. 4

                                                            So document exfiltration via pictures is proooobably not a threat model that Pluton is seriously meant to address but this is a question that’s actually way cooler than expected.

                                                            Way back when Android was new and cool, an ex-colleague from uni and I wrote a program that did more or less that, mostly as an exercise in how pointless DRM is and to learn the camera API. 2 MB of ASCII text is about 1048 xterms’ worth at 80x25. Factoring in redraw slowness, plus the low frame rate of early Android phones, we found we could reliably capture about 3 of those per second, so it would take about 6 minutes to get the whole thing out. 3 per second doesn’t need a paging method smarter than “have someone press Page Down, but not too quickly”.
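                                                            For what it’s worth, the arithmetic checks out; here’s the back-of-the-envelope version (same assumptions as above: a 2 MB ASCII file, 80x25 screens, 3 screens per second):

```python
size = 2 * 1024 * 1024       # 2 MB of ASCII text, in bytes
chars_per_screen = 80 * 25   # one xterm "page"

screens = size / chars_per_screen
print(int(screens))          # 1048 full screens (plus a partial one)
print(screens / 3 / 60)      # ~5.8 minutes at 3 screens/second
```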

                                                            Nowadays, factoring in the higher frame rate of a phone, but also the inherent flicker (the camera’s 60 fps sounds amazing, but the screen is still 60 Hz) and the fact that text isn’t instantly redrawn on the screen, I think you could get 10-15 “pages” per second, which would get it all out in under two minutes. Getting that kind of paging speed is trickier, though. Still, with paging via Page Down and a good keyboard, you could probably manage 5-6 pages per second, so about three minutes, which isn’t bad either. We didn’t care about OCR back then, but I don’t think it would be a problem today.

                                                            That’s assuming it’s all ASCII text, of course – 2 MB PDFs sometimes turn out to be like 100 pages. Based on my experience skipping classes and learning from other people’s notes, I can tell you it took me about two minutes to exfiltrate 100 written pages with a handheld camera :-P.

                                                            It’s worth noting that in corporate-ish environments the exfiltration story is also approached from another angle. Sensitive documents are often extracted by people who otherwise have legitimate access to them for as long as they need, but are barred from sharing them with the outside world. More often than not they don’t actually need to exfiltrate the whole document; they only need to snap pictures of the relevant parts. They can also snap two or three photos a day.

                                                            1. 1

                                                              You kid (I think) but I would love to read an in-depth blog post about this. Big screen + scroll quickly + take a video + OCR = ???

                                                              Would be fascinating because it would touch things like

                                                              1. How fast or slow do I need to scroll to show all of the text? I.e., to make sure no text is lost between frames
                                                              2. Can you show a full new block of text every frame, or does there need to be some overlap between blocks?
                                                              3. Does it matter that the frames drawn by the screen and the frames captured by the camera will be out of sync?

                                                              etc

                                                              1. 3

                                                                Depends on the software, but if you set the view to a single page, you just have to hold the phone and press Page Down at a steady rhythm while taking a video, and that should do the trick, I reckon.

                                                          2. 0

                                                            None of this dystopian shit is inevitable. Please wake up.