This looks really nice! I tried Guix recently and thought the CLI, documentation, and language (Guile) were really nice and approachable. I hit some problems defining a new package, but I’ll probably have another go now with this tool.
One of the issues was working out what imports to add for definitions that guix import generates, but this UI seems to solve that. Another was that the licenses have unusual GNU names like “Expat” for MIT, and “ASL 2.0” for Apache 2.0, which seem to have carried over to this UI. Maybe the SPDX identifiers could be added as behind-the-scenes tags for the dropdown’s search somehow?
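For context, the import flow I mean looks roughly like this (the package name is just an example):

guix import pypi requests > requests.scm
# The generated snippet is a bare (package ...) form; before it will build with
# something like guix build -f requests.scm, you still have to add the right
# (use-modules ...) lines by hand, which is the part this UI seems to handle.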
The only other major barrier for contribution I can see is trying to navigate GNU Savannah and sending email patches, but maybe others find those easier than I do.
I’ve never really used Savannah, but if you’re submitting patches to any email-based project like Guix and don’t want to figure out the CLI, you can use https://git.sr.ht, which has a GUI for sending email patches from branches you push.
What is really needed, he [Linus Torvalds] said, is to find ways to get away from the email patch model, which is not really working anymore. He feels that way now, even though he is “an old-school email person”.
Am I having a fever dream? I hope they choose Forgejo over GitHub or other proprietary Git forges.
What might happen is that they end up using a variety of forges and many different repos. Today, Linux already has many repos, and maintainers apply email patches to their own repos, then higher up maintainers pull changes from those repos. Today, parts of the DRM subsystem are already hosted on the freedesktop.org GitLab. Other parts of Linux could do the same.
GitHub’s PR system relies on git. A system on top of git could be built where the ‘patcher’ client specifies the URL of their git repo and branch and gets back a UUID for the ‘patch’. The system’s server automatically attempts a git merge with a rebase strategy. If it fails, the patcher must resubmit (even with a different URL and branch) with conflicts resolved. Simply git pushing to the default branch on GitHub will close the pull request; the system knows the remote, so that happens automatically. This can be applied across code hosts. The system has a rule to only merge rebased branches with no conflicts.
Hypothetically, if code hosts are added (hopefully by following some API of the system), a project could have a unified system across code hosts, not caring whether it’s a GH PR or a patch-rebase email (abstract it into a branch model). It could even populate PRs on GitHub, or another platform, regardless of how the patch originated.
I’m sort of working on this secondarily. I’ve proven it works with scripts.
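Not the actual scripts, but the merge rule described above boils down to something like this shell sketch (the URL and branch names are placeholders):

# Fetch the branch the patcher submitted with the 'patch' UUID
git fetch https://example.com/contributor/repo.git feature-branch
# Only merge branches that are already rebased with no conflicts:
# a fast-forward-only merge fails unless the branch sits cleanly on top of main
git checkout main
git merge --ff-only FETCH_HEAD || echo "not rebased cleanly; resubmit"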
As a data point, I tried both Sourcehut and Codeberg (forgejo) last week and thought both were really very well made.
I don’t think Sourcehut’s email workflow works for me personally though, and Codeberg’s CI isn’t quite finished. Sourcehut’s CI was probably the best I’ve ever tried, so my ideal right now would be Forgejo using builds.sr.ht for CI.
Maybe in the future federation will make the boundaries between forges thinner and projects can use multiple at once, and also contributions to non-GitHub projects may increase.
We’re working on the sunburst boards/platforms at lowRISC. Maybe some of the labs and demos that come from that will work with the ASIC too depending on what’s in the package?
splitting commits in an interactive rebase is hard
Hah! That is a solved problem that deserves to be better known.
git commit -am "Staging area, hold my beer as I commit & split this"
git revise --cut HEAD -- file.{c,h}
I’m a big enthusiast for git-revise, but the maintainer has been busy for a while. There are open PRs, including for the filename support as shown. Shameless plug: I have this branch of commits that I hope to upstream (I don’t want to fork it, as I don’t want to take the project in any new direction).
That’s very cool! I find editing hunks quite difficult, so my technique for splitting a commit is:
1. edit the commit I want to split in git rebase -i
2. Start deleting things until everything looks like how I want my pre-split commit to look
3. git commit --fixup HEAD those changes
4. git revert HEAD to make another commit which adds those changes I just deleted (the post-split commit)
5. Squash the fixup commit and reword the revert commit’s message in a second rebase
Being able to edit and test code as normal in step 2 is really helpful. That’s really hard to do when you’re trying to define your cut using unstaged/staged hunks!
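Spelled out as commands, it looks roughly like this; a sketch assuming the branch is based on main and the rebase stops at the commit being split:

git rebase -i main                 # step 1: mark the target commit as 'edit' in the todo list
# step 2: delete code until the tree matches the desired pre-split commit
git commit -a --fixup HEAD         # step 3: record the deletions as a fixup of the stopped commit
git revert --no-edit HEAD          # step 4: the revert re-adds what was deleted (the post-split commit)
git rebase --continue
git rebase -i --autosquash main    # step 5: squash the fixup, reword the revert's message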
See also OpenTitan which I think has similar security goals but without Pluton’s Windows lock-in. I’m not familiar enough with either to know how much they’re similar/different.
[ Disclaimer: I work for Microsoft and have collaborated with the Pluton team on some things, but am not involved in the push to put Pluton everywhere and only talk to the Windows team occasionally. ]
I am familiar with Pluton and have skimmed some of the OpenTitan docs and the docs of some hardware roots of trust from other vendors. They are all very similar: they provide some key management and hardware crypto functionality as fixed-function units and a general-purpose core that’s tightly coupled and provides some more user-friendly (or, in the case of the TPM, standard but pretty awful) APIs for the rest of the system. Pluton has a fairly rich policy language for keys (for example, allowing a key to be used only as input to a KDF and then imposing policies on what the derived key can be used for) and some neat mitigations against physical attackers that, apparently, I’m not allowed to talk about (any time you talk about a particular security defence publicly, it motivates a load of people like @saaramar and his friends to go and try to break it), but it’s not massively different from any of the alternative implementations.
A hardware RoT is basically table stakes for a modern computer now. Apple has one, Android phones have them. TPMs have been around for a while, but they generally fall into two categories of suckiness:
Firmware TPMs are implemented as firmware on the main core. They run in a higher privilege level, but they share all of the execution resources with the main system. TPM operations trigger SMC calls or similar. These are often vulnerable to side channels that allow the keys to be exfiltrated. If the goal is to keep the user’s keys (for WebAuthn and so on) safe from a compromised OS, this is a big problem.
Discrete TPMs are separate chips that are connected to the motherboard. A lot of these just plain suck from a reliability perspective. If a user encrypts their disk with BitLocker and the TPM dies, then they’re stuck if they haven’t properly backed up the recovery keys. Users complain about this a lot. The other problem with discrete TPMs is that they’re connected by wires on the motherboard and so they’re very easy to lie to. An attacker who stole a laptop can easily boot an OS that is allowed to access the disk encryption keys, record everything that the CPU says to the TPM, then boot another OS with the TPM disconnected, replay the messages from the CPU, and unlock the keys.
This means that, for security, you really want a separate core (so isolated from side channels) that’s on package (so hard to physically attack without destroying it). Apple and Google both know that, which is why they put such a thing on their devices. Both Google and Apple have a lot more control over their respective hardware ecosystems than Microsoft, so can do this much more easily.
I strongly suspect that if Intel and AMD had built decent (secure, reliable) on-package TPM implementations then there wouldn’t have been so much of a push for Pluton.
Both Google and Apple have a lot more control over their respective hardware ecosystems than Microsoft, so can do this much more easily.
How about considering that it’s bad for any one company to have complete control over an ecosystem? It’s good that microsoft feels left out for not controlling the PC ecosystem. It’s bad that google and apple dictate what users can and cannot do with their devices.
One of the things I enjoy about your posts is that you’re an ardent advocate for freedom and open-ness in computing but you seem to be reasonable about it, so here’s a question I hope you’ll read in the spirit it was meant rather than an attack:
What would your ideal solution look like in this space? Do you think it would be possible to implement solutions LIKE this in broad concept (a verifiable chain of trust from boot) but that were vendor independent?
This is a really good, and fair, question! I’ve thought about this a fair amount, but I’m definitely not an expert and am easily confused by the many acronyms e.g. from the article. Anyways, from what I can tell, I think having an extra chip, etc. is fine. An ideal solution in this space might be something similar to what they are pushing, but treats the 4 user freedoms[1] as a first-class citizen. Like, I understand that you don’t want bad actors to be able to replace keys or whatever, but it shouldn’t be impossible to do that, and microsoft shouldn’t be the gatekeeper. I understand that you don’t want ‘tampered’ devices to join your network or play your game (because, omg cheaters!!…), but the mechanism used to verify that should allow for exceptions where employees / students / users can use other sysadmin/IT/department-“approved” operating systems, not whatever microsoft says is “trusted”.
This pluton thing seems to run non-free firmware, with 0 chance of me or anyone else being able to build fw for it and use it. The drivers and whatever userspace components required for this thing also seem to be non-free, and windows-only. And if microsoft kills the 3rd party CA for secure boot, then it’s suddenly impossible (I think?) to boot anything else but windows. Pluton is 100% microsoft / windows centric, so if it works with anything outside of their products then it’s a bug / coincidence, basically.
Maybe I’m being overly cynical, but this seems like the start of the “Extinguish” phase of EEE… Microsoft: “you don’t need to install Linux, *BSD, whatever anymore, you can run the same userspace under Windows with WSL now! So no one should have a problem with these changes!” OEMs: “Yeah!”
Anyways, I can’t really go into any specifics about how I’d come up with something better, since a lot of the technicalities are waaaay over my head. My main beef with this is it’s microsoft doing what microsoft has always done for the last 30+ years, non-microsoft customers be damned. Thanks for the message though, I want to keep thinking about your question, because it’s spot on… this pluton stuff does attempt to address some real problems (though I’d argue that combatting game cheaters by throwing away user freedoms is not a real problem/solution), and folks are not going to easily dismiss pluton if the alternative is “do nothing” about the real problems it does attempt to address.
Honest question: Other than the Pinebooks and the System76 machines, how many computers buyable by consumers on the market today actually meet these criteria?
The Lenovo laptops many Linux fans prize have proprietary binary blobs all through them as far as I understand.
I love the principles you’re citing here, I’m just curious how pragmatic it is for many people to actually live by them.
I’m replying to you now on a Librem 14 laptop, which runs coreboot and has had the ME “neutered”. The CPU is an Intel Skylake variant (Coffee Lake, I think?), because I believe later CPU generations require even more non-free firmware and I don’t think Purism has figured out how to proceed there. There’s also the Framework laptop (and a recent announcement from HP), but those run more non-free blobs. And I think Dell is still selling their XPS 13 with Linux pre-installed. But as I mentioned in the Pluton article comments, being able to install Linux isn’t really helpful for promoting/realizing the 4 freedoms and such. On the bright side, there are more laptops shipping with Linux today than I ever remember in the past. On the other hand, this may be the peak of the “golden age” of having multiple choices for an out-of-box Linux system :(
Ya, the situation now is becoming less and less ideal. And the “free software or bust” community isn’t big or strong enough to counter this movement. We need legislative action to help.. guide chip factories and OEMs, which (IIRC you’re in the US), isn’t going to happen here :P
I feel like the only REAL thing we can do other than shaking our fists and venting on the various forums is vote with our wallets and try to convince others to do the same.
We can build software and compete with these clowns. The chips are coming (it takes a stupidly long time to go from idea to product in the chip world) and we’ve got to work on software distribution models that are democratic and can be trusted. I feel very ignored.
Great! Sincerely, I would love love LOVE to see this happen!
The problem I see is that the way we currently allocate resources in a capitalist society is to put dollars towards engineering hours.
Volunteers can move mountains, but at the end of the day even the most virtuous free software advocate has to keep a roof over their heads and feed themselves.
Yeah, exactly. We need to pay people and we can only do it by making up our own money… the only way people will accept this money is if it is perceived as legit (i.e. it has to be persuasive, and that can only happen if enough people are defending the definition).
HEADS is already that to some extent; you can already have a Nitropad (but yes, this goes further than that by having the chip in the processor and hiding the keys better; in principle it is the same, and someone already mentioned [in this thread] how you could maintain a HEADS-style model with this kind of tech… personally I think there are other attack surfaces to think about before overcommitting to this aspect).
A hardware RoT is basically table stakes for a modern computer now.
This is not a universally-held opinion, especially given the inability to independently verify the correctness of such hardware. TPM manufacturers have not been forthcoming with the community.
Just because Apple and Google have monopolistic control on their devices, does that mean Microsoft does too? I agree with the OP that contributing to an open, libre platform would garner more trust and transparency and not let Windows Update be the arbiter for changes to the unit.
While the article can be seen a bit as a slippery slope, the thought exercise is valuable to consider what could happen and I don’t see a good reason why we should trust what the vendors are doing. I recently purchased a laptop and while a coworker in the EU could buy his device without Windows, my region had no such option. If these features are in the future a requirement to ‘use’ the device, I sure as heck better be able to opt out of Windows—and not just at a checkout, but after buying a used device as well. Just as I wish it were less of a hassle to set up a de-Googled custom ROM of Android, I want the laptop/desktop space to remain ‘hackable’ for the consumer.
Just because Apple and Google have monopolistic control on their devices, does that mean Microsoft does too?
No; it’s that a hardware root of trust is an absolute minimum security posture for all competing devices, which means that Windows devices should provide one too.
If these features are in the future a requirement to ‘use’ the device, I sure as heck better be able to opt out of Windows—and not just at a checkout, but after buying a used device as well. Just as I wish it were less of a hassle to set up a de-Googled custom ROM of Android, I want the laptop/desktop space to remain ‘hackable’ for the consumer.
Pluton-enabled devices are far more friendly than Android devices in this regard. You can toggle a configuration switch to use the other root cert and then there’s a process that’s used by a load of Linux vendors to get their copies of GRUB signed and to boot Linux with a full secure boot chain. If you boot from a dm-verity root fs, then everything in the root filesystem is similarly protected. Pluton then exposes TPM 2.0 APIs to Linux, which can then use the secure boot attestation to authorise disclosure of keys for LUKS and mount a dm-crypt (+dm-integrity)-protected mutable filesystem.
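On the Linux side, that flow is roughly what systemd-cryptenroll automates today; a sketch, not Pluton-specific, with the device path and PCR choice only illustrative:

# Seal an extra LUKS key slot to the TPM, bound to the Secure Boot state (PCR 7),
# so the key is only released when the measured boot chain matches
systemd-cryptenroll --tpm2-device=auto --tpm2-pcrs=7 /dev/nvme0n1p2
# and in /etc/crypttab, let the initrd try the TPM before asking for a passphrase:
#   cryptroot  /dev/nvme0n1p2  none  tpm2-device=auto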
Secure according to which measure though? I should be able to detach my storage and mount it on another machine to read and repair it if I know my keys. How do I get these TPM keys if it’s in the black box on the device I own (besides side channel attacks)? Even if I could do this through LUKS or whatever, do I want to? LUKS or a filesystem’s encryption already provides me pretty good encryption and I know who and what generated the keys and where they live because I did it when I formatted my drive. Pluton’s “chip-to-cloud security vision” sounds like complexity in that pipeline that opens me up to a different vector of issues.
When you couple Pluton with Smart App Manager (forgot the name) doesn’t this allow Microsoft to be the arbiter of what apps are good/bad and what it considers safe/compromised (like the issues Android users can have with SafetyNet if they want a custom ROM or root access to their purchased device)… and its store to be the eventual final ‘trusted’ space to get apps just like the Apple and Play Stores?
I know this is just a flurry of questions and I don’t think it’s fair you need to play spokesperson, but TPM was very unpopular and now it’s a requirement to upgrade—and Pluton is disabled by default by Lenovo and Dell, but why if it’s so safe? Who’s to say users want this? I can disagree but understand why businesses would, but I don’t understand how this should just be accepted as a good thing for personal and private users to not get the keys to their own device. I can have paper backups in a fire safe for most other forms of encryption but I can’t for TPM?
I should be able to detach my storage and mount it on another machine to read and repair it if I know my keys. How do I get these TPM keys if it’s in the black box on the device I own (besides side channel attacks)?
I’m not a LUKS expert, but I believe that it stores a block that contains the disk keys, encrypted with a key on the TPM. The TPM will decrypt this block and then the kernel does all of the crypto with keys that it knows for normal operations. It will also spit out a recovery key, which is the decrypted on-disk key and lets you mount the disk on another system.
On Windows, for domain-connected machines, the BitLocker keys can be automatically stored in active directory, so that your IT folks can unlock the disk but thieves can’t (assuming no BitLocker vulnerabilities). I don’t know if Red Hat or Canonical provide something like this for LUKS, but it wouldn’t be too hard to build on an LDAP server.
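On the LUKS side, the rough equivalent of BitLocker’s recovery key is easy to set up, though escrowing it centrally is left to whatever directory service you run; the device path is only an example:

# Generate and enroll a human-readable recovery key (printed once; store or escrow it)
systemd-cryptenroll --recovery-key /dev/nvme0n1p2
# Also worth keeping: a backup of the LUKS header, which contains the key slots
cryptsetup luksHeaderBackup /dev/nvme0n1p2 --header-backup-file luks-header.img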
LUKS or a filesystem’s encryption already provides me pretty good encryption and I know who and what generated the keys and where they live because I did it when I formatted my drive
And how do you enter them on boot? You need either an external key (stealable along with your laptop) or you remember a long pass phrase.
How do you know that the kernel that you’re entering the passphrase into is really the Linux kernel that you trust? Without a secure boot chain, someone who briefly has physical access would be able to replace your kernel or GRUB (whatever is not on the encrypted disk, which you use to mount the encrypted FS) with one that will store the key somewhere insecure for when they steal the machine.
When you couple Pluton with Smart App Manager (forgot the name) doesn’t this allow Microsoft to be the arbiter of what apps are good/bad and what it considers safe/compromised (like the issues Android users can have with SafetyNet if they want a custom ROM or root access to their purchased device)… and its store to be the eventual final ‘trusted’ space to get apps just like the Apple and Play Stores?
You don’t need a TPM for Windows (or any other OS) to decide whether to run a signed binary or not. Pluton changes nothing here.
Who’s to say users want this?
At least for corporate customers, our market research data does. Home users also like things like Windows Hello, which requires a secure credential store to support WebAuthn and allow you to sign into web sites without needing a password. I, personally, like knowing that my GitHub login credentials (for example) can’t be exfiltrated by a kernel-level compromise on my machine. I like knowing that if someone steals my computer, they don’t get access to my files. And I really like that this is now becoming a baseline security standard and so I get the same guarantees from my Mac, my Windows machine, my Android phone and my iPad.
I can’t say I trust IT folks or keys on any server that isn’t mine. At this point I don’t know that I could work with an employer where it’s not BYOD, so unsure if this overlaps with me. I could maybe understand no one having access to the private keys, but it sounds like someone does and that someone isn’t me.
And how do you enter them on boot?
I have a long arduous password, and I’m pretty fine with this. It’s written down in a safe place too. I’m not okay with this key being in the black box that connects to a server.
With NixOS though, the encryption of a lot of the device is irrelevant and actively harmful to encryption since the machine becomes so stateless that an attacker could work backwards to figure out private keys given so many things are reproducible to an exact state (so things not in /home, /var, and similar aren’t encrypted). I’d be curious how well a system with a general attack would handle the Nix store needing to be a certain way to boot or not–not anywhere near an expert at this level of the machine.
You don’t need a TPM [..] run a signed binary
But Pluton can help act like SafetyNet, no? And it’s how I ended up switching banks after they no longer let me use MagiskHide, because my device should be mine and if I want root to install some privacy apps and kick out parts of Google, it’s not my bank’s business.
Windows Hello … guarantees [..] from my Android phone
These aren’t things I generally want or care about–nor do I want to trust some AI’s facial recognition algo nor the internet connection and Microsoft account requirement for setup. Some passwords are in my head, but most things are behind FIDO2 or TOTP 2FA–both of which do a decent job with the password situation without involving that black box or having a single point of failure. My phone even de-Google’d often times feels more like a kiosk than any other device I’ve had. If Linux support was just a little better, I’d drive that instead too.
At least for corporate customers, our market research data does
Dell won’t include Microsoft’s Pluton technology in most of its commercial PCs, telling The Register: “Pluton does not align with Dell’s approach to hardware security and our most secure commercial PC requirements.”
Says that Pluton seems off to not just the Linux base, but OEMs too. Microsoft having no concern about Dell & Lenovo seems a bit odd.
I think it’s odd that you speak in such absolutist terms about your “ownership” of your devices, and your refusal to let anyone else ever compromise your “ownership” by setting out terms on what you can or can’t do, but every one of your examples actually consists of you demanding access to other people’s devices (well, their services, which is the same thing because those services run on their devices) and you demanding the right to dictate to them the terms on which you will receive that access. Do they not have the same rights of “ownership” over their things as you? Do they not have the same right to set terms of their choosing and tell you that you don’t “own” their devices?
When the service they’re offering is access to something I own, I at least would agree that they have an obligation to let me shoot myself in the foot if that’s what I really want. Show a warning about installing on a rooted device, sure - but don’t go on to block access. For most of us, a bank isn’t really exposed to any meaningful risk if I install their app on a rooted device - only I am, because it’s my banking info that’s being exposed to other malicious apps on the device. I didn’t see any other examples of access to things that companies own in that post, unless you’re arguing that it is Google’s phone (which may be the case in practice but certainly shouldn’t be).
EDIT: If Pluton had some support that allowed the user to control its decisions I think it would be a lot more comfortable. It wouldn’t have to go through the OS, since obviously that’d just regress to square one. It wouldn’t have to be convenient, either, since it should be a pretty rare case that you need to do it. It probably should be pretty cheap compared to processors themselves.
I’d be happy to have some peripheral you plug the chip into during build, and need to enter a code that came with the chip’s manual from the manufacturer, at which point you have to change the code. Chip won’t boot unless the code’s changed. If you don’t care, you just do that and then plug it into the socket as normal; otherwise you can edit the roots of trust freely and be on your way until the next time you want a change, in which case you have to go through the ordeal of unseating the processor from your motherboard and doing it again.
I at least would agree that they have an obligation to let me shoot myself in the foot if that’s what I really want. Show a warning about installing on a rooted device, sure - but don’t go on to block access.
Again: they’re not obligated to give you access to their systems. Remembering that the example cited was a bank, the user has plenty of other options besides using their rooted mobile device where the bank can no longer trust that, for example, its own app is being run unmodified. They can almost certainly still access via a web browser (which is inherently a less trustworthy environment and thus one the bank is less likely to restrict as much as app access), or call, or go in person to a branch.
And by saying that they should still provide access to their systems you are still effectively claiming the right to dictate to them how they will use what they own. Which is what the poster I replied to was saying they would not allow anyone else to do to them. The position thus remains inconsistent.
Pluton-enabled devices are far more friendly than Android devices in this regard. You can toggle a configuration switch to use the other root cert […]
Currently, yeah. At least the Lenovo devices people were complaining about did. But I’m pretty sure this is up to the vendor, right? Just like whether the bootloader on an Android device is unlockable or not.
Thank you very much for this response. It’s super refreshing to see someone address the actual technology aspect.
Is there any reason Linux distros couldn’t run on Plutonium machines if, for example, distros worked with MSFT in a similar way to what they already do to get their keys signed for SecureBoot?
Pardon if it’s a silly question, my understanding of the technical details in this space is tenuous at best.
Is there any reason Linux distros couldn’t run on Plutonium machines if, for example, distros worked with MSFT in a similar way to what they already do to get their keys signed for SecureBoot?
Absolutely none. By default, Pluton contains two public keys for SecureBoot: one used by Microsoft to sign Windows and one used to sign other bootloaders. Linux distros can use this GitHub repo to provide a GRUB shim to sign. That shim will then include the public key that they use for signing their kernel and so on, and so lets you establish a complete secure boot chain.
The difficult thing at the moment is to provision a personal or per-org key. The thing I’d like is to be able to install my own public key so that the device will boot only a kernel that I, personally, sign and so I can compile my own custom kernel with my favourite configuration options, sign it, and boot it. A process for doing that securely is fairly hard.
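For comparison, the closest thing I know of today goes through shim’s Machine Owner Key list rather than a personal platform key, so it still chains through the Microsoft-signed shim; a rough sketch, with the key names as placeholders:

# Create a signing key and certificate for locally built kernels
openssl req -new -x509 -newkey rsa:2048 -nodes -days 3650 \
  -subj "/CN=my kernel signing key/" -keyout MOK.key -out MOK.crt
openssl x509 -in MOK.crt -outform DER -out MOK.der
# Sign the kernel image
sbsign --key MOK.key --cert MOK.crt --output bzImage.signed arch/x86/boot/bzImage
# Queue the certificate for enrolment; shim's MokManager asks for the password
# on the next boot and adds the key to the machine's trusted list
mokutil --import MOK.der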
I’m really excited about releases like these. It’s just “finishing” the language and not adding any new features conceptually, just making it all more cohesive. The lack of Enum default derive is one of those things I run into rarely, but adding it in now reduces the conceptual overhead of Rust since deriving default now works on both Enums and Structs.
I also think this shows the benefit of Rust’s release model, which is that smaller fixes and stabilizations can be made over time, and don’t have to be part of a larger release. I’m curious how the potential Rust stabilization in GCC affects things, especially when smaller fixes in a release like this might be nice in older versions of Rust (and as far as I know GCC is targeting an older Rust version).
Rust has a fixed 6-week release train model. Nobody decides which release is going to be small or not. When stuff is ready, it lands.
Once in a while a large feature that took years to develop lands, and people freak out and proclaim that Rust is changing too fast and rushing things, as if it invented and implemented every feature in 6 weeks.
In this release: the cargo add feature request was filed 8 years ago. The implementation issue was opened 4 years ago. It waited for a better TOML parser/serializer to be developed, and once that happened, the replacement work started 5 months ago.
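For anyone who hasn’t tried it yet, the subcommand edits Cargo.toml in place; the crate names here are just examples:

# Add dependencies without hand-editing Cargo.toml
cargo add serde --features derive
cargo add tokio@1 --features rt-multi-thread
# Dev-only dependency
cargo add --dev criterion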
Yeah the gcc-rs project. I wonder about certain stabilizations in later versions of Rust which are very easily added to earlier versions of Rust built by gcc-rs. I don’t think it will affect mainline Rust, but if certain nice-to-haves, or more importantly unsound fixes, are backported for gcc-rs that could cause an unfortunate schism in the ecosystem.
I haven’t seen or heard of anything indicating this might happen, but with multiple implementations in-use I do think it is something that will eventually occur (especially for safety-related and unsound concerns)
I have never liked the idea of a Default trait or typeclass. Default with respect to what operation? Most times people want defaulting, they seem to have some kind of monoidal operation in mind.
Initialization, right? I don’t see how one would use the trait for any other operation. To me it seems quite natural that a number has a “reasonable default” (0) as well as a string (""). It’s not like the language forces you to use Default in case you have other defaults in mind.
Interesting that any trait with only methods and no functions can be implemented for the never type
In other contexts, that is called the bottom type, because it sits at the bottom of the type lattice, and is therefore a subtype of every other type. So I find this fairly unsurprising.
No. ‘bottom’ refers to subtyping relationships, not compositional relationships. It’s true you can make anything from (), |, and *. But () isn’t a subtype of ()|().
E: this is general nomenclature. It’s possible rust has its own vocabulary I don’t know about.
Would ‘grepability’ be solved by having the source code of direct dependencies downloaded to a subfolder of the project? Almost a decade ago when I used to code in Java, Maven would download the source code of all the dependencies and it was trivial to search through them if I needed to. I have not messed with Rust yet, so I am wondering if Cargo has similar capabilities?
This looks really nice! I tried Guix recently and thought the CLI, documentation, and language (Guile) were really nice and approachable. I hit some problems defining a new package, but I’ll probably have another go now with this tool.
One of the issues was working out what imports to add for definitions that guix import generates, but this UI seems to solve that. Another was that the licenses have unusual GNU names like “Expat” for MIT, and “ASL 2.0” for Apache 2.0, which seem to have carried over to this UI. Maybe the SPDX identifiers could be added as behind-the-scenes tags for the dropdown’s search somehow?

The only other major barrier for contribution I can see is trying to navigate GNU Savannah and sending email patches, but maybe others find those easier than I do.
I’ve never really used Savannah, but if you’re submitting patches to any email-based project like Guix and don’t want to figure out the CLI, you can use https://git.sr.ht, which has a GUI for sending email patches from branches you push.
Thanks! I actually tried this out very recently but somehow it didn’t cross my mind that it could send patches to non-Sourcehut projects.
I also found this CLI tool pyonji, which is meant to be easier to use than git-send-email, but I haven’t tried it yet.
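For reference, the plain git-send-email flow those tools wrap looks roughly like this; the SMTP details are placeholders, and the list address is the one the Guix manual gives:

# One-time SMTP setup (many providers want an app password)
git config sendemail.smtpServer smtp.example.com
git config sendemail.smtpUser me@example.com
git config sendemail.smtpEncryption tls
# Send the last commit as a patch to the project's patch list, after a chance to edit it
git send-email --annotate --to=guix-patches@gnu.org -1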
Am I having a fever dream? I hope they choose Forgejo over GitHub or other proprietary Git forges.
What might happen is that they end up using a variety of forges and many different repos. Today, Linux already has many repos, and maintainers apply email patches to their own repos, then higher up maintainers pull changes from those repos. Today, parts of the DRM subsystem are already hosted on the freedesktop.org GitLab. Other parts of Linux could do the same.
I think Gerrit is pretty close to the existing model: patches + review on the commit message.
Phabricator as well, but it’s not well maintained nowadays.
This is so sad. Patch-rebase is worlds better than GitHub/GitLab/Gitea’s branch-bound merge model.
GitHub’s PR system relies on git. A system on top of git could be built where the ‘patcher’ client specifies the URL of their git repo and branch and gets back a UUID for the ‘patch’. The system’s server automatically attempts a git merge with a rebase strategy. If it fails, the patcher must resubmit (even with a different URL and branch) with conflicts resolved. Simply git pushing to the default branch on GitHub will close the pull request; the system knows the remote, so that happens automatically. This can be applied across code hosts. The system has a rule to only merge rebased branches with no conflicts.
Hypothetically, if code hosts are added (hopefully by following some API of the system), a project could have a unified system across code hosts, not caring whether it’s a GH PR or a patch-rebase email (abstract it into a branch model). It could even populate PRs on GitHub, or another platform, regardless of how the patch originated.
I’m sort of working on this secondarily. I’ve proven it works with scripts.
What do you think of SourceHut?
SourceHut uses the e-mail model Linus just said is not really working anymore.
As a data point, I tried both Sourcehut and Codeberg (forgejo) last week and thought both were really very well made.
I don’t think Sourcehut’s email workflow works for me personally though, and Codeberg’s CI isn’t quite finished. Sourcehut’s CI was probably the best I’ve ever tried, so my ideal right now would be Forgejo using builds.sr.ht for CI.
Maybe in the future federation will make the boundaries between forges thinner and projects can use multiple at once, and also contributions to non-GitHub projects may increase.
Very cool! I’m hoping to get to work with CHERIoT at my job eventually and we have some of these boards, so this will be fun to try
Can you share what you will be doing with CHERIoT? We’re always looking for customers for the ASIC version…
We’re working on the sunburst boards/platforms at lowRISC. Maybe some of the labs and demos that come from that will work with the ASIC too depending on what’s in the package?
Ah, I wasn’t aware that you worked there. I’m looking forward to lowRISC starting to engage with the CHERIoT open source project at some point.
Hah! That is a solved problem that deserves to be better known.
I’m a big enthusiast for git-revise, but the maintainer has been busy for a while. There are open PRs, including for the filename support as shown. Shameless plug: I have this branch of commits that I hope to upstream (I don’t want to fork it, as I don’t want to take the project in any new direction).
That’s very cool! I find editing hunks quite difficult, so my technique for splitting a commit is:
1. edit the commit I want to split in git rebase -i
2. Start deleting things until everything looks like how I want my pre-split commit to look
3. git commit --fixup HEAD those changes
4. git revert HEAD to make another commit which adds those changes I just deleted (the post-split commit)
5. Squash the fixup commit and reword the revert commit’s message in a second rebase

Being able to edit and test code as normal in step 2 is really helpful. That’s really hard to do when you’re trying to define your cut using unstaged/staged hunks!
See also OpenTitan which I think has similar security goals but without Pluton’s Windows lock-in. I’m not familiar enough with either to know how much they’re similar/different.
[ Disclaimer: I work for Microsoft and have collaborated with the Pluton team on some things, but am not involved in the push to put Pluton everywhere and only talk to the Windows team occasionally. ]
I am familiar with Pluton and have skimmed some of the OpenTitan docs and the docs of some hardware roots of trust from other vendors. They are all very similar: they provide some key management and hardware crypto functionality as fixed-function units and a general-purpose core that’s tightly coupled and provides some more user-friendly (or, in the case of the TPM, standard but pretty awful) APIs for the rest of the system. Pluton has a fairly rich policy language for keys (for example, allowing a key to be used only as input to a KDF and then imposing policies on what the derived key can be used for) and some neat mitigations against physical attackers that, apparently, I’m not allowed to talk about (any time you talk about a particular security defence publicly, it motivates a load of people like @saaramar and his friends to go and try to break it), but it’s not massively different from any of the alternative implementations.
A hardware RoT is basically table stakes for a modern computer now. Apple has one, Android phones have them. TPMs have been around for a while, but they generally fall into the two categories of suckiness described above: firmware TPMs (which share execution resources with the main system and are prone to side channels) and discrete TPMs (which are unreliable and, being wired to the motherboard, easy to lie to).
This means that, for security, you really want a separate core (so isolated from side channels) that’s on package (so hard to physically attack without destroying it). Apple and Google both know that, which is why they put such a thing on their devices. Both Google and Apple have a lot more control over their respective hardware ecosystems than Microsoft, so can do this much more easily.
I strongly suspect that if Intel and AMD had built decent (secure, reliable) on-package TPM implementations then there wouldn’t have been so much of a push for Pluton.
How about considering that it’s bad for any one company to have complete control over an ecosystem? It’s good that microsoft feels left out for not controlling the PC ecosystem. It’s bad that google and apple dictate what users can and cannot do with their devices.
One of the things I enjoy about your posts is that you’re an ardent advocate for freedom and open-ness in computing but you seem to be reasonable about it, so here’s a question I hope you’ll read in the spirit it was meant rather than an attack:
What would your ideal solution look like in this space? Do you think it would be possible to implement solutions LIKE this in broad concept (a verifiable chain of trust from boot) but that were vendor independent?
heh, thanks :)
This is a really good, and fair, question! I’ve thought about this a fair amount, but I’m definitely not an expert and am easily confused by the many acronyms e.g. from the article. Anyways, from what I can tell, I think having an extra chip, etc. is fine. An ideal solution in this space might be something similar to what they are pushing, but treats the 4 user freedoms[1] as a first-class citizen. Like, I understand that you don’t want bad actors to be able to replace keys or whatever, but it shouldn’t be impossible to do that, and microsoft shouldn’t be the gatekeeper. I understand that you don’t want ‘tampered’ devices to join your network or play your game (because, omg cheaters!!…), but the mechanism used to verify that should allow for exceptions where employees / students / users can use other sysadmin/IT/department-“approved” operating systems, not whatever microsoft says is “trusted”.
This pluton thing seems to run non-free firmware, with 0 chance of me or anyone else being able to build fw for it and use it. The drivers and whatever userspace components required for this thing also seem to be non-free, and windows-only. And if microsoft kills the 3rd party CA for secure boot, then it’s suddenly impossible (I think?) to boot anything else but windows. Pluton is 100% microsoft / windows centric, so if it works with anything outside of their products then it’s a bug / coincidence, basically.
Maybe I’m being overly cynical, but this seems like the start of the “Extinguish” phase of EEE… Microsoft: “you don’t need to install Linux, *BSD, whatever anymore, you can run the same userspace under Windows with WSL now! So no one should have a problem with these changes!” OEMs: “Yeah!”
Anyways, I can’t really go into any specifics about how I’d come up with something better, since a lot of the technicalities are waaaay over my head. My main beef with this is it’s microsoft doing what microsoft has always done for the last 30+ years, non-microsoft customers be damned. Thanks for the message though, I want to keep thinking about your question, because it’s spot on… this pluton stuff does attempt to address some real problems (though I’d argue that combatting game cheaters by throwing away user freedoms is not a real problem/solution), and folks are not going to easily dismiss pluton if the alternative is “do nothing” about the real problems it does attempt to address.
Honest question: Other than the Pinebooks and the System76 machines, how many computers buyable by consumers on the market today actually meet these criteria?
The Lenovo laptops many Linux fans prize have proprietary binary blobs all through them as far as I understand.
I love the principles you’re citing here, I’m just curious how pragmatic it is for many people to actually live by them.
I’m replying to you now on a Librem 14 laptop, which runs coreboot and has had the ME “neutered”. The CPU is an Intel Skylake variant (Coffee Lake, I think?), because I believe later CPU generations require even more non-free firmware and I don’t think Purism has figured out how to proceed there. There’s also the Framework laptop (and a recent announcement from HP), but those run more non-free blobs. And I think Dell is still selling their XPS 13 with Linux pre-installed. But as I mentioned in the Pluton article comments, being able to install Linux isn’t really helpful for promoting/realizing the 4 freedoms and such. On the bright side, there are more laptops shipping with Linux today than I ever remember in the past. On the other hand, this may be the peak of the “golden age” of having multiple choices for an out-of-box Linux system :(
Ya, the situation now is becoming less and less ideal. And the “free software or bust” community isn’t big or strong enough to counter this movement. We need legislative action to help.. guide chip factories and OEMs, which (IIRC you’re in the US), isn’t going to happen here :P
Exactly.
I feel like the only REAL thing we can do other than shaking our fists and venting on the various forums is vote with our wallets and try to convince others to do the same.
groans
We can build software and compete with these clowns. The chips are coming (it takes a stupidly long time to go from idea to product in the chip world) and we’ve got to work on software distribution models that are democratic and can be trusted. I feel very ignored.
Great! Sincerely, I would love love LOVE to see this happen!
The problem I see is that the way we currently allocate resources in a capitalist society is to put dollars towards engineering hours.
Volunteers can move mountains, but at the end of the day even the most virtuous free software advocate has to keep a roof over their heads and feed themselves.
It’s a hard problem.
Yeah, exactly. We need to pay people and we can only do it by making up our own money… the only way people will accept this money is if it is perceived as legit (i.e. it has to be persuasive, and that can only happen if enough people are defending the definition).
HEADS is already that to some extent; you can already have a Nitropad (but yes, this goes further than that by having the chip in the processor and hiding the keys better; in principle it is the same, and someone already mentioned [in this thread] how you could maintain a HEADS-style model with this kind of tech… personally I think there are other attack surfaces to think about before overcommitting to this aspect).
This is not a universally-held opinion, especially given the inability to independently verify the correctness of such hardware. TPM manufacturers have not been forthcoming with the community.
Just because Apple and Google have monopolistic control on their devices, does that mean Microsoft does too? I agree with the OP that contributing to an open, libre platform would garner more trust and transparency and not let Windows Update be the arbiter for changes to the unit.
While the article can be seen a bit as a slippery slope, the thought exercise is valuable to consider what could happen and I don’t see a good reason why we should trust what the vendors are doing. I recently purchased a laptop and while a coworker in the EU could buy his device without Windows, my region had no such option. If these features are in the future a requirement to ‘use’ the device, I sure as heck better be able to opt out of Windows—and not just at a checkout, but after buying a used device as well. Just as I wish it were less of a hassle to set up a de-Googled custom ROM of Android, I want the laptop/desktop space to remain ‘hackable’ for the consumer.
No; it’s that a hardware root of trust is an absolute minimum security posture for all competing devices, which means that Windows devices should provide one too.
Pluton-enabled devices are far more friendly than Android devices in this regard. You can toggle a configuration switch to use the other root cert and then there’s a process that’s used by a load of Linux vendors to get their copies of GRUB signed and to boot Linux with a full secure boot chain. If you boot from a dm-verity root fs, then everything in the root filesystem is similarly protected. Pluton then exposes TPM 2.0 APIs to Linux, which can then use the secure boot attestation to authorise disclosure of keys for LUKS and mount a dm-crypt (+dm-integrity)-protected mutable filesystem.
Secure according to which measure though? I should be able to detach my storage and mount it on another machine to read and repair it if I know my keys. How do I get these TPM keys if it’s in the black box on the device I own (besides side channel attacks)? Even if I could do this through LUKS or whatever, do I want to? LUKS or a filesystem’s encryption already provides me pretty good encryption and I know who and what generated the keys and where they live because I did it when I formatted my drive. Pluton’s “chip-to-cloud security vision” sounds like complexity in that pipeline that opens me up to a different vector of issues.
When you couple Pluton with Smart App Manager (forgot the name) doesn’t this allow Microsoft to be the arbiter of what apps are good/bad and what it considers safe/compromised (like the issues Android users can have with SafetyNet if they want a custom ROM or root access to their purchased device)… and its store to be the eventual final ‘trusted’ space to get apps just like the Apple and Play Stores?
I know this is just a flurry of questions and I don’t think it’s fair you need to play spokesperson, but TPM was very unpopular and now it’s a requirement to upgrade—and Pluton is disabled by default by Lenovo and Dell, but why if it’s so safe? Who’s to say users want this? I can disagree but understand why businesses would, but I don’t understand how this should just be accepted as a good thing for personal and private users to not get the keys to their own device. I can have paper backups in a fire safe for most other forms of encryption but I can’t for TPM?
I’m not a LUKS expert, but I believe that it stores a block that contains the disk keys, encrypted with a key on the TPM. The TPM will decrypt this block and then the kernel does all of the crypto with keys that it knows for normal operations. It will also spit out a recovery key, which is the decrypted on-disk key and lets you mount the disk on another system.
On Windows, for domain-connected machines, the BitLocker keys can be automatically stored in active directory, so that your IT folks can unlock the disk but thieves can’t (assuming no BitLocker vulnerabilities). I don’t know if Red Hat or Canonical provide something like this for LUKS, but it wouldn’t be too hard to build on an LDAP server.
And how do you enter them on boot? You need either an external key (stealable along with your laptop) or you remember a long pass phrase.
How do you know that the kernel that you’re entering the passphrase into is really the Linux kernel that you trust? Without a secure boot chain, someone who briefly has physical access would be able to replace your kernel or GRUB (whatever is not on the encrypted disk, which you use to mount the encrypted FS) with one that will store the key somewhere insecure for when they steal the machine.
You don’t need a TPM for Windows (or any other OS) to decide whether to run a signed binary or not. Pluton changes nothing here.
At least for corporate customers, our market research data does. Home users also like things like Windows Hello, which requires a secure credential store to support WebAuthn and allow you to sign into web sites without needing a password. I, personally, like knowing that my GitHub login credentials (for example) can’t be exfiltrated by a kernel-level compromise on my machine. I like knowing that if someone steals my computer, they don’t get access to my files. And I really like that this is now becoming a baseline security standard and so I get the same guarantees from my Mac, my Windows machine, my Android phone and my iPad.
I can’t say I trust IT folks or keys on any server that isn’t mine. At this point I don’t know that I could work with an employer where it’s not BYOD, so unsure if this overlaps with me. I could maybe understand no one having access to the private keys, but it sounds like someone does and that someone isn’t me.
I have a long arduous password, and I’m pretty fine with this. It’s written down in a safe place too. I’m not okay with this key being in the black box that connects to a server.
With NixOS though, the encryption of a lot of the device is irrelevant and actively harmful to encryption since the machine becomes so stateless that an attacker could work backwards to figure out private keys given so many things are reproducible to an exact state (so things not in /home, /var, and similar aren’t encrypted). I’d be curious how well a system with a general attack would handle the Nix store needing to be a certain way to boot or not–not anywhere near an expert at this level of the machine.

But Pluton can help act like SafetyNet, no? And it’s how I ended up switching banks after they no longer let me use MagiskHide, because my device should be mine and if I want root to install some privacy apps and kick out parts of Google, it’s not my bank’s business.
These aren’t things I generally want or care about–nor do I want to trust some AI’s facial recognition algo nor the internet connection and Microsoft account requirement for setup. Some passwords are in my head, but most things are behind FIDO2 or TOTP 2FA–both of which do a decent job with the password situation without involving that black box or having a single point of failure. My phone even de-Google’d often times feels more like a kiosk than any other device I’ve had. If Linux support was just a little better, I’d drive that instead too.
Meanwhile at The Register: https://www.theregister.com/2022/03/09/dell_pluton_microsoft/
Says that Pluton seems off to not just the Linux base, but OEMs too. Microsoft having no concern about Dell & Lenovo seems a bit odd.
I think it’s odd that you speak in such absolutist terms about your “ownership” of your devices, and your refusal to let anyone else ever compromise your “ownership” by setting out terms on what you can or can’t do, but every one of your examples actually consists of you demanding access to other people’s devices (well, their services, which is the same thing because those services run on their devices) and you demanding the right to dictate to them the terms on which you will receive that access. Do they not have the same rights of “ownership” over their things as you? Do they not have the same right to set terms of their choosing and tell you that you don’t “own” their devices?
When the service they’re offering is access to something I own, I at least would agree that they have an obligation to let me shoot myself in the foot if that’s what I really want. Show a warning about installing on a rooted device, sure - but don’t go on to block access. For most of us, a bank isn’t really exposed to any meaningful risk if I install their app on a rooted device - only I am, because it’s my banking info that’s being exposed to other malicious apps on the device. I didn’t see any other examples of access to things that companies own in that post, unless you’re arguing that it is Google’s phone (which may be the case in practice but certainly shouldn’t be).
EDIT: If Pluton had some support that allowed the user to control its decisions I think it would be a lot more comfortable. It wouldn’t have to go through the OS, since obviously that’d just regress to square one. It wouldn’t have to be convenient, either, since it should be a pretty rare case that you need to do it. It probably should be pretty cheap compared to processors themselves.
I’d be happy to have some peripheral you plug the chip into during build, and need to enter a code that came with the chip’s manual from the manufacturer, at which point you have to change the code. Chip won’t boot unless the code’s changed. If you don’t care, you just do that and then plug it into the socket as normal; otherwise you can edit the roots of trust freely and be on your way until the next time you want a change, in which case you have to go through the ordeal of unseating the processor from your motherboard and doing it again.
Again: they’re not obligated to give you access to their systems. Remembering that the example cited was a bank, the user has plenty of other options besides using their rooted mobile device where the bank can no longer trust that, for example, its own app is being run unmodified. They can almost certainly still access via a web browser (which is inherently a less trustworthy environment and thus one the bank is less likely to restrict as much as app access), or call, or go in person to a branch.
And by saying that they should still provide access to their systems you are still effectively claiming the right to dictate to them how they will use what they own. Which is what the poster I replied to was saying they would not allow anyone else to do to them. The position thus remains inconsistent.
Currently, yeah. At least the Lenovo devices people were complaining about did. But I’m pretty sure this is up to the vendor, right? Just like whether the bootloader on an Android device is unlockable or not.
No, supporting the alternative root is a requirement for certification from Microsoft.
I am sure that that requirement is antitrust CYA, but I’m still happy y’all have it. :)
Thank you very much for this response. It’s super refreshing to see someone address the actual technology aspect.
Is there any reason Linux distros couldn’t run on Plutonium machines if, for example, distros worked with MSFT in a similar way to what they already do to get their keys signed for SecureBoot?
Pardon if it’s a silly question, my understanding of the technical details in this space is tenuous at best.
Absolutely none. By default, Pluton contains two public keys for SecureBoot: one used by Microsoft to sign Windows and one used to sign other bootloaders. Linux distros can use this GitHub repo to provide a GRUB shim to sign. That shim will then include the public key that they use for signing their kernel and so on, and so lets you establish a complete secure boot chain.
The difficult thing at the moment is to provision a personal or per-org key. The thing I’d like is to be able to install my own public key so that the device will boot only a kernel that I, personally, sign and so I can compile my own custom kernel with my favourite configuration options, sign it, and boot it. A process for doing that securely is fairly hard.
I’m really excited about releases like these. It’s just “finishing” the language and not adding any new features conceptually, just making it all more cohesive. The lack of Enum default derive is one of those things I run into rarely, but adding it in now reduces the conceptual overhead of Rust since deriving default now works on both Enums and Structs.
I also think this shows the benefit of Rust’s release model, which is that smaller fixes and stabilizations can be made over time, and don’t have to be part of a larger release. I’m curious how the potential Rust stabilization in GCC affects things, especially when smaller fixes in a release like this might be nice in older versions of Rust (and as far as I know GCC is targeting an older Rust version).
Rust has a fixed 6-week release train model. Nobody decides which release is going to be small or not. When stuff is ready, it lands.
Once in a while a large feature that took years to develop lands, and people freak out and proclaim that Rust is changing too fast and rushing things, as if it invented and implemented every feature in 6 weeks.
In this release: the cargo add feature request was filed 8 years ago. The implementation issue was opened 4 years ago. It waited for a better TOML parser/serializer to be developed, and once that happened, the replacement work started 5 months ago.

This piques my interest. What library is this? Does it maintain comments/formatting?
The crate is toml_edit, and it does preserve comments and (most) formatting.
Maybe something to format Cargo.toml files could be helpful as well?
What do you mean? The gcc-rs project? I’d hope it doesn’t affect mainline Rust at all.
Yeah the gcc-rs project. I wonder about certain stabilizations in later versions of Rust which are very easily added to earlier versions of Rust built by gcc-rs. I don’t think it will affect mainline Rust, but if certain nice-to-haves, or more importantly unsound fixes, are backported for gcc-rs that could cause an unfortunate schism in the ecosystem.
I haven’t seen or heard of anything indicating this might happen, but with multiple implementations in-use I do think it is something that will eventually occur (especially for safety-related and unsound concerns)
I have never liked the idea of a Default trait or typeclass. Default with respect to what operation? Most times people want defaulting, they seem to have some kind of monoidal operation in mind.

Initialization, right? I don’t see how one would use the trait for any other operation. To me it seems quite natural that a number has a “reasonable default” (0) as well as a string (""). It’s not like the language forces you to use Default in case you have other defaults in mind.

How do I ssh -X with wayland?

waypipe will proxy Wayland messages similarly to ssh -X.
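A minimal usage sketch, assuming waypipe is installed on both machines; the host and the client program are placeholders:

# waypipe wraps ssh and proxies the Wayland protocol over the connection,
# so the remote client renders on the local compositor
waypipe ssh user@remote-host foot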
.Stabilized Not for !.
Interesting that any trait with only methods and no functions can be implemented for the never type since they can’t be called: Primitive Type never.

In other contexts, that is called the bottom type, because it sits at the bottom of the type lattice, and is therefore a subtype of every other type. So I find this fairly unsurprising.
Isn’t that the initial type? I thought the bottom type was () since we can always reach it
No. ‘bottom’ refers to subtyping relationships, not compositional relationships. It’s true you can make anything from (), |, and *. But () isn’t a subtype of ()|().
E: this is general nomenclature. It’s possible rust has its own vocabulary I don’t know about.
Would ‘grepability’ be solved by having the source code of direct dependencies downloaded to a subfolder of the project? Almost a decade ago when I used to code in Java, Maven would download the source code of all the dependencies and it was trivial to search through them if I needed to. I have not messed with Rust yet, so I am wondering if Cargo has similar capabilities?
cargo vendor is what you want for downloading dependency sources.
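In practice that looks something like this; the grep target is just an example, and check the generated config before overwriting an existing one:

# Copy the source of every dependency into ./vendor; the command prints the
# [source] replacement snippet on stdout, which .cargo/config.toml needs
mkdir -p .cargo
cargo vendor > .cargo/config.toml
# Dependency sources are now plain files in the repo, greppable like anything else
grep -rn some_function vendor/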