I wish this worked around some web-of-trust model instead of trusting a root CA (never really have, never really will…) but this seems like a really solid approach to a big issue with the library-based programming supply chain.
Sigstore is actually meant to work with different trust models! Naturally, some models are more mature than others, but part of the research angle I’m trying to bring into sigstore is around different approaches. I would love it if we got to a model in which sigstore can accommodate use cases that are friendlier to F/OSS communities.
Signing binaries and relying on a CA to establish “trust” is already a lost cause.
Did notarization significantly improve macOS security? To me it seems it was only a roadblock for OSS software.
On Windows there was malware signed with leaked developer certificates.
These are pretty different ecosystems: with end-user software signing, you’re relying on intermediate issuers (or, in macOS’s case, just Apple) to only issue certificates to “legitimate” entities. Sigstore is two or three levels below that: it’s intended to help engineers sign packages and distributions that larger programs and systems are made out of. Signing and verification are simpler and easier to automate in that context.
I am glad to see the sigstore (and related) projects reach GA (general availability). Initially (at least via cosign) they were focused on container signatures, but now it seems one can sign any kind of file, especially the executables that are uploaded to GitHub.
The only issue I have is that besides the artifact (say, the executable) and the signature, one now also needs to upload the temporary certificate… I would have hoped that (given the signatures are uploaded into rekor, dare I say a “blockchain”) one could validate the artifact without any additional files… Perhaps this use case is on the roadmap.
(Also perhaps now we can have something more secure than curl https://install.sh | bash…) :)
The problem with shipping a tree index, instead of shipping the certificate and signature as files, is that log responses are now on your critical path to execute software.
Even if one also ships the certificate and the signature files together with the download, I still think one needs to contact the log to check whether the certificate is valid. (I don’t think the issued certificate chains to a root CA that is present in the trust stores shipped with the various OSes.)
What I would have preferred, both for security and simplicity, is an alternative way of signing that bundles the artifact, the certificate, and the signature into a single file.
This way, the user only has to download a single file, and they can’t actually use that file unless they check the signature. (How many of us just download random files from the internet and never bother to actually check the hash or the signature?)
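For what it’s worth, the manual check being skipped is a one-liner. A minimal sketch with `sha256sum` (the file names are hypothetical, and the first line only creates a stand-in for the real download):

```shell
# Stand-in artifact so the snippet is self-contained; in reality this
# is the file you downloaded.
printf 'demo artifact' > download.tar
# Publisher side: publish the digest file next to the artifact.
sha256sum download.tar > download.tar.sha256
# User side: recompute the digest and compare; fails loudly on mismatch.
sha256sum -c download.tar.sha256   # prints "download.tar: OK"
```

Of course this only protects against corruption or a tampered mirror, not a compromised publisher, which is where signatures come in.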
Also, it simplifies the use-case I’ve hinted above: curl https://install.sh | cosign verify ... | bash or perhaps curl https://download.tar | cosign verify ... | tar -xz
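One could approximate this today with a small wrapper. Here is a sketch of a hypothetical `guarded_pipe` helper (not part of cosign): it buffers stdin, runs whatever verifier command you give it, and only emits the data if verification succeeds. `cosign verify-blob` is the real subcommand for signed blobs, but the exact flags depend on your signing setup, so the invocation below is illustrative only:

```shell
# guarded_pipe VERIFIER...: buffer stdin to a temp file, run the given
# verifier command on that file, and write the data to stdout only if
# the verifier exits successfully.
guarded_pipe() {
  tmp=$(mktemp) || return 1
  cat > "$tmp"
  if "$@" "$tmp" >&2; then
    cat "$tmp"
    rm -f "$tmp"
  else
    rm -f "$tmp"
    return 1
  fi
}

# Intended use (flags and file names are placeholders):
#   curl -s https://example.com/install.sh \
#     | guarded_pipe cosign verify-blob --certificate cert.pem --signature install.sig \
#     | bash
```

Since `bash` only ever sees bytes that passed verification, the usual “partial download executed anyway” failure mode of `curl | bash` goes away too.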
The root for cosign can still be downloaded ahead of time, which means non-Monitor verification is completely offline. (Note: probabilistic checking of log integrity is a good thing, but not required.) It shouldn’t be in the system trust store because it’s not used for TLS connections.
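The pinned-root idea can be shown with plain openssl. This is a toy demonstration (NOT Sigstore’s real root or certificates): create a local “root CA”, issue a leaf certificate from it, and verify the leaf against the pinned root file alone, with no system trust store and no network involved:

```shell
# Make a throwaway "root CA" (self-signed).
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=toy-root" \
  -keyout root.key -out root.pem -days 1
# Make a leaf key and certificate request.
openssl req -newkey rsa:2048 -nodes -subj "/CN=toy-leaf" \
  -keyout leaf.key -out leaf.csr
# Issue a short-lived leaf certificate signed by the toy root.
openssl x509 -req -in leaf.csr -CA root.pem -CAkey root.key \
  -CAcreateserial -out leaf.pem -days 1
# Verify offline against the pinned root file only.
openssl verify -CAfile root.pem leaf.pem   # prints "leaf.pem: OK"
```

The `-CAfile` flag is the whole point: trust comes from a file you fetched and pinned ahead of time, not from whatever CAs the OS happens to ship.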
Outputting the data only if signature is valid? Yes please.
Slightly OT, but the whole idea of piping a random internet blob/artifact into a local interpreter is inherently insecure, no matter what those blobs happen to be. :-)
Couldn’t agree more, but such is the fashion these days… Everyone just stack-overflows and wants copy-paste magic…
So in the end, there is no semantic difference between wget https://executable.elf ; chmod +x ./executable.elf ; ./executable.elf and curl https://install.sh | bash. :)
However, in both cases, adding an extra signature check wouldn’t hurt.