1. 49

  1. 30

    I am going to add a direct data point:

    From approximately 2007 to 2015 (it’s the weekend, I can’t be bothered to go look up the exact dates), I was the primary release manager of the Django web framework.

    I, being younger and idealistic, introduced as a standard practice the publication of detached PGP signatures for each release.

    Across the entire time I was release manager, I had concrete evidence that exactly one other entity (a Debian packager) was looking at those signatures. I know this because they caught a problem one time, and because they reached out to me over a non-PGP-verified channel to let me know about it.

    Django used to carefully publish a list of authorized release keys, and used to hold keysigning parties at conferences to make sure the release keys all had a strong web of trust, etc. etc. – we really, really tried to “do PGP right”. And it gained us absolutely nothing. It was an immense amount of effort to attach some metadata that was effectively unused and unneeded (since Debian was only checking it because it was there, not because they actually required it as a prerequisite for accepting Django for packaging).

    Eventually we transitioned away from the authorized keys list and it just became “whoever’s the releaser for this one, sign it”. This is supposed to be a world-ending violation of PGP norms. The world kept spinning.

    Personally, because there are people who will be annoying if I don’t, on my own projects I still sign the release tags in the repositories and upload the signatures to PyPI with twine upload -s. But I still do not see what I am gaining, because signing on PyPI is like the old “you might be talking to Satan, but at least you know it’s private” joke about TLS/SSL. The package might have been uploaded by Satan, but at least you know nobody tampered with it en route!
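
    For the curious, a minimal sketch of what checking one of those detached signatures looks like downstream – roughly what that lone Debian packager was doing. The file names are hypothetical, and gpg must already have the release key in its keyring:

    ```python
    import subprocess

    def verify_detached_signature(artifact: str, signature: str) -> bool:
        """Return True if `signature` is a valid detached signature over `artifact`."""
        # gpg exits 0 on a good signature – even one made by an untrusted key,
        # which is exactly the "Satan might have uploaded it" caveat above.
        result = subprocess.run(["gpg", "--verify", signature, artifact])
        return result.returncode == 0

    if __name__ == "__main__":
        ok = verify_detached_signature("Django-4.2.tar.gz", "Django-4.2.tar.gz.asc")
        print("signature OK" if ok else "signature BAD or key missing")
    ```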

    1. 13

      PGP is an insecure and outdated ecosystem that hasn’t reflected cryptographic best practices in decades.

      i feel like this is needlessly inflammatory.

      it’s absolutely true that tools like GnuPG have insecure defaults and painful UX, but OpenPGP-the-ecosystem is alive and well.

      IMO this is mostly thanks to the Sequoia PGP folks, but that work has been bearing fruit for years at this point.

      1. 29

        It’s inflammatory, but it’s also not remotely controversial in cryptographic circles:

        The OpenPGP ecosystem is absolutely not alive and well, if alive and well includes the things that the ecosystem has historically maintained (synchronizing keyservers, a web of trust, etc.). The existence of a memory safe implementation is good, but does very little to address the basic format and cryptographic problems that are baked into OpenPGP’s standards. Fixing those requires standards changes, at which point we’re better off with something without the baggage.

        1. 4

          when i say “alive and well” i mean that the core & associated technologies are being actively developed and there is a healthy ecosystem with strong support for existing standards and tools.

          SKS has (effectively) been completely deprecated by the community in favor of https://keys.openpgp.org; i don’t use web of trust at all and have no strong opinions on it.

          competing technologies like age are convenient but i have little confidence that they’ll ever see the same degree of support that OpenPGP has in software (e.g. git) or hardware (e.g. Yubikey, Nitrokey).

          EDIT: i feel like it’d be a little too long-winded to respond to all of those blog posts in a single comment, but just to pick on Matt Green: his criticisms of PGP are a little silly to apply here because he seems to be speaking mostly from the perspective of using it to secure communication (e.g. email and chat).

          perfect forward secrecy doesn’t really make sense in the context of OpenPGP when you have other tools for communication that implement cryptographic protocols designed for that purpose.

          1. 13

            [Matt Green’s] criticisms of PGP are a little silly to apply here because he seems to be speaking mostly from the perspective of using it to secure communication (e.g. email and chat).

            To a layman like me, PGP’s primary use case seems to be secure communication (specifically email). So PGP isn’t a good tool for this use case then?

            1. 6

              it depends entirely on what your threat model is; in its current state i wouldn’t recommend PGP for communication to a layperson, but for a savvy individual with specific threat models & use-cases it is still a best-in-class tool.

              for the average software developer/maintainer, however, PGP is probably most useful for authentication & signing (i.e. cryptographic identity) + as a primitive that other tools (with better UIs) can use for encryption operations.

              for authentication: i loaded up my PGP keys onto several Yubikeys and use them as my SSH credentials. between that & Secure Enclave on my mobile devices, i have almost completely done away with private keys on durable storage.

              for signing: one can use PGP to verify git commits (this can also be done with an SSH key now, though not all forges support it).

              for encryption: PGP sucks to use directly but is fantastic in conjunction with tools like mozilla/sops & pass (or gopass) for sharing development secrets in a project without relying on 3rd-party infrastructure.
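
              To make that last point concrete, here is a minimal sketch (the file name and secret name are hypothetical) of pulling a shared development secret out of a sops-encrypted file; sops decrypts with whatever recipient key is available locally, e.g. a PGP key behind gpg-agent:

              ```python
              import json
              import subprocess

              def load_secrets(path: str = "secrets.enc.json") -> dict:
                  # `sops -d` decrypts to stdout using a locally available
                  # recipient key (PGP via gpg-agent, age, or a cloud KMS).
                  completed = subprocess.run(
                      ["sops", "-d", path], capture_output=True, text=True, check=True
                  )
                  return json.loads(completed.stdout)

              print(load_secrets()["database_url"])  # hypothetical secret name
              ```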

            2. 2

              Git needs signatures, which AFAIK age doesn’t do (you probably want minisign?). Git supports SSH signatures already tho.

            3. 4

              The controversial part is that all of these people you linked imply that we should be vulnerable to a few centralized third parties.

              1. 16

                The criticisms of PGP are not rooted in its lack of centralized ownership. They’re rooted in what a mess it is, both in terms of the user experience and the actual cryptography.

                1. 11

                  I get the impression that the political climate (for want of a better term) has changed in the security community. It used to be heavily invested in privacy, decentralization, and open platforms, and PGP’s design reflects those values.

                  The new established wisdom is that centralization is good, open platforms are bad, and multinational corporations are seen as the most competent guardians of consumer privacy.

                  The arguments against PGP (including the author’s) all read as a disagreement about values, in the guise of a discussion about technical competence.

                  1. 20

                    I disagree. I think what has changed is that usability is now seen as a core part of security. Like the author said:

                    Security tools, especially cryptographic ones, are only as good as their least-informed and most distracted user.

                    1. 9

                      There are ways to use PGP that are kinda reasonably secure, and ways to use PGP that are interoperable.

                      Unfortunately, the ways that are secure are not interoperable, and the ways that are interoperable are not secure. Plenty of critiques, including the ones already linked, cover this in detail – if you want a more-secure PGP setup, for example, you either do it in an interoperable way, which means letting other parties strip off the extra-security bits, or do it in a way that prevents stripping them off but is then incompatible with a likely large number of PGP users’ configurations.

                      1. 5

                        TLS once faced similar issues, but an effort was made to fix it, gradually breaking compatibility with insecure setups, despite millions of users worldwide being on old, outdated operating systems and browsers, with no ability to update or even any desire to do so.

                        PGP’s installed user base is orders of magnitude smaller, technically savvy, and/or heavily invested in security.

                        1. 5

                          PGP’s installed user base is orders of magnitude smaller, technically savvy, and/or heavily invested in security.

                          Unfortunately, I think the developers of PGP implementations and tooling are much more invested in defending the idea that PGP as-is has no security problems that would need fixing by ripping out compatibility with ancient crypto algorithms. And even doing that doesn’t really fix all the potential problems with PGP’s design; as a lot of people have said, the better-designed approach is to support exactly one way to do things, and if it gets broken, increment the protocol version and switch to a different single way of doing things.

                          1. 4

                            I’m not so sure. TLS and Signal have the advantage that they deal with ephemeral data. Software signatures have a far longer lifetime (and in fact, most of the author’s criticisms are related to the signatures being old). I think it’s very easy to get into a situation where you’re supporting multiple protocol versions at the same time (as, for example, PASETO does), effectively ending up in the same place.
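
                            To make the contrast concrete, a sketch of the “one way per version” style (loosely inspired by PASETO’s version prefixes, not its actual format): the version string selects exactly one algorithm, so there is no in-band negotiation to abuse, and a break means adding a new version and retiring the old entry:

                            ```python
                            # Stub verifiers: exactly one fixed algorithm per version.
                            def _verify_v1(payload: str, key: bytes) -> bool:
                                ...  # e.g. RSA-PSS only

                            def _verify_v2(payload: str, key: bytes) -> bool:
                                ...  # e.g. Ed25519 only

                            VERIFIERS = {"v1": _verify_v1, "v2": _verify_v2}

                            def verify(token: str, key: bytes) -> bool:
                                version, payload = token.split(".", 1)
                                if version not in VERIFIERS:
                                    raise ValueError(f"unsupported version: {version}")
                                return VERIFIERS[version](payload, key)
                            ```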

              2. 5

                these results present a strong case against attempting to “rehabilitate” PGP signatures for PyPI, or any other packaging ecosystem

                Is there already an alternative in the works? I don’t know much about this space. I hear about this a lot: https://www.sigstore.dev/

                1. 25

                  The quiet part of Sigstore that’s buried in the docs:

                  Currently, you can authenticate with Google, GitHub, or Microsoft, which will associate your identity with a short-lived signing key.

                  Meaning, you cannot participate in this ecosystem as a signer without an account on one of these privacy-invasive, proprietary services. It wouldn’t surprise me if you can never self-host either, given that they seem to intend to maintain a list of OIDC IdP inclusion criteria that new IdPs will have to meet before being manually included.

                  1. 6

                    That list is out of date: BuildKite is also supported as an IdP, and (IIRC) GitLab is either supported or close to being supported.

                    The goal is to support high-quality IdPs that meet Sigstore’s claim and integrity requirements, which necessarily excludes random one-off IdPs run from someone’s home server. That doesn’t make them inherently privacy-compromising, however: there’s no reason why a community-hosted IdP couldn’t meet these requirements, and several of the IdPs currently supported (GitHub and BuildKite at the minimum) aren’t tied to any human identity in the first place.

                    1. 4

                      Right, OK, I guess this makes sense - if the entire security of the system rests on the certificate issued by the OIDC authentication process, you want to make sure that authentication process is good. It still makes me uncomfortable though, for a lot of reasons.

                      Also, doesn’t that mean that anyone with legal authority over the signing IdP can compel the issuance of valid signatures on malicious software? My understanding of Sigstore is pretty hazy (the OIDC stuff I’m only aware of because I was in a talk last year about it) so I could simply be misunderstanding, but that seems like a pretty bad threat model, particularly in a world where governments are increasingly moving to compel software developers to insert backdoors. My understanding is that Sigstore signatures are publicly auditable using append-only ledgers a la Certificate Transparency, but this is still… unideal. (Maybe that’s unfair of me though because the current status quo of individual developers carrying signing keys is also subject to government compulsion, unless developers are willing to go to jail, and isn’t even publicly discoverable?)

                      1. 3

                        Two points:

                        1. Sigstore is a public signing system: that means that anybody who can obtain an identity can sign for anything that they have the input for. That means that malicious people can make legitimate looking signatures (for some identity they control) for malicious things via legitimate IdPs; the scheme doesn’t attempt to enforce that only “good” people can sign. In this sense, Sigstore is very similar to Web PKI, and in particular very similar to Let’s Encrypt. The notion of trusted parties comes at the index or endpoint layers, via TUF, a TOFU setup, or something else[^1].

                        2. Transparency logs are indeed Sigstore’s main source of auditability, and the primary line of defense against CA compromise. I think there are a lot of legitimate reasons to think this is non-ideal – I think it’s non-ideal! But it’s also a pragmatic decision: CT has been shown to work for the Web PKI, and its distributed auditability has nice “knock-on” qualities (e.g. Firefox not needing to do privacy-compromising CT lookups because other large players do so).

                        Ultimately, I think it’s correct to point out that Sigstore’s adversarial model doesn’t include developers being forced into inserting backdoors into their own software. I think that’s correct to point out, because no codesigning scheme that I’m aware of can address that. What Sigstore does do is eliminate those adversaries’ stealth factor: if an identity is known to be compromised, the community is able to immediately see everything it signed for.

                        [^1]: If that sounds vague, it’s because it unfortunately is – it’s an area of active development within Sigstore, and in particular it’s something I’m working on.

                        1. 1

                          In this sense, Sigstore is very similar to Web PKI, and in particular very similar to Let’s Encrypt. The notion of trusted parties comes at the index or endpoint layers, via TUF, a TOFU setup, or something else[^1].

                          Sure, this makes sense to me. It’s also similar to the status quo; given an arbitrary PGP key signing some (possibly tampered-with) software, you don’t know whether that’s the PGP key of the legitimate author of that software or not.

                          Ultimately, I think it’s correct to point out that Sigstore’s adversarial model doesn’t include developers being forced into inserting backdoors into their own software. I think that’s correct to point out, because no codesigning scheme that I’m aware of can address that. What Sigstore does do is eliminate those adversaries’ stealth factor: if an identity is known to be compromised, the community is able to immediately see everything it signed for.

                          I’m uncomfortable with Sigstore’s model but I’m having trouble thinking through exactly why, so I apologize for being unclear/making silly points while I essentially think out loud on the internet - this apology goes for this comment and my grandparent comment, lol. :P

                          But I think actually what I was trying to get at in the grandparent comment is that this is increasing the number of entities that have to be trusted not to be compromised (whether by compulsion or straight up hacking) - you’re adding the IdP but not subtracting the individual developer. However, it is indeed decreasing the amount of trust you have to place in these entities, because of the transparency logs. I’m not sure whether I personally agree with this although I think I’m starting to warm up to it; essentially you’re trading off needing to trust this extra entity for the ability to theoretically catch any entity making bad signatures and serving them to a subset of users, plus the fact that this system is actually usable and PGP isn’t. (Transparency logs aren’t a major advantage unless the attack is targeted because even under PGP, if the malicious signature is visible globally then there’s your answer - the signing key is compromised.)

                  2. 3

                    Sigstore is the biggest one that I’m aware of. I’ve been involved with its Python client development, with use by PyPI being the eventual goal.
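
                    For a sense of the workflow, a rough sketch of the client’s command-line interface as of this writing (subcommand and flag names may differ between releases; the identity and file names are hypothetical):

                    ```python
                    import subprocess

                    # Signing opens a browser for the OIDC flow, obtains a
                    # short-lived certificate from Fulcio, and records the
                    # signature in the Rekor transparency log.
                    subprocess.run(
                        ["python", "-m", "sigstore", "sign", "dist/mypkg-1.0.tar.gz"],
                        check=True,
                    )

                    # Verification checks the signature and that the certificate
                    # was issued for the expected identity by the expected IdP.
                    subprocess.run(
                        [
                            "python", "-m", "sigstore", "verify", "identity",
                            "--cert-identity", "maintainer@example.com",
                            "--cert-oidc-issuer", "https://github.com/login/oauth",
                            "dist/mypkg-1.0.tar.gz",
                        ],
                        check=True,
                    )
                    ```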

                    1. 3

                      Thank you for that link. Will there be any alternative for people who do not want to be exposed to the supply-chain attacks this doesn’t prevent, and who do not want to be vulnerable to third parties?

                  3. 3

                    Thank you for this analysis. To fix the underlying problem:

                    • Verify reproducible builds on the PyPI servers and in pip clients (this includes the source archive built from wherever people actually change the source, e.g. a git repo), and optionally in each package’s CI.
                    • Sign commits and reviews (either in the same repo or after merge, with something like https://github.com/crev-dev/cargo-crev ). Put the public keys in the same git repo, so key changes are protected by signatures from the set of allowed keys ( https://gitlab.com/source-security/git-verify proposes a way to do this ). Then verify this at the start of verifying reproducible builds. This is also where insecure use needs to be rejected.
                    • Keep a history of package name transfers ( https://peps.python.org/pep-0541/ ), removed packages, and other compromises, with signatures by the people who decide these things for that namespace. Protect this history in the same way as package sources. Only allow breaking a package’s chained history of keys through this process or by using a different package name.
                    • Secure all of this with an observed, global, append-only log (binary transparency), and make pip clients verify it; a minimal sketch of the hash-chain idea appears at the end of this comment.

                    This has the advantages that

                    • the security properties can be offline verified after the fact.
                    • there is a way to detect when the claim that was signed is wrong.
                    • this isn’t vulnerable to third parties.
                    • everything except the namespace is decentralised.
                    • explicit Web of Trust is not required.
                    • keys and their changes are transported inline and thus there is no need for keyservers.
                    • key rotation is supported without needing manual review at consumers.
                    • it allows signature protocols entirely different from OpenPGP, such as SSH signatures, to coexist in the same source repository.

                    Conversely one should

                    • not support OpenID Connect, but instead use reproducible builds to verify the content.
                    • not enforce transport 2FA, but instead verify that multiple reviews happened.
                    • not trust a central server to never be compromised
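
                    As promised above, a minimal sketch of the append-only-log idea (illustrative only, not any particular project’s design): each entry commits to the hash of the previous one, so any retroactive edit changes every later hash and is detectable by anyone who remembered an earlier log head.

                    ```python
                    import hashlib

                    def entry_hash(prev_hash: str, payload: bytes) -> str:
                        return hashlib.sha256(prev_hash.encode() + payload).hexdigest()

                    class AppendOnlyLog:
                        def __init__(self) -> None:
                            self.entries: list[tuple[str, bytes]] = []  # (hash, payload)

                        def append(self, payload: bytes) -> str:
                            prev = self.entries[-1][0] if self.entries else "genesis"
                            digest = entry_hash(prev, payload)
                            self.entries.append((digest, payload))
                            return digest  # observers remember this "log head"

                        def verify(self) -> bool:
                            prev = "genesis"
                            for digest, payload in self.entries:
                                if digest != entry_hash(prev, payload):
                                    return False  # history was rewritten
                                prev = digest
                            return True

                    log = AppendOnlyLog()
                    log.append(b"mypkg 1.0 sha256=...")
                    log.append(b"mypkg 1.1 sha256=...")
                    assert log.verify()
                    ```
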
                    1. 9

                      Sign commits and reviews

                      My understanding is that in git, signing every commit is an anti-pattern/not useful – which even Linus seems to have said at one point – and that instead one should (if signing at all) sign tags.
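
                      As an illustration, a small sketch of the tag-signing practice (tag name and message are hypothetical; assumes a signing key is already configured for git):

                      ```python
                      import subprocess

                      # Sign a single annotated release tag instead of every commit.
                      subprocess.run(
                          ["git", "tag", "-s", "v1.0.0", "-m", "Release 1.0.0"],
                          check=True,
                      )

                      # Anyone with the tagger's public key can later check it.
                      subprocess.run(["git", "verify-tag", "v1.0.0"], check=True)
                      ```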

                    2. 3

                      or because the signature was present but had since expired

                      The fact that PGP considers a signature produced 2 years ago invalid now because the key expired last month is just one more example of how broken its design is.

                      1. 2

                        Wow, that was a swift response after the article describing the issues.

                        1. 2

                          Might be worth folding this into https://lobste.rs/s/av31pn/pgp_signatures_on_pypi_worse_than_useless

                          ping @pushcx

                          Edit: thanks!

                        2. 1

                          Since a large part of your critique is focused on signatures made with outdated keys, it occurs to me that this implies a secure use of public signatures would be to remember all the signatures you’ve made and periodically refresh them, even if nothing about the software has changed.

                          I’m not sure that substituting minisign/SSH/whatever the preferred signature tool du jour is would make a difference in this regard; this is a shortcoming of the build infrastructure.

                          1. 2

                            I understand this is part of the reasoning behind Rekor within Sigstore – a compromised key (due to old algorithms or leaks) shouldn’t be capable of creating unwanted signatures without being easily detectable.

                            Admittedly, Sigstore’s Fulcio only issuing certificates valid for 10 minutes means meaningful key compromise is far less likely than with long-lived PGP/SSH/minisign keys (you’d hopefully not request a certificate with an algorithm weak enough to be crackable within 10 minutes anyway ^^;).

                          2. 1

                            @yossarian FWIW the first link goes to 7.7 “what’s DSA”, I’d assume it was meant to go to 8.2 “how large should my key be?”.

                            1. 4

                              That was intentional! The end of the DSA section claims that DSA is “well-regarded,” which it absolutely is not.

                              (The guidance around RSA 2048 is subject to some debate – I lean on the side of thinking that RSA 2048 is insufficient for signatures that are expected to last beyond 2030, but others believe that RSA 2048’s security margins are sufficient until RSA itself is more generally broken.)
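
                              For context on those margins: the usual yardstick is the heuristic cost of the general number field sieve, and by the standard NIST SP 800-57 equivalences RSA-2048 sits near the 112-bit security level versus 128 bits for RSA-3072:

                              ```latex
                              % Heuristic GNFS cost of factoring an RSA modulus N:
                              \[
                                L_N\!\left[\tfrac{1}{3},\, c\right]
                                  = \exp\!\left( (c + o(1))\,(\ln N)^{1/3}(\ln\ln N)^{2/3} \right),
                                \qquad c = \left(\tfrac{64}{9}\right)^{1/3} \approx 1.923
                              \]
                              % Plugging in: RSA-2048 ~ 112-bit security, RSA-3072 ~ 128-bit
                              % (the NIST SP 800-57 equivalences).
                              ```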