1. 22

  2. 11

    One example of this is that Google unilaterally decided to violate the letter of the HTTPS RFC (RFC 2818) rather than update it, by requiring that X.509 certificates carry an X.509v3 Subject Alternative Name extension. Reports against this breakage are closed as WONTFIX: https://bugs.chromium.org/p/chromium/issues/detail?id=700595

    Relevant section from RFC 2818:

    If a subjectAltName extension of type dNSName is present, that MUST
    be used as the identity. Otherwise, the (most specific) Common Name
    field in the Subject field of the certificate MUST be used. Although
    the use of the Common Name is existing practice, it is deprecated and
    Certification Authorities are encouraged to use the dNSName instead.
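
    As a point of reference, other modern verifiers have made the same call: Go’s crypto/x509, for instance, also refuses to fall back to the Common Name (since Go 1.15). A minimal sketch, using throwaway self-signed certs purely for illustration, showing that a SAN-bearing certificate verifies while a CN-only one is rejected:

    ```go
    package main

    import (
        "crypto/ecdsa"
        "crypto/elliptic"
        "crypto/rand"
        "crypto/x509"
        "crypto/x509/pkix"
        "fmt"
        "math/big"
        "time"
    )

    // selfSigned builds a throwaway self-signed cert; pass nil dnsNames to
    // simulate a legacy CN-only certificate with no subjectAltName extension.
    func selfSigned(cn string, dnsNames []string) *x509.Certificate {
        key, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{CommonName: cn},
            DNSNames:     dnsNames, // becomes the subjectAltName extension
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(time.Hour),
        }
        der, _ := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        cert, _ := x509.ParseCertificate(der)
        return cert
    }

    func main() {
        withSAN := selfSigned("example.com", []string{"example.com"})
        cnOnly := selfSigned("example.com", nil)

        // SAN present: the hostname matches.
        fmt.Println("SAN cert:    ", withSAN.VerifyHostname("example.com")) // <nil>

        // CN only: no fallback to the Common Name, so verification fails.
        fmt.Println("CN-only cert:", cnOnly.VerifyHostname("example.com")) // error
    }
    ```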
    
    1. 1

      This was one bandage that desperately needed ripping off, though.

      And in general, most PKI decisions seem to be made outside of RFCs these days: left to their own devices, CAs will do nothing; a combination of CA/Browser Forum baselines and aggressive pushing from the browsers seems to be the only way to drag them kicking and screaming into the future.

      1. 1

        This is an RFC about HTTPS (RFC 2818), not really PKI overall. It wouldn’t have been too hard to update it with the changes proposed.

        Also, for what it’s worth, the Chromium developers had to revert their change after they released it due to mass outrage.

        1. 1

          The correct way to rip the band-aid off is to update the RFC. It’s not too big an ask, really.

          1. 1

            That’s like saying “the right way to push HTML5 development is through the W3C”. That ship has sailed. Web PKI implementation standards are driven through the CA/Browser Forum.

            1. 1

              Writing an RFC draft is relatively easy and anyone can do it. See for example this RFC draft on IPv10 written by… anyone… https://tools.ietf.org/html/draft-omar-ipv10-06

              This is work that MUST be done by the Chromium developers anyway to announce their proposed change – otherwise they will break applications without notifying anyone.

              From there, getting the RFC approved would likely be quick, since the modification to RFC 2818 is as trivial as removing Subject DN CN validation. After all, your position is that everyone already agrees this is the right way forward, so no discussion is needed before the Chromium developers decide unilaterally.

              For what it’s worth, Firefox implemented a similar change, but in a less breaking way (existing certs were still honored). Soon after shipping their change, the Chromium developers discovered that their approach made EVERYONE mad, reverted it, and implemented Firefox’s approach.

              Maybe discussion should have taken place after all?

      2. 9

        The article is quite right, but I’d add a fourth world to it: IoT. TLS is increasingly being used in situations that have very IoT-specific needs and wouldn’t fit into Web scenarios at all. It’s not the same as just “non-Web services”, because IoT and embedded systems have a whole set of requirements that largely overlaps within their domain but doesn’t generalize to anything that’s “non-Web”.

        1. 5

          I’d add “automated intra-cluster identity” for setups where you have some environment, perhaps Kubernetes, perhaps not, and all the clients and servers are working solely from that one CA.

          For the non-web protocols, I generally think it’s important to distinguish between “client-server” and “server-server federated”. For IMAP, POP3, SMTP submission, etc., you can pretty much follow along with the web browser model. You just have to handle a different set of clients of varying ages, and if you’re an ISP you’re more likely to have to support outdated clients with broken TLS.

          It’s the server-server federated model where things get hinky. Traditionally you could use anonymous TLS without even a server cert. With DANE-EE anchoring on the public key instead of the cert, you can probably make that work with many peers today. The biggest problem is that the email specs require you to fall back to cleartext if TLS fails, so absent DANE or MTA-STS, any effort to improve ciphersuites or remove SSLv3 or other such work will result in a net decrease in security. The best thing you can do here is set up DANE support and configure much more modern and rational minimum bars for any peer which declares that it does support TLS: once you remove the cleartext escape hatch you don’t have to worry about the counter-intuitive fallback.
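
          To make the DANE-EE idea concrete, here is a rough Go sketch of a verifier that anchors on the peer’s public key (SPKI) digest instead of on any CA. The TLSA lookup itself is assumed to happen elsewhere (Go’s standard library doesn’t speak TLSA), and the all-zero pin below is only a placeholder:

          ```go
          package main

          import (
              "bytes"
              "crypto/sha256"
              "crypto/tls"
              "crypto/x509"
              "fmt"
          )

          // daneEEConfig returns a tls.Config in the spirit of a DANE-EE(3) SPKI(1)
          // TLSA record: the peer is accepted iff the SHA-256 of its
          // SubjectPublicKeyInfo matches the digest pinned from DNS. WebPKI chain
          // building and hostname checks are deliberately skipped; the TLSA record
          // is the trust anchor.
          func daneEEConfig(pinnedSPKI [sha256.Size]byte) *tls.Config {
              return &tls.Config{
                  InsecureSkipVerify: true, // disables WebPKI checks; verified below instead
                  VerifyPeerCertificate: func(rawCerts [][]byte, _ [][]*x509.Certificate) error {
                      leaf, err := x509.ParseCertificate(rawCerts[0])
                      if err != nil {
                          return err
                      }
                      got := sha256.Sum256(leaf.RawSubjectPublicKeyInfo)
                      if !bytes.Equal(got[:], pinnedSPKI[:]) {
                          return fmt.Errorf("SPKI digest %x does not match pinned TLSA value", got)
                      }
                      return nil
                  },
              }
          }

          func main() {
              var pin [sha256.Size]byte // placeholder; a real pin comes from the "3 1 1" TLSA RRset
              cfg := daneEEConfig(pin)
              cfg.MinVersion = tls.VersionTLS12 // raise the floor once the cleartext fallback is gone
              fmt.Println("pinned verifier installed:", cfg.VerifyPeerCertificate != nil)
          }
          ```

          The same config is also where you would enforce those more rational minimum bars (MinVersion, ciphersuites) once the cleartext escape hatch is closed.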

          1. 1

            What’s the difference between that and the third world described in the link?

            1. 1

              I don’t recall now what my reasoning was at the time. There are useful distinctions to be made between the various scenarios in the author’s third world but the author’s broad overview stands.

          2. 2

            I’m currently deep within the third world described in the article: internal client/server TLS. Since everything already sits on a private network, it’s unreasonable to purchase a unique certificate for every server host on the network.

            My two best options seem to be:

            1. Dynamic self-signed certificates created at server startup, with each certificate published to a centralized & trusted location that clients can read from.
            2. Distributing a single certificate to the entire server pool, signed by an implicitly trusted internal CA.

            1. 4

              The standard approach seems to be an internal CA with some sort of automated certificate-issuing mechanism (and often trusting only the internal CA, not any public CAs). This does require the automated CA machinery, but I believe there are open source projects for that. If that were too much work, I would be inclined to treat the situation like basic SSH, with a self-signed certificate created on startup somehow (either centrally and then distributed, or locally and then published).

              (SSH can also use the ‘internal CA’ route, of course, with server host keys being trusted because they’re signed.)
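
              To illustrate the internal-CA route, a minimal Go sketch (hostnames and lifetimes are invented for the example): an internal root signs a per-host leaf, and clients verify against a pool containing only that root, ignoring the public CAs entirely:

              ```go
              package main

              import (
                  "crypto/ecdsa"
                  "crypto/elliptic"
                  "crypto/rand"
                  "crypto/x509"
                  "crypto/x509/pkix"
                  "fmt"
                  "math/big"
                  "time"
              )

              func main() {
                  // 1. One-off internal CA; in practice the key lives in your issuing service.
                  caKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
                  caTmpl := &x509.Certificate{
                      SerialNumber:          big.NewInt(1),
                      Subject:               pkix.Name{CommonName: "Example Internal CA"},
                      NotBefore:             time.Now(),
                      NotAfter:              time.Now().AddDate(5, 0, 0),
                      IsCA:                  true,
                      KeyUsage:              x509.KeyUsageCertSign,
                      BasicConstraintsValid: true,
                  }
                  caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
                  caCert, _ := x509.ParseCertificate(caDER)

                  // 2. Per-host leaf cert, issued automatically at deploy/startup time.
                  hostKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
                  leafTmpl := &x509.Certificate{
                      SerialNumber: big.NewInt(2),
                      Subject:      pkix.Name{CommonName: "db01.internal"},
                      DNSNames:     []string{"db01.internal"},
                      NotBefore:    time.Now(),
                      NotAfter:     time.Now().AddDate(0, 0, 30), // short-lived; rotate often
                      KeyUsage:     x509.KeyUsageDigitalSignature,
                      ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
                  }
                  leafDER, _ := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &hostKey.PublicKey, caKey)
                  leaf, _ := x509.ParseCertificate(leafDER)

                  // 3. Clients trust only the internal CA, not the public roots.
                  roots := x509.NewCertPool()
                  roots.AddCert(caCert)
                  _, err := leaf.Verify(x509.VerifyOptions{Roots: roots, DNSName: "db01.internal"})
                  fmt.Println("verified against internal CA only:", err) // <nil>
              }
              ```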

              1. 1

                We do have an internal CA, so I will probably go that route to get maximum coverage at sites we host. Unfortunately, clients can choose to host themselves and therefore will not trust our internal CA, leaving them to their own devices.

                This service is very core to the company, so failing to form a secure connection means failing to ingest important data. I may end up having to go with a hybrid approach in the end.

                1. 1

                  At least for our product at work (cloud-first with an on-prem option), the TLS scheme used in “the wild” sometimes meshes badly with the internal CAs used by on-prem customers. The “stumbling block” is often browsers like Chrome, which can’t easily be convinced to trust an internal CA.

                2. 2

                  You want option 3, like @cks mentioned: each service gets its own cert signed by your internal CA[1]. You would do the same with SSH[2], except obviously it’s per node for SSH instead of per service. HashiCorp Vault[0] will help manage all of this for you.

                  0: https://www.vaultproject.io

                  1: https://www.vaultproject.io/docs/secrets/pki/

                  2: https://www.vaultproject.io/docs/secrets/ssh/signed-ssh-certificates/
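
                  For a flavor of what that looks like in code, here’s a small sketch against Vault’s PKI secrets engine using the official Go client; the pki/ mount, the internal-servers role, and the hostname are assumptions for illustration:

                  ```go
                  package main

                  import (
                      "fmt"
                      "log"

                      vault "github.com/hashicorp/vault/api"
                  )

                  func main() {
                      // Reads VAULT_ADDR and VAULT_TOKEN from the environment.
                      client, err := vault.NewClient(vault.DefaultConfig())
                      if err != nil {
                          log.Fatal(err)
                      }

                      // Ask the PKI secrets engine (assumed mounted at "pki/" with a
                      // role named "internal-servers") for a short-lived per-service cert.
                      secret, err := client.Logical().Write("pki/issue/internal-servers",
                          map[string]interface{}{
                              "common_name": "billing.service.internal",
                              "ttl":         "72h",
                          })
                      if err != nil {
                          log.Fatal(err)
                      }

                      // The response carries the PEM leaf plus private_key and ca_chain.
                      fmt.Println(secret.Data["certificate"])
                  }
                  ```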