1. 12

  2. 7

    tl;dr:

    Google is “strong-arming” Certificate Authorities (by threat of blacklisting) into complying with a “Certificate Transparency” program that Google has pushed through the IETF (Internet Engineering Task Force).

    The “Certificate Transparency” (hereafter “CT”) program requires that all issued certificates be logged with 2 separate CT servers whose logs are publicly auditable by anyone. The premise is that we can’t prevent root CAs from being compromised, but we can do the next best thing, which is to prevent errant certificates from working at all in the popular browsers (starting with Chrome).

    If a server uses a certificate that doesn’t appear in the 2 CT logs, Chrome will show a bad-certificate warning (the same way it shows a warning for expired or self-signed certificates today).
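
    For a concrete sense of what “appearing in the CT logs” looks like in practice: compliant certificates typically carry embedded Signed Certificate Timestamps (SCTs) as an X.509 extension (SCTs can also be delivered via a TLS extension or stapled OCSP, so their absence doesn’t by itself prove non-compliance). Here’s a minimal sketch in Python, assuming the third-party cryptography package and using example.com purely as a stand-in host:

    ```python
    # Sketch: list the SCTs embedded in a server's leaf certificate.
    # Assumes a reasonably recent "cryptography" package that can parse
    # the SCT extension; the host name is just an example.
    import ssl
    from cryptography import x509
    from cryptography.x509.oid import ExtensionOID

    def embedded_scts(host, port=443):
        pem = ssl.get_server_certificate((host, port))
        cert = x509.load_pem_x509_certificate(pem.encode())
        try:
            ext = cert.extensions.get_extension_for_oid(
                ExtensionOID.PRECERT_SIGNED_CERTIFICATE_TIMESTAMPS)
        except x509.ExtensionNotFound:
            return []  # SCTs may still arrive via the TLS extension or OCSP
        return list(ext.value)

    if __name__ == "__main__":
        for sct in embedded_scts("example.com"):
            # Each SCT identifies the log that accepted the (pre)certificate
            # and records when it did so.
            print(sct.log_id.hex(), sct.timestamp)
    ```

    Chrome’s policy is, roughly, about which approved logs those SCTs come from and how they’re delivered; the point here is just that the evidence sits right in the certificate for anyone to check.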

    The other 3 major browser makers (Mozilla, Apple and Microsoft) have yet to comment on whether they will follow suit in using the CT logs to blacklist errant certificates.

    Here are Slate’s closing remarks:

    There’s really no chance that consumers will ever know enough about these obscure systems to push Google one way or another. So for now, there’s little to stop the company from redesigning the internet’s critical infrastructure however it wants.

    1. 3

      I think part of the lack of comment from the other browser makers is that they like the changes but figure they don’t have the market share and/or clout to push changes like this through. So if Google succeeds, hooray, we’ll follow their lead. If Google fails, then it’s no sweat off our backs; only the Chrome team has egg on its face.

      Personally, I’d like this initiative to succeed. The biggest concern with TLS was always that any CA could issue a certificate for any domain and no one could double-check that they were behaving. Now that Chrome has a hugely dominant position (around 60% globally, I think), they are forcing CAs to behave.

      1. 1

        I got scared by the title, as huge companies “improving security” often means screwing over hobbyists (e.g. Secure Boot, locked-down phone bootloaders). I was relieved to see that it’s just forcing CAs to behave better.

      2. 6

        Since I was quoted in this article, here’s a followup on the context of one of those quotes:

        Reading your article and its reference to complexity, I now remember the thread I was on when I said “No sane engineer would design it that way”.

        The point I was making is that complexity equals (roughly speaking) centralization.

        The more complicated you make a system to run, the fewer people can do it, and the more people are forced to rely on a trusted third party.

        How did email, a fundamentally decentralized system, become so centralized, with most people using Gmail? The answer is that company, university, and home sysadmins got sick and tired of meeting the ever-increasing and ever-changing requirements for running an email server.

        The same thing will happen with Certificate Transparency. The number of CAs will go down. More power will go to Google. Google will be viewed as the “protector of the Internet”, and, as with all “protectors”, it will end up becoming the #1 threat to your security.

        1. 1

          The more complicated you make a system to run, the fewer people can do it, and the more people are forced to rely on a trusted third party.

          That’s actually a good point that I’m not sure I’ve thought about in the context of QA or security. The reason might be that you always need a trusted third party to evaluate a solution like CA software no matter how simple it is, and that high-assurance security mandates simplifying systems anyway (“design for verification”). The screwups can be tiny, innocent-looking one-liners. So I think the point is moot as an argument against third parties for security-critical code: you always want them at least on the review side, and that applies doubly for esoteric stuff. The point does apply to correctness, or to reliability in the presence of common faults, since complexity always hurts verification unless the complex aspects are already verified (i.e. you’re relying on verified tools/libraries). That’s especially true when the problems you want to avoid can be proven absent in an automated way, so long as specific techniques are used in simple apps. SPARK and Rust come to mind.

          Additionally, the overhead and problems of P2P networks cause people to rely on a trusted third party anyway. It works well enough for them in the vast majority of cases, and that’s all most people want. So there’s always that biasing them toward centralization.

        2. 5

          (disclaimer: I work for Google, not on this, I speak for myself only, I may have had some beer, blah blah blah)

          SSL is already centralized, with a few big vendors (browser and OS vendors) deciding which CA roots to include. Before CT we largely had no idea what CAs were issuing. Now that we have some idea we’re seeing that some of them aren’t keeping their side of the bargain. It’s the responsibility of the people who ship lists of trusted CA roots to their users to use whatever means they can to ensure that the CAs they ship are trustworthy. CT seems like a good tool to help achieve that.
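
          As an illustration of “having some idea” of what CAs are issuing: CT logs expose a small public HTTP API (RFC 6962), so anyone can poll a log and look at what it has recorded. A rough sketch in Python, assuming the requests package; the log base URL is a placeholder, since the real URLs are published in the browser vendors’ log lists:

          ```python
          # Sketch: read a CT log's signed tree head and its newest entries
          # via the RFC 6962 HTTP API. The base URL is a placeholder, not a
          # real log.
          import requests

          LOG = "https://ct.example.com"  # placeholder log base URL

          def signed_tree_head(log=LOG):
              # Returns tree_size, timestamp, sha256_root_hash, tree_head_signature.
              return requests.get(f"{log}/ct/v1/get-sth", timeout=10).json()

          def newest_entries(log=LOG, count=5):
              end = signed_tree_head(log)["tree_size"] - 1
              start = max(0, end - count + 1)
              r = requests.get(f"{log}/ct/v1/get-entries",
                               params={"start": start, "end": end}, timeout=10)
              return r.json()["entries"]  # each entry carries a Merkle leaf plus cert data
          ```

          CT monitors are, very roughly, this plus parsing, de-duplication and alerting at scale.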

          1. 1

            Now that we have some idea we’re seeing that some of them aren’t keeping their side of the bargain

            What’s “the bargain” and what’s “their end”?

            1. 1

              The bargain is that if CAs issue only certificates that are authorized by the domain/organization owners, that use secure protocols, and that don’t contain lies, and if they undergo regular audits to that effect, then they can sell this service and have their root certs included in popular browsers and OSes.