1. 49
    1. 23

      I have a nagging feeling that I’m missing something here. It doesn’t seem right that such an obvious solution would have been left on the table, by everyone, for decades.

      Browser vendors.

      They’re Why We Can’t Have Nice Things; they refuse to add UI for basic HTTP and TLS-level features and force everyone to roll their own replacements at a higher level that tend to suck. Imagine if browsers implemented HTTP Basic Auth in a way that didn’t look like it was straight out of 1996 … how much pointless code wouldn’t need to be written.

      1. 8

        Managing a custom CA and renewal and everything is a serious pain worth avoiding. Especially when dealing with errors for non-technical users. UX is terrible and keeping a secret file was asking too much of many people. That’s why https + username + password won in e-commerce. Lowest friction.

        Enterprises are different, of course. Fewer users to worry about. Centralized, specific documentation for a reduced set of supported client software.

        1. 1

          Especially when dealing with errors for non-technical users. UX is terrible and keeping a secret file was asking too much of many people.

          I can see why you wouldn’t want to use your Mozilla hat on this post..

          1. 2

            what do/did you see? Curious to hear if our understanding aligns.

            Parts of this thread are about browsers but my experience and my comment aren’t. I co-managed a tiny CA with computer security students about 10 years ago. The failure modes were harsh, and breaking assignment labs is a lot of bad stress. I don’t wanna know what it’s like with paying customers.

            I haven’t done any crypto related stuff at Mozilla. Mostly focusing on web/browser security. Doesn’t really make sense to use the hat, don’t you think?

            1. 3

              You state that https + username + password won because they’re the lowest friction, and you’re right. You also state that this is because of (among others) bad UX with other solutions. You’re also right there.

              Bad UX is a browser problem; no browser has done any serious work on a generic authentication UX. Basic Authentication in Firefox still presents a dialog box that looks like it was made in the 90s. Client-side certificate management is cumbersome, and using client-side certificates is hard. These are not technology problems; these are UX problems.

              Our situation would be better, considering both security and UX, if browsers made authentication a first-class citizen. Web developers would have it easier, users would have a more consistent experience, and we would not have so many custom broken login implementations, because in that timeline letting the browser handle the authentication would have been the solution with the lowest friction.

              Because of this, I see browser vendors as a big part of the problem, hence my remark about you not wearing your hat. Mozilla made a step in the right direction a while ago when they announced Persona, but it’s been discontinued for longer than it has been alive now.

              1. 1

                Whatever blame you’re trying to throw, it won’t stick. I’m not your crypto/logins guy. Anyway, you might wanna try WebAuthn to solve this properly? It doesn’t have the tracking issues either.

      2. 4

        “Why use existing layers as a basis to the layer above while we can replace layers below with extra layers put on top”

        We are so used to this scheme that it looks familiar everywhere we go.

      3. 1

        Agreed: it’s a “nice” solution from a system design standpoint, but sometimes IRL I re-open a browser window and 5 tabs each suddenly need my PIN, one after the other, or they never default to the right cert, &c. Plus even when it works, it can be super laggy.

        If the user agent was a more effective key agent, it would be great!

    2. 13

      Client certificates are an important piece of the way the Google corporate network is set up. Each client machine has a cert that it presents to a proxy, which checks it and allows access to the rest of the network from anywhere in the world.

      https://cloud.google.com/beyondcorp/ and https://research.google/pubs/pub45728/ if you like whitepapers.
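      Conceptually, the proxy-side check is just TLS client verification against the corporate CA. A minimal sketch of what that might look like in nginx terms (the ssl directives are standard nginx ssl-module directives; the hostnames and file paths are illustrative, not Google’s actual setup):

      ```nginx
      server {
          listen 443 ssl;
          server_name intranet-proxy.example.com;

          ssl_certificate     /etc/ssl/proxy.crt;
          ssl_certificate_key /etc/ssl/proxy.key;

          # Require a client certificate chaining to the corporate CA
          ssl_client_certificate /etc/ssl/corp-ca.crt;
          ssl_verify_client on;

          location / {
              # Forward the verified identity to internal services
              proxy_set_header X-Client-DN $ssl_client_s_dn;
              proxy_pass http://internal-backend;
          }
      }
      ```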

      1. 3

        Google, and many other corporate networks.. Using client certs for authentication has been around for quite some time.

        1. 1

          Good news! All it takes now is bringing it out of end-to-end managed environments like corporations and universities in a way that everybody can use.

        2. 1

          DoD CAC is a good example as well.

    3. 10

      Almost all Spanish government web apps work with client-side certificates, which are generated by the government itself for every citizen who requests them. You use them for taxes, health insurance, driver licenses, etc. I use them just fine, BUT most non-technical people think this workflow is complicated, so they eventually added more traditional login systems that rely on user/password.

      1. 11

        [Spanish citizen here]

        I think all of the technical issues people have with this approach would be solved easily if browser vendors cared enough to make a decent UI for setting the certificates up. The process right now is quite cumbersome.

        Also, a lot of people lose their certificates and get annoyed when they need to go in person to obtain a new one, despite this being the absolute safest way to issue certificates.

    4. 4

      Funny, this showed up right next to this video on gemini in my feed. I am by no means an expert on gemini or TLS, but it looks like client-side certificates through TLS will be the primary way to authenticate users of the gemini protocol, so at least one other group of people has thought this to be a good idea. I must say it seems like an elegant and simple solution to the problem.

      1. 5

        That was my idea. I wrote the first Gemini server and because I was using libtls from LibreSSL it was trivial to support client certificates on the server. The current Gemini specification is a bit … wonky with the client certificate stuff and it’s being hashed out right now on the mailing list.

    5. 4

      Completely agree that client certs are long overdue. For people interested in the topic: maybe have a look at Redwax, which is trying to make the use of client-side certificates a lot more convenient. It consists of a series of modules for the Apache HTTP Server that can be combined to form various types of certificate authorities.

      And for an even more comprehensive approach, perhaps also TLS Pool could be interesting. This attempts to handle the whole TLS flow (including managing client side certificates) through a daemon, so normal applications don’t have to bother.

    6. 3

      This is probably not the reason, but it’s still something worth keeping in mind: X.509 certificates are large in their default state because RSA keys are large. These examples use DER, since PEM is just a thin base64 wrapper around DER.

      $ openssl req -nodes \
          -newkey rsa:2048 \
          -keyout x.key -out x.crt \
          -days 365 -outform DER -x509 \
          -subj '/CN=API client #1337'
      $ ls -l x.crt
      -rw-r--r-- 1 xh xh 795 MMM DD TT:TT x.crt

      And you have that overhead on every connection, even more if you have a certificate chain. Plus the overhead of actually doing the asymmetric cryptography, which may be faster or slower than whatever people do on the OAuth backend, depending on the exact loads involved. OAuth tokens are substantially smaller.

      You can at least alleviate this to some extent by using elliptic curve keys, but then you’re off the beaten path. What’s more, ECDSA is notoriously fragile.

      $ openssl genpkey -genparam \
          -out ec.param \
          -algorithm EC \
          -pkeyopt ec_paramgen_curve:P-256
      $ openssl req -nodes \
          -newkey ec:ec.param \
          -keyout x.key -out x.crt \
          -days 365 -outform DER -x509 \
          -subj '/CN=API client #1337'
      $ ls -l x.crt
      -rw-r--r-- 1 xh xh 399 MMM DD TT:TT x.crt

      The actual public key is just 65 bytes (04 to indicate uncompressed key, 32 bytes of x-coordinate and 32 bytes of y-coordinate); compression isn’t widespread either due to patent issues that only somewhat recently got resolved by patent expiry. This means that there are 334 bytes of overhead, a lot of which is related to ASN.1/DER encoding and other X.509 metadata.
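      To see that wrapping concretely (a sketch; the file name is arbitrary, and an OpenSSL build with P-256 support is assumed): even before any X.509 certificate structure is added, the DER-encoded SubjectPublicKeyInfo alone is 91 bytes, i.e. the 65-byte point plus 26 bytes of ASN.1.

      ```shell
      # Throwaway P-256 key (illustrative file name).
      openssl ecparam -name prime256v1 -genkey -noout -out p256.key
      # DER-encoded SubjectPublicKeyInfo: the 65-byte uncompressed
      # point plus ASN.1 wrapping, 91 bytes in total.
      openssl pkey -in p256.key -pubout -outform DER | wc -c
      ```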

      RFC 7250 lets you use raw public keys in place of certificates (RFC 8446, section 2, p. 13 for TLSv1.3), but support is not very widespread, you’re very much off the beaten path and have no way to indicate expiry other than manual revocation. And you want to be on the beaten path because otherwise you’ll probably run into some issue or another with implementation support. Certificates for EdDSA keys (yay!) theoretically exist, too (RFC 8446, section 4.2.3, p. 43), but you can basically pack it up if you need to interoperate with anything using either an off-beat TLS library, anything in FIPS mode or anything older than two years.

      1. 5

        This means that there are 334 bytes of overhead

        I have a solution: remove all the tracking cookie junk we’re getting forced on us and add this instead, win-win! Browser-controlled session cookies for first-party connections only could be so very good…

    7. 2

      Yup. I’ve felt this way for a long time. We’re adding client cert support to Couchbase Lite right now … dealing with X.509 and TLS APIs is a PITA, but it’s worth it.

    8. 2

      Worth mentioning the WebAuthn protocol (an evolution of FIDO), as it supports things like fingerprint readers on users’ devices.

      For most end users, that would be better and easier than Client side certificates.

      Granted, this does not solve the same use case as OAuth, but given that there is a lot of discussion of authentication methods here, I wanted to mention WebAuthn.

    9. 2

      Many banks that I’ve dealt with in eastern Europe didn’t support username/password, just straight client certs. It was a pain to transfer or authorize a new computer, and it doesn’t work on a shared host. There were a few times I had to re-dual-boot because the cert was somewhere else. I’d be interested in this being combined with something like a password manager…

      1. 1

        Don’t things like Firefox Sync also sync the certs? If they don’t, they should.

    10. 1

      awful OpenSSL CLI to generate a CSR

      mbedtls_gen_key type=ec filename=privkey.pem format=pem
      mbedtls_cert_req filename=privkey.pem output_file=request.pem subject_name="CN=example.com"

      MbedTLS lacks Extended Key Usage for now (it does support Key Usage, though)… but otherwise works just as nicely for that.
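      For comparison, the OpenSSL incantation for the same thing isn’t too bad either (a sketch, assuming OpenSSL ≥ 1.1.1, where `req` can generate the EC key inline; file names are arbitrary):

      ```shell
      # Generate a P-256 key and a CSR for it in one command.
      openssl req -new -nodes \
          -newkey ec -pkeyopt ec_paramgen_curve:P-256 \
          -keyout privkey.pem -out request.pem \
          -subj '/CN=example.com'
      # Sanity-check the resulting request.
      openssl req -in request.pem -noout -verify
      ```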

    11. 1

      A simpler version, with less flexibility would be, as a client, to generate a CSR and get a certificate against the API you want to query.

      This means that APIs could even buy a trusted intermediate CA from a trusted root (although it’s expensive) and sign client certificates with the access rights in the metadata too.

      As a client, I just need to use the client cert (and the API’s CA cert) to have a secure connection. I can receive a notification from the API owner that my cert is about to expire and just regenerate a CSR for my apps.

      Then the burden of keeping the CA private isn’t on the client but on the API owner’s side.
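      The whole round trip is small enough to sketch with plain openssl (file names and subjects are illustrative; the point is that `api-ca.key` never leaves the API owner):

      ```shell
      # API owner, once: the CA used to sign client certificates.
      openssl req -x509 -nodes -newkey rsa:2048 -days 3650 \
          -keyout api-ca.key -out api-ca.crt -subj '/CN=Example API CA'

      # Client: generate a key pair and a CSR; only the CSR is sent off.
      openssl req -nodes -newkey rsa:2048 \
          -keyout client.key -out client.csr -subj '/CN=api-client-42'

      # API owner: sign the CSR and return client.crt to the client.
      openssl x509 -req -in client.csr -CA api-ca.crt -CAkey api-ca.key \
          -CAcreateserial -days 365 -out client.crt

      # Client: confirm the issued cert chains to the API's CA.
      openssl verify -CAfile api-ca.crt client.crt
      ```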

    12. 1

      Likely worth mentioning that https://github.com/smallstep/ exists and would be worth a peek.

    13. 1

      My best guess about why OAuth took off while client certificates did not is that one can implement the client side of the former in JavaScript and run it in a browser, but probably not the latter.

      1. 6

        Every client implementation of OAuth which has no server-side component is leaking their secret key, which is a Bad Thing you are Not Supposed To Do.

        1. 3

          With OAuth2, app secret keys should not exist for public clients (no matter if web or native, keys can be extracted either way).

          The old way of implementing public clients was implicit grant; now it’s PKCE (RFC 7636).
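            For the curious, the PKCE S256 challenge is just base64url(SHA-256(verifier)) per RFC 7636 — small enough to compute in shell (a sketch; the variable names are mine, and only openssl plus standard tools are assumed):

            ```shell
            # Random, high-entropy code verifier (RFC 7636 section 4.1).
            code_verifier=$(openssl rand -base64 32 | tr '+/' '-_' | tr -d '=')

            # S256 code challenge: base64url(SHA-256(verifier)), unpadded
            # (RFC 7636 section 4.2).
            code_challenge=$(printf '%s' "$code_verifier" \
                | openssl dgst -sha256 -binary \
                | openssl base64 | tr '+/' '-_' | tr -d '=')

            echo "$code_challenge"
            ```

            The server stores the challenge at authorization time and later checks the token request’s verifier against it, so no static secret ever ships to the client.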

        2. 2

          I think it’s important to point out that this was a limitation of OAuth, but not a limitation of OAuth2, which is what most implementations use these days. I am assuming @jimdigriz is assuming OAuth2, and you OAuth 1.0. Are my assumptions correct?

          *Edit—I am wrong here. I forgot that there’s a client secret possible for some OAuth2 flows as well.

        3. -1

          Not all client implementations do: https://gitlab.com/jimdigriz/oauth2-worker

          1. 10

            This is a placebo of the worst order, a “security” library which is nothing of the sort. This will still leak your client secrets, it is logically impossible not to if you’re completing the OAuth flow from the client side.

            1. -1

              So instead of walking through the methodology in there, you just decided to make wild statements.


              1. 9

                I did read it, and I understand what it’s trying to do, and what it’s not doing is keeping your client secret a secret. It’s keeping it from other JavaScript, but anyone on your page can find the secret by popping up the network panel and watching the request go out. Try it yourself.

                This library should come with a big bold header telling you that it does absolutely nothing at all to keep your client secret safe from an adversary who knows where the “view source” button is, and that it would be impossible to do so effectively.

                Client side security, isn’t. This is a cold, hard fact.

                1. -2

                  Pray, do tell, what server-side component would fix this attack vector?

                  1. 11

                    If you put it on the server, then you don’t have to send it to the client! The whole point of the secret token is that you keep it on your server, behind your firewall, and then issue the request to obtain the exchange token there. NOT in code that you’ve sent to the client to do whatever they want with.

                    CLIENT SIDE CODE IS NOT SECURE.

                    1. 1

                      What does the client use to authenticate itself? What is it that the client communicates to authenticate itself? Your statement of ‘put the token on the server’ is nonsensical, as the problem is authenticating a user agent.

                      No one gives a damn what an untrusted client does with a token; it is authorised for whatever it’s allowed to do, no more, no less.

                      You also seem to be trying to add substance to your argument by making out this secret works like a root credential, which is an implementation detail and irrelevant.

                      Your arguments read as if you need to swot up on your AAA.

                      1. 8

                        I don’t follow your line of questioning, but from the linked repo.

                        client_secret [optional and not recommended]: your application secret

                        SPAs are considered a public (‘untrusted’) client as the secret would have to be published, making it no longer a secret and pointless

                        This is what you risk losing, not the access token. The application secret has to be stored somewhere, and if there is no server-side component then it’s on the client, which, as the documentation says, is insecure.

                      2. 4

                        No one gives a damn what an untrusted client does with a token; it is authorised for whatever it’s allowed to do, no more, no less.

                        Do I understand correctly that you actually don’t care that an untrusted client has access (even limited) without you knowing it?

                        Why do you need oauth2 then?

                      3. [Comment from banned user removed]

                        1. -4

                          God I pity those that have to work with you. Rockstar coder I assume?

                          1. 21

                            Folks, let’s all try and be civil and charitable here.

      2. 2

        Over in the 802.1X world, EAP-TLS (client certificates) is mostly a pain not at initial provisioning but at renewal of the certificates and the UX around that. Though a mostly awful standard, EAP-FAST tried to address this (including provisioning), and now things like TEAP are coming out of the woodwork.

        1. 1

          EAP-TLS of course is fixed by having irresponsibly long validity times on the certificates, in the hope that the user tosses the device before the certificate expires.. And then you hope your user comes to you to get a new certificate instead of reusing the one from their old device..