1. 23
  1. 19

    Am I stupid or is this post essentially just, “if someone gets access to your authentication tokens they can impersonate you”? Isn’t this expected? Is your threat model one in which a user who leaves their machine unlocked and logged in at a café should still be safe somehow? How would that work?

    And how is that specific to bearer tokens? If I leave my machine unlocked at a café, someone can take my SSH private key too; that’s not a bearer token. Why is it worse if someone gets my bearer token than if someone gets my SSH private key?

    1. 6

      “if someone gets access to your authentication tokens they can impersonate you”?

      Yes, that seems the complaint.

      Isn’t this expected?

      Currently yes, but I think that’s the idea the post complains about. It would be great if it wasn’t expected.

      SSH keys can do slightly better. A file-based private key without a password is no better than a bearer token. However, we can go a bit further: hardware tokens elevate the guarantee to “you haven’t stolen the actual hardware”. (Yes, you could tunnel the messages, but the driver could also validate the client, so that’s… complicated.) This is similar to the idea of secure enclaves in the post.

      1. 5

        So what’s the solution for an authentication system where 1) I don’t need to re-authenticate (with a password, a hardware token, biometrics, whatever) for every action; and 2) I can safely leave my laptop unlocked in a café without “logging out” anywhere (which is the example proposed in the article)? What alternative to bearer tokens could possibly do that?

        1. 5

          A TPM where the keys are write-only and the computer has no unsigned writable persistent storage. Where the only thing of value on the machine is an inaccessible proof of identity that allows you to access someone else’s machine.

          A thin client for the cloud, effectively.

          I don’t think I want to own such a machine.

      2. 1

        Why is it worse if someone gets my bearer token than if someone gets my SSH private key?

        That depends on what you can do with your SSH private key and with that specific Bearer token…

        At least with SSH you can improve the security of your private key somewhat by encrypting it with a passphrase on disk (the default), or somewhat more by using a FIDO2 hardware token. This is not so easy with a Bearer token, as those are supposed to be usable without “user presence”.

        1. 8

          So the complaint doesn’t have to do with bearer tokens; the complaint has to do with any system where the user doesn’t have to re-enter their password (or re-authenticate through other means) with every action.

          1. 4

            On the orange site mjg59 expands on his post thusly:

            Hypothetical: I have a Github Enterprise org. Users log in via my identity provider to gain access. Github then issues a long-lived oauth token to the Github Desktop app. An attacker compromises a user’s laptop and copies that oauth token. That attacker now has access to all my private repos until I notice and revoke it.

            Finding a solution to this particular problem without destroying the UX would be very nice!

            1. 1

              What about a rolling code system (similar to RKE used for vehicle fobs)? If someone steals & uses your key state, then your next code won’t work (because the attacker already used it). At that point you are forced to re-authenticate and invalidate the stolen state.
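              As a rough sketch of what that might look like, here is a toy rolling-code scheme in Python. All names are hypothetical, and a real RKE fob uses a dedicated cipher rather than HMAC; the look-ahead window is there to tolerate codes that were generated but never received by the server:

```python
import hmac, hashlib, os

class RollingServer:
    def __init__(self, shared_secret: bytes):
        self.secret = shared_secret
        self.counter = 0   # last counter value the server accepted
        self.window = 16   # tolerate a few codes lost in transit

    def expected_code(self, counter: int) -> str:
        msg = counter.to_bytes(8, "big")
        return hmac.new(self.secret, msg, hashlib.sha256).hexdigest()

    def try_authenticate(self, code: str) -> bool:
        # Accept any counter within the look-ahead window, then advance
        # past it so the same code can never be replayed.
        for c in range(self.counter + 1, self.counter + 1 + self.window):
            if hmac.compare_digest(code, self.expected_code(c)):
                self.counter = c
                return True
        return False  # stale or already-used state: force re-authentication

class RollingClient:
    def __init__(self, shared_secret: bytes):
        self.secret = shared_secret
        self.counter = 0

    def next_code(self) -> str:
        self.counter += 1
        msg = self.counter.to_bytes(8, "big")
        return hmac.new(self.secret, msg, hashlib.sha256).hexdigest()

secret = os.urandom(32)
server, client = RollingServer(secret), RollingClient(secret)
stolen = client.next_code()
assert server.try_authenticate(stolen)      # attacker spends the stolen code first
assert not server.try_authenticate(stolen)  # a replay of the same code now fails
```

              The key property is that each code is single-use: whoever presents a code second, legitimate user or attacker, is rejected and forced back through full authentication.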

              1. 2

                I have a laptop and a desktop.

                I reload and my connections get lost.

                1. 1

                  Your laptop & desktop would not be sharing the same token/key.

                  1. 1

                    That doesn’t fix the case where a connection crashes mid-exchange. How do you verify that you received the new token, other than by using it?

                    1. 1

                      I don’t understand your question. Can you provide a concrete scenario?

      3. 15

        “Bearer token”, here, seems to have the same meaning as “capability”. Capabilities are generally considered a Good Thing.

        The article is complaining that once a client holds a capability the server can’t do finer-grained authentication, to make sure someone else didn’t copy the capability or steal the device … but isn’t that just asking for another layer of authentication on top of the capability?

        How does this not turn into an infinite regress of authentication, where you need layer n+1 to prove the client didn’t unlawfully get access to layer n?

        1. 3

          Thank you! Any time anyone presents a capability system, you can usually make them sad by asking ‘how do you handle revocation?’. It’s the hard problem for any capability system (for ACL-based systems, ask how they handle delegation). There are a lot of solutions that all have different tradeoffs. The ones valid in a distributed-systems setting are:

          • You can add a layer of indirection, so that the token that the user provides is something that allows you to query the ‘real’ capability. In the web case, you’d get a token that lets you run a query to say ‘is this user authorised’.
          • You can add a separate revocation mechanism, along the lines of certificate revocation lists, that someone accepting the capability must periodically query.
          • You can use temporal bounds, such that the capability automatically expires and the user must re-request it periodically (Kerberos does this).

          You can combine these in various ways, to trade off the total number of round trips versus the damage that a compromise can do. For example, if every capability is bounded to no more than a week’s validity, then you bound the maximum size of a revocation list that anyone needs to check. You can use the indirection layer to provide temporal bounds, by deciding that you’ll keep the ‘is valid’ value cached for an hour (rather than checking on every request) and accept that if someone exfiltrates the capability then they can compromise the system for an hour (or longer if the user doesn’t mark the capability as invalidated).
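          As a rough illustration, the second and third mechanisms can be combined in a few lines. The token format here (a made-up payload.signature pair) is illustrative only; in practice JWTs or similar play this role:

```python
import hmac, hashlib, json, time, base64

SERVER_KEY = b"demo-key"   # assumed shared verification key, for illustration
revoked: set[str] = set()  # revocation list; its size is bounded by token lifetime

def issue(user: str, ttl_seconds: int = 7 * 24 * 3600) -> str:
    payload = json.dumps({"sub": user, "exp": time.time() + ttl_seconds})
    body = base64.urlsafe_b64encode(payload.encode()).decode()
    sig = hmac.new(SERVER_KEY, body.encode(), hashlib.sha256).hexdigest()
    return f"{body}.{sig}"

def verify(token: str) -> bool:
    body, sig = token.rsplit(".", 1)
    good = hmac.new(SERVER_KEY, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, good):
        return False
    if token in revoked:            # explicit revocation-list check
        return False
    claims = json.loads(base64.urlsafe_b64decode(body))
    return time.time() < claims["exp"]   # temporal bound

tok = issue("alice")
assert verify(tok)
revoked.add(tok)                    # user reports the token stolen
assert not verify(tok)
assert not verify(issue("bob", ttl_seconds=-1))  # already expired
```

          Because every token expires within a week, entries older than that can be dropped from the revocation list, which is exactly the size-bounding tradeoff described above.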

          The first approach is often done with the layer of indirection existing in a trusted module on the client, such as a TPM or U2F token. In these cases, you require that the capability is signed by a private key that is physically protected. You then need to handle revocation only of the key (if the device is stolen).

          1. 3

            You are exactly correct. Modern capability systems instead allow revocation of some sort, either as an easy pattern or as a primitive building block.

          2. 2

            This somewhat conflates authentication with device binding. It’s easier and more convenient to separate these into layers.

            JWTs allow for authenticating requests and enriching them with information from IDPs, such as scopes and roles, without requiring coordination. If your concern is the device being compromised to the point of breaking the transport-layer encryption, there are ways to bind that token to devices, such as additionally signing the requests with e.g. a hardware token or certificates from a TEE.
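            A minimal sketch of that kind of device binding, assuming a per-device secret that in practice would be sealed inside a TPM or TEE rather than held in process memory:

```python
import hmac, hashlib, os

bearer_token = "opaque-bearer-token"  # illustrative stand-in for a real token
device_key = os.urandom(32)           # in reality sealed inside a TPM/TEE

# Server-side registry mapping token -> device key, established at enrolment.
bindings = {bearer_token: device_key}

def sign_request(token: str, body: bytes, key: bytes) -> str:
    # Client side: co-sign each request with the device key,
    # so the bearer token alone is useless to an attacker.
    return hmac.new(key, token.encode() + body, hashlib.sha256).hexdigest()

def verify_request(token: str, body: bytes, signature: str) -> bool:
    key = bindings.get(token)
    if key is None:
        return False
    expected = hmac.new(key, token.encode() + body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(signature, expected)

body = b'{"action": "list-repos"}'
sig = sign_request(bearer_token, body, device_key)
assert verify_request(bearer_token, body, sig)
# A stolen token without the device key cannot produce a valid signature:
assert not verify_request(bearer_token, body, "deadbeef" * 8)
```

            The authentication layer (who is this user?) and the binding layer (is this the enrolled device?) stay separate, which is the layering the parent comment suggests.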

            1. 2

              Defence in depth is the only way:

              • A way to identify and limit the harm of tokens that do leak (audit logs, user notifications, blacklists, expiration, etc.)
              • Attempt to tie tokens to devices, where possible (device IDs, etc.)
              • etc

              Bearer tokens don’t have to be awful if you have reasonable controls and you control the authentication. Letting Github, Facebook, etc. control your auth, except for trivial places where auth barely matters, is usually stupid.

              It’s a balancing act and every service will have their own balance of security and convenience.

              1. 1

                one of the core ideas of Zero Trust (that we verify that the client is in a trustworthy state on every access)

                This is news to me. My understanding is simply never trust the client, am I missing something?

                1. 6

                  You may enjoy reviewing the original BeyondCorp paper: https://research.google/pubs/pub43231/