1. 39

  2. 6

    I strongly encourage you not to do this: TOFU is a profoundly bad idea despite its wonderful ergonomics.

    The issue is that it fails in the exact case when it most needs to succeed: when someone is truly targeted by a MITM.

    1. 3

      I agree with this. If you take the time to look into it, there are a number of better options depending on your situation. There are SSHFP DNS records, LDAP (especially with FreeIPA), or, if you use something like Ansible, you can push ssh_known_hosts files out to clients.
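
      As a sketch of the SSHFP route (the hostname and fingerprint below are placeholders, and DNSSEC is needed for the records to be worth trusting), the DNS record plus the client-side option look roughly like this:

          ; zone file: SSHFP <algorithm> <fp-type> <fingerprint>, 4 = Ed25519, 2 = SHA-256
          host.example.com. IN SSHFP 4 2 0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef

          # client-side ~/.ssh/config
          Host *.example.com
              VerifyHostKeyDNS yes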

      It is far better to get into a situation where the message about an unknown host key is so rare that you actually stop to think instead of answering yes as fast as possible.

      1. 2

        SSH is very much used as a TOFU system by most users. Have you actually seen anybody compare fingerprints? I haven’t. This option doesn’t change it from being TOFU, but it does change it from trusting any key at all, which was the de facto default for scripts running on ephemeral systems. And I think we can agree that TOFU is still better than trusting everything.

        1. 1

          > Have you actually seen anybody compare fingerprints?

          I do, and I have even written about how to receive host keys out of band. I don’t mean this as an attack on you, but I consider TOFU to be professional malpractice. For hobby stuff, maybe it is okay, but honestly the Internet is a deeply hostile environment: a lack of paranoia will hurt you eventually.
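
          One way to do that comparison out of band, for instance, is to print the fingerprint on the server over a channel you already trust (physical console, the provider’s web console, an existing verified session) and check it against the first-connect prompt:

              # on the server, via a trusted channel
              ssh-keygen -lf /etc/ssh/ssh_host_ed25519_key.pub

              # on the client, answer "yes" only if the fingerprint in the
              # "authenticity of host ... can't be established" prompt matches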

        2. 1

          It’s true. But it’s still a good compromise.

          As the maintainer of a popular deployment tool, I see people completely disable host checking, or ignore fingerprint checks altogether.

          accept-new gives a sensible default which, multiplied across a large number of users, will give the best overall result.

          1. 1

            To look at it from a contrarian angle though, given that all security measures imply a cost/benefit trade-off, it may be that the amount of time collectively spent by the human race saying “yes” to that prompt (with or without actually checking the fingerprint) is not worth it for the risk it mitigates, **unless password authentication or agent forwarding are in use**.

            My understanding is that for the MITM to work the attacker needs to know that you’re connecting to a given host for the very first time, and time the attack accordingly. If successful, the worst that can happen is that you’re sent to a honeypot which, in order to fool you, would need to be built with a level of knowledge of the actual system that seems… unrealistic in most scenarios. So yes, by all means think about the implications of StrictHostKeyChecking="accept-new", but consider the advantages in your specific circumstances.

            1. 1

              > If successful, the worst that can happen is that you’re sent to a honeypot which, in order to fool you, would need to be built with a level of knowledge of the actual system that seems… unrealistic in most scenarios.

              If the user is using a password or ssh-agent, the MITM could conceivably accept his connexion, then use his password or agent to connect to the target system and escalate local privileges to get root. From there it could rewrite /etc/ssh/ssh_host* so the target’s keys really are the false keys, install a small hidden service to tunnel traffic from the interception point to the suborned host, and insert falsified logs and run fake sshd processes to make it appear that connexions terminated at the real sshd. It would take a fraction of a second.

              Honestly, it sounds a lot easier than I first thought.

              1. 1

                Sure, that’s why I excluded (in bold) password auth or agent forwarding. :-)

          2. 3

            Does anyone have a bash one-liner to parse https://api.github.com/meta to known_hosts format?
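
            For what it’s worth, something like this should do it, assuming curl and jq are available and that the ssh_keys field of that endpoint still carries GitHub’s host public keys:

                curl -fsSL https://api.github.com/meta \
                  | jq -r '.ssh_keys[] | "github.com \(.)"' >> ~/.ssh/known_hosts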

            1. 1

              If you want to grab keys in an automated way, use ssh-keyscan.

              Of course a MITM attack on either that or grabbing a web URL can give you compromised keys, so you don’t want to be refreshing this regularly unless changes alert a human. Keys stored in your own git repo and used as part of a deployment are a lot better than doing a fresh scan on each new deployment or leaving it to TOFU for each user.
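
              For example, a one-time scan that gets eyeballed against a trusted source and then committed, rather than re-run on every deploy, might look like:

                  # run once, review the output, commit it alongside the deployment code
                  # (the deploy/ path is just an example)
                  ssh-keyscan -t ed25519,rsa github.com > deploy/known_hosts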

              1. 1

                At least with curl, you presumably can trust the TLS certificate.

            2. 2

              Interesting, what do you mean by this being a better compromise for scripts? I’m not sure I see where this would be much different in that context.

              1. 2

                I’m working on a deployment tool, https://deployer.org/, and if you want to use git and clone a repo for the first time (for example from CI), you need to manually log in to the server and run an ssh command to github.com to update known_hosts.

                With accept-new this workflow is automated and no manual setup is needed.
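
                Roughly, the first clone on a fresh CI runner then becomes something like this (a sketch; the repo is a placeholder):

                    # the host key is recorded on first contact instead of prompting
                    GIT_SSH_COMMAND='ssh -o StrictHostKeyChecking=accept-new' \
                      git clone git@github.com:example/app.git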

                1. 1

                  I imagine it’ll be better for scripts that issue multiple SSH commands. You can verify the remote end hasn’t changed host keys between the two (or more) invocations of SSH, whereas with “no” you just accept whatever the host key is, whether it changes or not.

                  You can’t tell if the host changes between script runs but you can be sure the host hasn’t changed during the current run.
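
                  A sketch of that idea, using a throwaway known_hosts file for the run (host and commands are placeholders):

                      # the first ssh records the key; the second fails loudly if it changes mid-run
                      KH=$(mktemp)
                      ssh -o StrictHostKeyChecking=accept-new -o UserKnownHostsFile="$KH" deploy@example.com 'step one'
                      ssh -o StrictHostKeyChecking=accept-new -o UserKnownHostsFile="$KH" deploy@example.com 'step two'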

                  1. 4

                    I solve this in CI by putting the host’s key in a variable and writing that to known_hosts. I would think the odds of a host key changing in between commands of a job would be tiny, and by that point the damage could already be done anyway.

                    It’s still “trust on first use”, but that first use is when I set up CI and set the variable, not at the start of every job.
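
                    Something like this, where SSH_KNOWN_HOSTS is a hypothetical CI variable holding the full "hostname keytype key" lines (which is what known_hosts actually stores), captured when the job was set up:

                        mkdir -p ~/.ssh
                        printf '%s\n' "$SSH_KNOWN_HOSTS" >> ~/.ssh/known_hosts
                        chmod 600 ~/.ssh/known_hosts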

                    1. 3

                      I think this is the correct way to do it; I do this as well for CI jobs SSH-ing to longer-lived systems.

                      If the thing I’m SSHing into is ephemeral, I’ll make it upload its ssh host public keys to an object storage bucket when it boots via its cloud-init or “Userdata” script. That way the CI job can simply look up the appropriate host keys in the object storage bucket.

                      IMO any sort of system that creates and destroys servers regularly, like virtual machines or VPSes, should make it easy to query or grab the machine’s ssh public keys over something like HTTPS, like my object storage bucket solution.
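
                      A rough sketch of both halves (the bucket name, the AWS CLI, and the TARGET_HOST variable are all assumptions; any object store and client would do):

                          #!/bin/bash
                          # user-data / cloud-init script run at boot on the ephemeral host
                          for f in /etc/ssh/ssh_host_*_key.pub; do
                            aws s3 cp "$f" "s3://example-hostkeys/$(hostname)/$(basename "$f")"
                          done

                          # CI side: turn the published public key into a known_hosts line
                          aws s3 cp "s3://example-hostkeys/$TARGET_HOST/ssh_host_ed25519_key.pub" - \
                            | sed "s/^/$TARGET_HOST /" >> ~/.ssh/known_hosts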

                      I guess this is a sort of pet peeve of mine. I was always bugged by the way that Terraform’s remote-exec provisioner turns off host key checking by default, and doesn’t warn the user about that. I told them this is a security issue and they told me to buzz off. Ugh. I know it’s a bit pedantic, but I always want to make sure I have the correct host key before I connect!!! Similar to TLS, the entire security model of the connection can fall apart if the host key is not known to be authentic.

                    2. 2

                      Unless you’re clearing the known_hosts file (and if so, WTF), I don’t see why there would be a difference between consecutive connections within a script and consecutive connections between script runs.

                      1. 4

                        Jobs/tasks running under CI pipelines often don’t start with a populated known_hosts. Ephemeral containers too. Knowing you’re still talking to the same remote end (or someone with control of the same private key at least) is better than just accepting any remote candidate in that case.

                        Less “clearing the known_hosts file”, more “starting without a known_hosts file”.