1. 22

  2. 7

    For anyone interested, I did a write-up below of my methods and categories of obfuscation with plenty of references to follow as usual.

    https://lobste.rs/s/psw6us/obfuscation_for_security_techniques_for

    1. 3

      I think you touch on it in the other thread, but in my mind, obscurity is always a likelihood modifier. When I rank risk, I talk about

      • does an attacker require special tools or skills?
      • can the attacker gain intimate knowledge of the application/system elsewhere?
      • can the vulnerability be uncovered via some other mechanism?

      Obscurity is a pretty valid likelihood modifier: it’s not a security control in and of itself, but it decreases the likelihood of discovery and successful exploitation.

      1. 2

        All true. I just don’t know whether I want to use a different term, since overall security is always probabilistic. There are specific mechanisms that provide high assurance of doing specific kinds of things. Past that, we’re constantly modifying likelihoods in this area or that, whether with traditional mitigations or obfuscations.

        1. 2

          I use NIST 800-30-style risk ranking, so it’s a combination of Likelihood x Impact. For example:

          The likelihood is low for the following reasons:

          • Attackers must transit the firewall via another exploit, or be a privileged attacker.
          • Attackers must then determine the custom RPC mechanism.
          • The RPC mechanism is bespoke and not documented outside of the company, further lowering likelihood.

          The impact is high for the following reasons:

          • An attacker with this level of access would have direct access to internal APIs.
          • The system’s APIs contain SPII and financial data on a large number of customers.
          • Leakage of customer SPII or financial data could have widespread regulatory, reputational, or financial impact.

          Then push it into the matrix and you get a “Medium” overall, or whatever.
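
          The lookup described above can be sketched as a toy function. The 3x3 mapping below is illustrative only, not the actual 800-30 appendix table; `overall_risk` is a made-up name for this sketch.

```python
import math

# Toy qualitative risk lookup in the spirit of NIST 800-30-style ranking.
# The level scale and the combining rule are assumptions for illustration.
LEVELS = ["Low", "Medium", "High"]

def overall_risk(likelihood: str, impact: str) -> str:
    # Average the two ordinal positions, rounding up on a half step
    # (the conservative choice when the two ratings straddle a level).
    score = math.ceil((LEVELS.index(likelihood) + LEVELS.index(impact)) / 2)
    return LEVELS[score]

# The example from the write-up: low likelihood, high impact.
print(overall_risk("Low", "High"))  # -> Medium
```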

          1. 2

            If doing it that way, one other thing I’d do is note how often attacks actually happen on specific components or types of mitigations. The reason is that I’m making predictions when I make claims like that, and adding the numbers lets evaluators check my math, so to speak, on likelihood. A good evaluator will prefer having something real to work with. I’m outside the industry right now, though, so I can’t say what current preferences are.

            1. 2

              Depends on the client, really. For government, many have requested NIST 800-30 with specific modifiers, or their own risk ranking. In finance, it’s a mixed bag: some prefer write-ups, some prefer numbers. I go with what clients want, but many are happy with NIST 800-30, and it’s a standard to point to (and an old one, initiated in 2002). I’ve not seen too many complaints in my ten-ish years in sec when talking about risk justification, obviously backed up with technical details. Works nicely.

              1. 2

                Appreciate you sharing your experiences on that.

    2. 4

      Another option: just don’t look at your auth logs.

      My server gets thousands of preauth failures a day, because the bots don’t even bother attempting a curve25519 key exchange. If I moved my port to 24, I would still have thousands of useless packets arriving every day, they just wouldn’t show up in my logs.

      I am probably vulnerable to a talented, targeted attack, so it makes sense to allocate all of my security-related energy to limiting that risk, not to fussing about my auth logs.

      1. 2

        Yeah but the packets will get dropped immediately instead of going through a handshake and bothering your sshd process.

        I just drop all packets from non-US IPs, and permanently ban any IP that tries to connect to my server more than once in 10 seconds. Is this valuable? Maybe, maybe not. But it was a fun way to learn more about pf.
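
        A pf setup in that spirit might look like the sketch below. The table names and the US-networks file are assumptions for illustration, not a drop-in config; the `<us>` table would have to be populated out of band from a geo-IP feed.

```pf
# Banned IPs accumulate here; "persist" keeps the table across rule reloads.
table <bruteforce> persist
table <us> persist file "/etc/pf/us-networks"   # assumed geo-IP list

# Drop banned hosts and non-US sources outright.
block in quick on egress from <bruteforce>
block in quick on egress proto tcp to port ssh from ! <us>

# Allow SSH, but ban any source making more than 1 connection per 10
# seconds, flushing all of its existing states.
pass in on egress proto tcp to port ssh keep state \
    (max-src-conn-rate 1/10, overload <bruteforce> flush global)
```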

        1. 2

          In what way would my sshd (which uses ~0% CPU on average) be bothered by performing its primary function?

          1. 1

            It wouldn’t, really; it’s more of a mental/perfectionist thing.

          2. 1

            The ones to worry about are foreign actors using compromised hosts in the USA.

        2. 1

          Obscurity is fine against the bots, but you’re still vulnerable against targeted attacks.

          1. 6

            The article is quite clear that obscurity is, at most, something extra on top of something that is already good.

            Obscurity can be extremely valuable when added to actual security as an additional way to lower the chances of a successful attack, e.g., camouflage, OPSEC, etc.

            1. 3

              It stops most targeted attacks if they can’t figure out what you’re using. Just don’t advertise it, get workers to sign NDAs on it, pay for it with little traceability, and so on. In many cases, the obfuscation can be a tiny change to the protocol or implementation of something important. Nothing even has to change hands. For low-volume servers, I used to like deniable port knocking combined with PowerPC processors that advertise themselves as x86 boxes. EOL’d PPC stuff from Apple is still available on eBay to this day, with Freescale making nice CPUs and boards. Just gotta make sure whatever board you get has an up-to-date BSD or Linux available.
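
              The port-knocking half of that can be sketched as a tiny client. The port sequence, host, and `knock` helper here are made-up assumptions; the idea is only that the firewall watches for the pattern of SYNs before opening the real SSH port.

```python
import socket

# Made-up secret sequence; a real deployment keeps this out of the repo.
KNOCK_SEQUENCE = [7031, 8022, 9013]

def knock(host: str, ports, timeout: float = 0.3) -> None:
    """Send short-lived TCP connection attempts at each port in order."""
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            try:
                s.connect((host, port))  # the SYN itself is the signal
            except OSError:
                pass  # refused/filtered is expected on closed ports
```

After knocking, the client would connect to the (now briefly open) sshd port as usual.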

              1. 1

                That’s a cute trick. Just as jump boxes?

                1. 1

                  I had one group with minimal IT needs on PPC Macs as desktops. Those I know doing it say they haven’t had problems since maybe 2010, when I originally recommended it.

                  Now, for people who will do updates and so on, I say use BSD or Linux. Not just as a jump box, but for anything the hardware supports. The thing is that I’ve never seen attackers try a non-common CPU. Ever. There was one person on HN who said their group did multi-CPU malware, but that’d be ultra-rare or a straight-up lie. Btw, ARM is a no-go for this technique thanks to mobile.

                  There’s another level of it in CompSci where the CPU instructions themselves are randomized, or just those in app binaries, with the CPU decrypting them. I stopped at hiding what CPU was used, since I’ve got more interesting things to do with custom CPUs.

              2. 2

                Also, I think there are plenty of obscure things you could do that would at least help against targeted attacks (in addition to actual cryptography). Consider a server that puts the SSH traffic through a Vigenère cipher or something. Who’s going to realize that a ciphertext’s letters have been shifted?
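
                As a thought experiment, the shift could look like the toy below, applied over bytes that are already properly encrypted. The function and key are made up for illustration; this is not a security mechanism on its own.

```python
# Toy Vigenère-style byte shift: add the repeating key forward to wrap,
# subtract it to unwrap. Purely an obscurity layer over real crypto.
def vigenere(data: bytes, key: bytes, decrypt: bool = False) -> bytes:
    sign = -1 if decrypt else 1
    return bytes((b + sign * key[i % len(key)]) % 256
                 for i, b in enumerate(data))

payload = b"already-encrypted ssh record"
wrapped = vigenere(payload, b"obscure")
assert vigenere(wrapped, b"obscure", decrypt=True) == payload
```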