1. 50

  2. 9

    I worry about commenting because there might be some confirmation bias, but the author’s observations seem pretty spot-on based on my own experiences. The biggest takeaways for me are KISS and maybe “use monorepos more”.

    The part about “things seem to be more secure after 2012” is interesting too. Anyone have their own theories? I can’t think of anything immediately. It seems this is around when TypeScript became a thing; per Wikipedia, its initial release was 1 October 2012.

    1. 8

      The part about “things seem to be more secure after 2012” is interesting too.

      It’s definitely security awareness in frameworks, languages, and tools being exposed to devs/engineers directly rather than being part of some secret arcane vault held by security. Around 2012 is when we started to see more and more frameworks being used, and those frameworks attempting to fix whole classes of issues. For example, ORMs largely obviate the need to manually escape SQL queries, and better build systems help us spot functions that introduce the risk of heap/stack overflows/reads/writes. We basically started to fix things at the source rather than as one-offs per project/application/input.

      Do those frameworks still have issues? Absolutely! However, now fixing them fixes all the places using them, rather than attempting to find each individual issue on its own.
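      To make the ORM/escaping point concrete, here’s a minimal sketch using Python’s stdlib sqlite3 driver (not any particular ORM; the table, names, and payload are all made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "' OR '1'='1"  # classic injection payload

# Vulnerable: manual string building leaves escaping to the developer.
unsafe = conn.execute(
    f"SELECT role FROM users WHERE name = '{user_input}'"
).fetchall()

# Parameterized: the driver (and, one level up, an ORM) keeps the query
# and the data separate, so the payload is treated as data, not SQL.
safe = conn.execute(
    "SELECT role FROM users WHERE name = ?", (user_input,)
).fetchall()

print(unsafe)  # [('admin',)] -- the injection matched every row
print(safe)    # [] -- no user literally named "' OR '1'='1"
```

      An ORM sits a layer above this, but the mechanism is the same: escaping stops being the application author’s job, so the whole class of bug is fixed at the source.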

      1. 4

        I’m not sure about the monorepo takeaway. I think monorepos make audits easier, but they are not necessarily connected to better security or business success. One could argue that “if audits are difficult there will be more missed vulnerabilities”, but I’d really want to see that explored in practice rather than as a thought experiment.

        There are also organizational / people issues with monorepos that are nothing to be scoffed at IMO.

      2. 7

        Generally I think this was pretty good, but four points stuck out to me:

        Our highest impact findings would always come within the first and last few hours of the audit.

        I agree about the lowest-hanging fruit shaking out early, but my highest-impact findings almost always (though not invariably) come last during an assessment, when I have a much better understanding of the codebase (unless I’ve been there before and know where to look for the problems). Edit: I just realized it sounded like I was disagreeing, but I wanted to emphasize how much of a difference there is between the two. The things you find later tend towards higher impact and perhaps lower likelihood of discovery, and tend to be those sorts of brutal, dragging findings that might just have to be mitigated rather than remediated.

        All the really bad security vulnerabilities were obvious.

        Completely disagree on this point; sure, you’ll find some like the author mentions, where a token is returned that shouldn’t be or there’s a painfully obvious XSS/SQLi/IDOR, but often it’s the interactions between systems that are the worst problems, and those are non-obvious to discover. I do agree that often it’s nothing clever; I jest that most of my work is just “breaking applications with a child-like sense of wonder and terrible code,” but to say they were obvious is a bit of a leap. I remember one codebase I audited, in the millions of lines of code, that had a piece of vulnerable code that had been overlooked for almost 15 years. Once discovered it was “obvious” how to exploit, but the chain from the frontend to the backend had never been discovered (and indeed, once I had found it I found five more places in relatively quick succession).

        This is also an interesting point:

        Discoverability is everything, when it comes to actual exposure.

        There are lots of papers that talk about likelihood, and I’m not sure I agree here either. For example, I’ve trivially discovered that the only thing protecting a system was UUIDs… so I just needed a UUID weakness or a source of the system’s UUIDs to break it. Generally, when thinking about likelihood, it’s a combination of:

        • likelihood of discovery: how easy is it for an attacker to actually find this weakness within the system?
        • likelihood of exploitation: how easy is it for an attacker to actually exploit this vulnerability?
        • (and on occasion) likelihood of previous occurrence: how likely is it that this has previously been exploited?

        The first two aren’t always obviously correlated; for example, in my IDOR example above, if a system is using UUIDs, discovery is trivial because I may have two provisioned accounts I can use, but the actual exploitation is far from trivial. The author hints at this, but “discoverable == exploitable” is far from a simple equivalence in many scenarios. Now, would I avoid writing up a system using UUIDs because the actual exploitation is difficult? No, of course not, but it would lower the overall severity of the weakness; severity being a function of how likely the issue is, compounded by how impactful it is, per whichever of the various scoring rubrics clients & companies use.
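        The UUID point can be sketched with a toy, entirely hypothetical API: discovering the missing ownership check takes two test accounts, while blind exploitation means guessing a 122-bit random ID:

```python
import uuid

# Toy "API": objects are addressable only by ID -- the IDOR weakness.
store = {}

def create_doc(owner, body):
    doc_id = str(uuid.uuid4())   # 122 bits of randomness
    store[doc_id] = (owner, body)
    return doc_id

def fetch_doc(doc_id):
    # No ownership check: whoever knows the ID can read the doc.
    return store.get(doc_id)

alice_id = create_doc("alice", "secret")
bob_id = create_doc("bob", "other secret")

# Discovery is trivial with two provisioned accounts: "Bob" fetches
# Alice's document by ID and gets no authorization error.
assert fetch_doc(alice_id) == ("alice", "secret")

# Exploitation without a leaked ID is not: guessing a v4 UUID means
# searching a 2**122 space, so blind enumeration is hopeless.
guess = str(uuid.uuid4())
print(fetch_doc(guess))  # (almost certainly) None
```

        Swap the UUIDs for sequential integers and the two likelihoods converge: the same missing check becomes trivially exploitable by just counting.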

        Lastly:

        You could easily spend an entire audit going down the rabbit trail of vulnerable dependency libraries.

        This is usually why companies scope dependencies out of an engagement, beyond first-order, provably exploitable code within an app. For example, sure, a codebase might have a vulnerable jQuery or system library or the like, but if there’s no direct call to it, we’re not going to spend much time looking at it.
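        The “is it actually called?” triage can be sketched as a toy reachability check over the app’s source; the advisory, module, and function names below are all hypothetical, and real dependency scanners do far more than this:

```python
import ast

# Hypothetical advisory: this dependency function is known-vulnerable.
FLAGGED = {("libfoo", "render_unsafe")}

APP_SOURCE = """
import libfoo
libfoo.render_unsafe(user_input)   # direct call -- in scope
libfoo.render_safe(user_input)     # fine -- out of scope
"""

def direct_calls(source, flagged):
    """Return the flagged (module, func) pairs the app calls directly."""
    hits = set()
    for node in ast.walk(ast.parse(source)):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Attribute)
                and isinstance(node.func.value, ast.Name)):
            pair = (node.func.value.id, node.func.attr)
            if pair in flagged:
                hits.add(pair)
    return hits

print(direct_calls(APP_SOURCE, FLAGGED))
# {('libfoo', 'render_unsafe')} -- only the reachable vulnerable call
```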

        1. 2

          Secure-by-default is a super important principle at all levels, from an OS kernel up to UX design.

          The example of React auto-escaping HTML is a good one, and a rule I wish all template engines followed. (I’m grudgingly using one that doesn’t, Inja, because choices are few in C++, and I’m nervous about it.)
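          For engines that don’t escape by default, one common workaround is a wrapper that escapes every substituted value before it reaches the template. A minimal Python sketch of the escape-by-default idea (the function name and template are made up):

```python
import html

def render_comment(template, **values):
    # Escape-by-default: every substituted value is HTML-escaped,
    # mirroring what React/JSX does for interpolated children.
    escaped = {k: html.escape(str(v)) for k, v in values.items()}
    return template.format(**escaped)

payload = "<script>alert(1)</script>"
out = render_comment("<p>{body}</p>", body=payload)
print(out)  # <p>&lt;script&gt;alert(1)&lt;/script&gt;</p>
```

          The key property is that forgetting to escape is no longer possible at a call site; opting *out* of escaping is the explicit, auditable act instead.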

          Always-encrypted network traffic is another, though the design of TLS (conflating encryption with PKI-based authentication) makes this harder than it should be — you either have to bake in support for LetsEncrypt, or default to a self-signed cert and let the client deal with the auth hassles.

          1. 1

            There is TLS with pre-shared keys (TLS-PSK), which is IIRC used by Zabbix, making it very easy to set up.

            1. 1

              I was not aware of that … it looks a bit simpler than running a private CA, but it’s not secure-by-default because you have to install the shared key first, right?

              1. 1

                In the case of Zabbix, I believe there cannot be such a thing as secure-by-default (though it probably depends on the definition), because you have to authorize the “client” as well. Hence the necessity to equip the client with an individual “token”, be it a client certificate, your custom private CA root, or a PSK. Then running a private CA is also not secure-by-default, as you somehow need to make the software aware of your custom CA. But maybe I am missing something? Is secure-by-default just the most secure configuration minus user-specific adjustments, which can be as small as pointing to a CA certificate that should be (additionally) trusted?

          2. 1

            On the vulnerable dependency issue, specifically around critical open source, the Open Source Security Foundation has a Securing Critical Projects Working Group which is trying to identify “critical” open source projects like Log4J, and then work with those projects to improve their security.