One of the things I liked about the Orange Book and Common Criteria ratings was that you could represent this with the higher ratings. Each step up required fewer defects, more mitigations, better recovery, and so on. If not bulletproof, you were at least assured you’d be cleaning up fewer messes, and smaller ones, most of the time.
Then again, I argued for this exact dichotomy in the next variant of security certification: the whole system was “Insecure and Unevaluated” until each component was analyzed and rated, and the overall system rating was that of the weakest component in the TCB. Attackers go for the low-hanging fruit. They throw large numbers of people, over time, at the stuff that’s harder to get. They consistently get exploits on the popular tech. A kit with those gets them read or write access to the system. Preventing that is the main point of system-level security, so calling a system insecure if it can’t block that is reasonable.
So, let’s have a quick look at your claims in terms of that:
“privilege separation (such as sandboxing), which either reduces the assets available to an attacker or forces them to chain additional vulnerabilities together to achieve their goals”
Bugs are put into mainstream software so fast that “chain additional vulnerabilities” is close to meaningless except in the few cases where that’s actually hard. In one example, I read about a high-school hacker chaining 5-6 bugs together in Chrome to build a working exploit. It didn’t take long, either. CompSci people regularly throw their tools at code and keep finding more bugs; the more mainstream version of that is fuzzing. In high-assurance security, you had to take measures to ensure each root cause you cared about was provably unable to happen. Quick examples are full validation of input, safety checks on anything that can overflow, checking for temporal errors, and source-to-object validation if you’re worried about the compiler screwing things up.
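To make the first of those concrete, here’s a minimal sketch of the “full validation of input” idea in Python. The record format and size limit are made up for illustration; the point is that every field is checked against explicit bounds before anything else touches it:

```python
MAX_PAYLOAD = 4096  # assumed protocol limit, purely illustrative


def parse_record(buf: bytes) -> bytes:
    """Parse a length-prefixed record, rejecting anything out of spec."""
    if len(buf) < 4:
        raise ValueError("truncated header")
    declared = int.from_bytes(buf[:4], "big")
    if declared > MAX_PAYLOAD:
        raise ValueError("declared length exceeds protocol limit")
    if declared != len(buf) - 4:
        raise ValueError("declared length does not match payload size")
    return buf[4:]
```

In a memory-unsafe language, the same style of check is what keeps the “anything that can overflow” class of root causes from being reachable at all.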
“asset depreciation (such as hashing passwords), which reduces the value of the assets an attacker can access”
This is an example of effective security. Specifically, it’s data security that makes the data itself less valuable when stored on an insecure system. The security dependency is whatever puts the data into that form: it can be subverted either to disable that protection or to make it look like the protection is there when it actually isn’t. Alternatively, the attacker can hope to get something out of what they stole, even if it’s only a fraction of it. In practice, the combination of sophistication and risk is high enough that they rarely, if ever, attempt the subversion; they almost always go for the lower-value data itself. And this only works if the defender is doing it correctly, rather than rolling their own protection, using an obsolete scheme, or making some other screwup on the crypto side.
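As a concrete sketch of that kind of asset depreciation, here’s salted password hashing with a memory-hard KDF using only the Python standard library. The scrypt parameters are illustrative, not a tuned recommendation:

```python
import hashlib
import hmac
import secrets

# Illustrative memory-hard KDF parameters; tune for your own hardware and latency budget.
N, R, P = 2**14, 8, 1


def hash_password(password: str) -> tuple[bytes, bytes]:
    salt = secrets.token_bytes(16)
    digest = hashlib.scrypt(password.encode(), salt=salt, n=N, r=R, p=P)
    return salt, digest


def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.scrypt(password.encode(), salt=salt, n=N, r=R, p=P)
    return hmac.compare_digest(candidate, digest)  # constant-time comparison
```

If the table of salts and digests leaks, the attacker still pays the full KDF cost per guess per account, which is exactly the “less valuable when stored on an insecure system” property.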
“exploit mitigations (such as Control Flow Integrity or Content Security Policies), which make exploiting vulnerabilities harder, or impossible”
Which themselves get bypassed a lot. The actual result is a system that’s still insecure, just against a smaller pool of attackers or over a different time window. That can have value. It’s still insecure, though, in the sense that you have to assume the data is going bye-bye.
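For the CSP example specifically, here’s a rough sketch (my own illustration, hypothetical WSGI middleware) of what attaching a restrictive policy looks like; a single weakening such as 'unsafe-inline' in script-src is the kind of thing that hands much of the mitigation back to the attacker:

```python
# Hypothetical WSGI middleware that attaches a restrictive Content Security Policy.
CSP = "default-src 'none'; script-src 'self'; style-src 'self'; img-src 'self'"


def with_csp(app):
    def wrapped(environ, start_response):
        def start_response_with_csp(status, headers, exc_info=None):
            headers = list(headers) + [("Content-Security-Policy", CSP)]
            return start_response(status, headers, exc_info)
        return app(environ, start_response_with_csp)
    return wrapped
```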
“detection and incident response, which allow identifying successful attackers, limiting the window of compromise”
The system is still insecure in this case. Detection and response are a separate topic in security, with their own methods, cat-and-mouse games, people issues, and so on. Definitely important. It doesn’t change the fact that their insecure system got compromised, with data possibly working its way outward, or attacks inward, until they notice the breach. Then they have to do something about it, without disrupting operations, while the attack is ongoing. In many cases, they’d have been better off with a system that was secure to begin with, or the closest one can get to it.
So, none of these counter the System is Insecure mindset if you’re using systems highly likely to have exploitable vulnerabilities (most are). Two of them can mitigate some or all of the damage in some contexts. They should definitely be done. They should also be prominently noted in any description of overall security posture to highlight the benefits. You’re still building on insecure systems instead of highly-secure alternatives. Presumably, you get benefits out of that which justify it. It doesn’t change what’s happening, though.
Side note: studying this stuff so long has made me want to do away with the word “secure” entirely. The impression is total protection. The reality is always highly context-dependent. Taxonomies of threats, with a solution’s responses and their strength, make more sense. I’ve even seen it made easy enough for lay people to follow.
studying this stuff so long has made me want to do away with the word “secure” entirely. The impression is total protection
From an actual software development standpoint, security engineering is sort of a Maslow’s (Phrack’s?) Hierarchy of Needs for me. It’s all about building more moats in networked applications these days. My list looks something like this:
First, make sure that your network communication is secure. If your users are sending their credentials over anything insecure, or even TLS without certificate path validation, why bother? This is basic sysadmin stuff. (There’s a sketch of this baseline after the list.)
Next, ensure your application’s API footprint (yes, the whole thing, web server and all) doesn’t misuse user input. This is super basic OWASP Top 10 stuff and probably where most people screw up. Adopt development processes where your developers RTFM. (See the parameterized-query sketch after the list.)
Do a threat model for at least one part of your application. Seriously. They’re not that hard.
(interesting stuff is below this line)
Next, assuming you have Apache indexes disabled, you aren’t SQL injecting yourself, and you’re at least hashing passwords, make sure you’re using the right crypto primitives correctly. This requires some knowledge of crypto. (See the AEAD sketch after the list.)
Look into audit logging to ensure that important user actions are being logged. This requires knowledge of your user or customer. (See the logging sketch after the list.)
Bake more paranoia into your application. Depending on what you’re making, this may be anti-cheat or attestation.
… etc
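For the first item, a minimal sketch of the TLS baseline using the Python standard library (the hostname is hypothetical). ssl.create_default_context() turns on certificate-chain and hostname verification by default; the classic screwup is turning them off:

```python
import socket
import ssl

HOST = "example.com"  # hypothetical endpoint

# create_default_context() verifies the certificate chain and hostname by default.
ctx = ssl.create_default_context()
# The classic mistake looks like this (don't do it):
#   ctx.check_hostname = False
#   ctx.verify_mode = ssl.CERT_NONE

with socket.create_connection((HOST, 443)) as sock:
    with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
        print(tls.version(), tls.getpeercert()["subject"])
```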
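For the user-input item, the canonical example is parameterized queries. A minimal sqlite3 sketch (table and column names made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, email TEXT)")


def find_user(name: str) -> list:
    # Parameter binding: the driver treats `name` strictly as data, never as SQL.
    return conn.execute("SELECT email FROM users WHERE name = ?", (name,)).fetchall()

# The screwup version builds the query with string formatting:
#   conn.execute(f"SELECT email FROM users WHERE name = '{name}'")
```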
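For the crypto-primitives item, “the right primitives used correctly” usually means something like an AEAD with a fresh nonce per message rather than a hand-rolled construction. A sketch using the widely used cryptography package (assuming it’s available; key handling is deliberately simplified):

```python
import os

from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)  # in real life, store/derive this properly
aead = AESGCM(key)


def encrypt(plaintext: bytes, associated_data: bytes = b"") -> bytes:
    nonce = os.urandom(12)  # fresh 96-bit nonce per message
    return nonce + aead.encrypt(nonce, plaintext, associated_data)


def decrypt(blob: bytes, associated_data: bytes = b"") -> bytes:
    nonce, ciphertext = blob[:12], blob[12:]
    return aead.decrypt(nonce, ciphertext, associated_data)  # raises if tampered with
```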
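And for the audit-logging item, a minimal sketch with the standard logging module. The event names and fields are made up; the point is one structured line per security-relevant action:

```python
import json
import logging

audit = logging.getLogger("audit")
handler = logging.FileHandler("audit.log")
handler.setFormatter(logging.Formatter("%(asctime)s %(message)s"))
audit.addHandler(handler)
audit.setLevel(logging.INFO)


def audit_event(actor: str, action: str, target: str) -> None:
    audit.info(json.dumps({"actor": actor, "action": action, "target": target}))


audit_event("alice", "password_change", "user:alice")
audit_event("bob", "export_report", "report:42")
```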
The basic stuff is what everyone screws up. If your development team isn’t competent enough to get most of the basics under control (or even understand what the relevant problems are), they won’t be able to even reason about the more interesting stuff.
None of these tactics remove or prevent vulnerabilities, and they would therefore be rejected by a “defense’s job is to make sure there are no vulnerabilities for the attackers to find” approach. However, these are all incredibly valuable activities for security teams, and they lower the expected value of trying to attack a system.
I’m not convinced. “Perfect” is an overstatement for the sake of simplicity, but effective security measures need to be exponentially more costly to bypass than they are to implement, because attackers have much greater resources than defenders. IME all of the examples this page gives are too costly to be worth it, almost all of the time: privilege separation, asset depreciation, exploit mitigation and detection are all very costly to implement while representing only modest barriers to skilled attackers. Limited resources would be better expended on “perfect” security measures; someone who expended the same amount of effort while following the truism and focusing on eliminating vulnerabilities entirely would end up with a more secure system.
You would end up with a more secure system against attackers with fewer resources. For example, you can make your system secure against all the common methods used by script kiddies, but what happens when a state-level actor is attacking your system? In that case, and as your threats get more advanced, I agree with the article: at higher threat levels it becomes a problem of economics.
You would end up with a more secure system against attackers with fewer resources. For example, you can make your system secure against all the common methods used by script kiddies, but what happens when a state-level actor is attacking your system?
I think just the opposite actually. The attitude proposed by the article would lead you to implement things that defended against common methods used by script kiddies (i.e. cheap attacks) but did nothing against funded corporate rivals. Whereas following the truism would lead you to make changes that would protect against all attackers.
The attitude proposed by the article would lead you to implement things that defended against common methods used by script kiddies (i.e. cheap attacks) but did nothing against funded corporate rivals.
That’s not what I understand from this article. The attitude proposed by the article should, IMO, lead you to think of the threat model of the system you’re trying to protect.
If it’s your friend’s blog, you (probably) shouldn’t have to consider state actors. If it’s a stock exchange, you should. If you’re Facebook or Amazon, your threat model is not the same as Lobsters’ or your sister’s bike repair shop’s. If you’re a politically exposed individual, exploiting your home-automation Raspberry Pi might be worth more than exploiting the same system belonging to someone who isn’t a public figure at all.
Besides that, I disagree that all examples are too costly to be worth it. Hashing passwords is always worth it, or at least I can’t think of a case where it wouldn’t be.
To summarize with an analogy: I don’t take the same care of my bag when my laptop (or other valuables) is in it as when it only contains my water bottle, and Edward Snowden should care more about the software he uses than I do about the software I use.
Overall I really like the way of thinking presented by the author!
Whereas following the truism would lead you to make changes that would protect against all attackers.
Or it could mess with your sense of priorities, making all vulnerabilities look equally important (“let’s just go for the easier mitigations”) rather than evaluating them by the cost of the attack itself.
It’s important to acknowledge that it’s somewhat counterintuitive to think about the actual parties attempting to crack your defenses. It requires more mental work, in a world where people assume they can get all the info they need just by reading their codebase & judging it on its own merits. It requires methodical, needs-based analysis.
The present mentality is not a pernicious truism; it’s an attractive fallacy.
IME all of the examples this page gives are too costly to be worth it, almost all of the time: privilege separation, asset depreciation, exploit mitigation and detection are all very costly to implement while representing only modest barriers to skilled attackers.
How do you figure it’s too costly? If anything, all these things are getting much easier as they become primitives of deployment environments and frameworks. Additionally, there are services out there that will scan for dependency vulnerabilities if you give them a Gemfile, or access to your repo.
Limited resources would be better expended on “perfect” security measures; someone who expended the same amount of effort while following the truism and focusing on eliminating vulnerabilities entirely would end up with a more secure system.
Perfect it all you want. The weakest link is still an employee who is hung over and provides their credentials to a fake Gmail login page. (Or the equivalent fuck-up.) If anything, what’s costly is keeping on your employees not to take shortcuts, to stay alert to missing access cards, rogue network devices in the office, and badge surfing, and to not leave their assets lying around.
If anything, all these things are getting much easier as they become primitives of deployment environments and frameworks.
I’d frame that as: deployment environments are increasingly set up so that everyone pays the costs.
The weakest link is still an employee who is hung over and provides their credentials to a fake Gmail login page. (Or the equivalent fuck-up.)
So fix that, with a real security measure like hardware tokens. Thinking in terms of costs to attackers doesn’t make that any easier; indeed it would make you more likely to ignore this kind of attack on the grounds that fake domains are costly.
In my experience there are two kinds of groups: one caring only about the potential of mitigations and the “tactics” laid out in the article, and the other focusing only on the existence of vulnerabilities.
By “existence of vulnerabilities” I don’t mean statistics about how many vulnerabilities have been found, but limiting factors, like reducing code size, proofs, etc.
On both sides there seem to be standard arguments about the other which often hold true, at least at their core; for example, that even with very good mitigations you should still “fix your bugs”.
I really enjoyed the comparison with economics. There are a lot of parallels. There are trade-offs, but also the reminder that these things are tools, not religions or ideologies: one shouldn’t make extreme arguments that may contain a lot of truth yet completely disregard the reality we live in, or the reason security engineering (or an economy) exists in the first place.
To not get too political: on the security side you can go to the extreme of not having a service at all, or, slightly less extreme, of having no network connection or a service that’s never started. The most secure bank is probably one without a network connection or assets, or a bank that isn’t a bank. However, the reason people care about banking security is that they want banking in the first place; security isn’t an end in itself. This is why I see it matching up with (financial) economics.
In a way, this is what makes OpenBSD so interesting, for example: it’s secure, but it’s also a general-purpose operating system that exists in real life and runs your favorite browser, in stark contrast to various highly secure operating systems that never manage to find their way out of academia.
Or it could mess with your sense of priorities, making all vulnerabilities look equally important (“let’s just go for the easier mitigations”) rather than evaluating them by the cost of the attack itself.

If you’re thinking about “mitigations”, you’re already in the wrong mentality: the one the truism exists to protect you against.
Alex is one of the most brilliant people in the IT industry, and he speaks the truth.