Not sure who told the author of this piece that security by obscurity is bad, but what I have always heard is that security through obscurity is simply not to be relied upon. It’s not that you shouldn’t do it, but you should assume it will be defeated.
So if you want to change your SSH port, fine, but don’t leave password authentication enabled and go thinking you’re safe
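For instance, a minimal sketch of the relevant sshd_config lines (the port number is only illustrative):

```
# /etc/ssh/sshd_config -- illustrative values
Port 2222                    # moving off 22 mostly just reduces scanner noise
PasswordAuthentication no    # this is the part that actually matters
PubkeyAuthentication yes
PermitRootLogin no
```

After editing, `sshd -t` validates the file before you reload the service.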
“Security by obscurity is bad” is the line that gets parroted by many who don’t understand it.
There seems to be a consensus that “6!x8GWqufk-EL6tv_A4.E” is a stronger password than “letmein”. The only significant difference I see between these passwords is obscurity.
I wonder if this can be considered an example of “security by obscurity” that is widely considered neither “bad” nor likely to be defeated?
There’s a long history of distinguishing obscure information like passwords or cryptographic keys from obscure methods like encryption algorithms. The key difference, I think, is that the only purpose of the secret information is to be secret, and you can measure its properties in that respect; that’s not true of code that’s meant to be secret, and competing requirements like “needs to run on someone else’s machine” make obscurity an unreliable crutch in many situations.
EDIT: Another key difference is that “obscurity” can be taken as “the information is still present in whatever the adversary can access, it’s just harder to read”, e.g. obfuscated source code in a JavaScript file. That’s also different from a secret like a password, which should be protected by not exposing it at all.
Like most maxims, “Security through obscurity is bad” is an oversimplification, but in my opinion it’s a good rule of thumb to be disregarded only when you know what you’re doing.
I think the “security by obscurity is bad” aphorism is quite a bit narrower than the original meaning: security by algorithmic obscurity is bad because one has to presume that a motivated attacker will be able to identify or acquire the algorithm. Therefore, any additional security from algorithmic obscurity is ephemeral, and it sacrifices the very real benefit of allowing the cryptographic community to examine the algorithm for weaknesses (since weaknesses are often non-obvious, especially to the creator). As such, one could say that it’s a corollary to one of Kerckhoffs’s principles (rephrased by Shannon as simply [assume that] “the enemy knows the system”).
The aphorism has been adopted by those lacking the technical knowledge to understand the full meaning and generalized further than it should be.
The artificial distinction between “secrecy” (which is necessary to protect the key) and “obscurity” (which is generally used to apply to the system) is most important to understanding the aphorism and unfortunately the distinction appears non-obvious to the layman and leads to confusion.
Edit: Ugh, just realized that this is essentially paraphrasing an old Robert Graham blog post: https://blog.erratasec.com/2016/10/cliche-security-through-obscurity-again.html. Also corrected a sentence in which I nonsensically used “security” in place of “obscurity.”
That definition makes sense and clears up something I had been wondering about for a long time. Thanks!
I think to rectify these definitions you need to have an idea of the system under test. The system expects inputs, takes them in, comments on their quality, and is required, when its assumptions are satisfied, to produce trusted output.
Security by obscurity says that the system is more difficult to break if the adversary doesn’t know what it is. This is generally true: at the very least it adds research costs for the adversary, and it may even substantially increase the effort required to mount an attack.
The general maxim is that security by obscurity should not be relied upon. In other words, you should have confidence that your system is still reliable even in the circumstance where your adversary knows everything about it.
So, ultimately, the quality of the password isn’t really about the system. The system could, for instance, choose to reject bad passwords and improve its quality. An adversary who knows the system now knows not to test a certain subset of weak passwords (no chance of success), but the system is still defensible.
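As a toy illustration of that last point (the blocklist and length threshold are hypothetical, not from the article): the check can be completely public and the system remains defensible.

```python
# Toy sketch of a system that rejects weak passwords. Real deployments use
# large breach-derived lists or an estimator like zxcvbn; these values are illustrative.
COMMON_PASSWORDS = {"letmein", "password", "123456", "qwerty"}

def acceptable(password: str) -> bool:
    return len(password) >= 12 and password.lower() not in COMMON_PASSWORDS

# An adversary who reads this code only learns which guesses are pointless;
# the remaining search space is still huge.
assert not acceptable("letmein")
assert acceptable("6!x8GWqufk-EL6tv_A4.E")
```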
The difference is not only obscurity; it’s (quantifiable) cryptographic strength.
Your website uses 256-bit AES, because it’s impossible to brute-force without using more energy than is contained in our solar system. You wouldn’t use 64-bit AES, though. Is the difference that the former algorithm’s key is more obscure?
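The “quantifiable” part is easy to make concrete with back-of-the-envelope arithmetic (a sketch; the character-set sizes are assumptions about how each password was generated):

```python
import math

def entropy_bits(charset_size: int, length: int) -> float:
    """Bits of entropy for a password drawn uniformly at random."""
    return length * math.log2(charset_size)

# "letmein": even modelled as 7 random lowercase letters it is only ~33 bits,
# and as a dictionary phrase it is far weaker than that.
print(entropy_bits(26, 7))    # ~32.9

# A random 21-character password over ~90 printable characters: ~136 bits,
# i.e. in the same league as a symmetric key nobody can brute-force.
print(entropy_bits(90, 21))   # ~136.3
```

Whether you call that difference “obscurity” or “secrecy” is the terminology debate above, but it is measurable.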
An obscure system will be understood, and therefore cracked if its only advantage was obscurity. Passphrase-protected crypto systems are not obscure. Their operation is laid open for all to see, including what they do with passwords. If you can go from that to cracking specific cryptexts, that’s a flaw everyone will admit. However, if you must skip the system entirely and beat a passphrase out of someone in order to break the cryptext, that’s no flaw of the system under discussion. It might be a flaw of some larger system, but I believe it is universally acknowledged that, if you’re beating a passphrase out of someone and will only stop when you get the information you’re looking for or you kill the person you’re beating, the person will almost certainly give the passphrase before they die.
I think I would summarize as “Security with obscurity is a good idea.”
I agree. One should, though, make sure it’s cleanly separated (like in the article), that it doesn’t increase complexity much, and that it doesn’t thereby accidentally create more attack surface. It’s also important to keep a clear mind about this over time and not accidentally end up slowly turning the “with” into a “by”.
These things can and do happen as projects grow, owners switch, focuses change, etc.
I kind of agree with the author, but he used a terrible example. This is the kind of example that gives security through obscurity a bad name. Sure, by running sshd on a different port, you make it a little bit harder for a hacker to find you, but what you’ve actually done is disabled all of the standard UNIX network security stuff that ensures only root can open ports below 1024. Now, anyone with any user access to the box can watch that port and if they notice it’s not responding, they can start up an ssh daemon of their own that can do all kinds of nefarious things.
Congratulations! By running sshd on an untrusted port and using “Security through obscurity”, you’ve just made your server less secure.
What might be better would be running it on 22, but using some firewall trickery to protect 22 and just point like, port 42069 to your port 22 sshd. Best of both worlds.
Yep, or run it on port 222, or 1022, or redirect it at your firewall, there are a number of ways to reduce the log spam that is produced by running sshd on port 22. I’m sure there are valid ways of enhancing your security through making it harder for a random person to find your service (though I can’t think of any off the top of my head), but this is not one of them.
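Something along these lines would do the redirect (an illustrative iptables rule; nftables or a cloud firewall can express the same thing), while sshd itself stays on the privileged port:

```sh
# Redirect incoming TCP on 42069 to the local sshd still listening on 22.
# Direct external access to 22 would still need to be filtered separately.
iptables -t nat -A PREROUTING -p tcp --dport 42069 -j REDIRECT --to-ports 22
```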
You can reserve that port for root running sshd via SELinux.
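On an SELinux system that looks roughly like this (illustrative port number; how strictly other domains are kept off the port depends on the local policy):

```sh
# Tell SELinux that this port belongs to the ssh port type, then point sshd at it.
semanage port -a -t ssh_port_t -p tcp 64323
```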
It’s not so likely anymore that an attacker finding such a host is still using Nmap, because it’s not the fastest tool for that. In this scenario you are still not secure, because the weakest link is the password. So it’s really important to stress the “defense in depth” part mentioned in the article.
To compare it with the “real life example”: those cars are filled with security measures, and there is security all around them. That is why an attacker is unlikely to succeed. If there were no guards, etc., there would be a lot less security. So the equivalent of using (just) security by obscurity would be having unlocked cars, no security personnel, no trained drivers, no secure cars, but X amount of them.
In many cases it is important to think about what you want to protect against. As a basic distinction: targeted or mass attacks. If you worry about the internet’s “background noise” hitting any service exposed to it, then yes, changing your password (or using keys) and changing your port will have some effect. But that’s not usually what you want to protect against. For most people the biggest problem with SSH ends up being someone obtaining your credentials through whatever means, and such an attacker most likely already knows the port, or at least that you are using SSH.
Regarding the other topics:
Obfuscating code in web applications: I think in web applications it is often less effective than in other contexts. In a lot of cases the security-relevant information can be seen by watching the traffic rather than reading the code, and even when the code is not obfuscated the traffic may be the better place to look. Another thing to keep in mind is that bugs caused by obfuscation do happen, yes, even with automated minifiers.
Still, I really like the idea of thinking about security this way. There are a couple of other thoughts I had going in a similar direction. For example, do hashed passwords in most basic applications still make a difference, if people did not re-use their passwords, when the goal is to protect user data? Of course this also depends on the specifics.
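For reference, the kind of hashing in question, as a sketch using Python’s standard library (parameters are illustrative, not a recommendation):

```python
import hashlib, hmac, os

def hash_password(password: str) -> tuple[bytes, bytes]:
    salt = os.urandom(16)
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return salt, digest

def verify(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return hmac.compare_digest(candidate, digest)

# If the user table leaks, the attacker gets salts and digests rather than
# plaintext passwords -- which mostly matters because of password re-use.
```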
I think the most important takeaway from this article is to think about what your goals are and whether they are covered. That doesn’t mean you should ignore good practices, and it doesn’t mean you shouldn’t implement security where you can. But there is a second side: don’t think that just because you built a wall it’s okay to leave the door open, and don’t put months of effort into making the best lock if an attacker can just walk around it. This is related to looking for the weakest link in a setup. Usually it makes sense to follow good practices, also from a psychological perspective: it’s way too easy to feel like you invested a lot of time into a particular part of your security and therefore think the whole system must be a lot more secure now, when it was really about one specific attack vector.
On top of that we live in a time where many things (encryption, etc.) are either turned on by default or very easy to do without adding much overhead or complexity. Sometimes you would have to put a lot of effort in to disable them.
Since it was mentioned: “never roll your own crypto” is also an interesting and frequently misunderstood topic. Various obfuscation tactics are similar to rolling your own crypto. But what I have seen a couple of times is people not adding TLS to a piece of software because it was considered “rolling your own crypto”. I am not talking about implementing a TLS library, and also not about cases where TLS is added through some variation of a separate process (an HTTP proxy, etc.). But that’s a separate topic.
What I want to say by that is that, not just in security, we shouldn’t blindly follow dogmas. I’ve seen many cases of them being misinterpreted in harmful ways. Also, for some (usually not security-related) dogmas there are exceptions, or it is very important to know what exactly is meant by them, and better yet why they exist. I don’t think it’s a good sign if you do something “because it’s best practice” without knowing the why. Of course a lot of these rules exist to push you in the right direction, and you cannot know everything, but if you get into a situation where one becomes relevant, e.g. whether to actually roll your own crypto or simply a design decision, it’s good to know about the rule and try to understand the why. So it’s good to have them in the back of your head to nudge you when something comes up, and then really think, potentially research, and discuss the topic with others. Don’t just blindly follow idioms, if you can help it.
I think the article is a great example of that approach.
I’ve recently been surprised at the low skill level of some attackers. A few years ago, one of the FreeBSD developers’ ssh keys was compromised. Someone logged into the machine that hosts the canonical svn repository. Unlike git, svn does not contain any integrity checks and so they could have tampered with the code and history and it would have been incredibly difficult to detect (we eventually determined that they hadn’t by taking one of the git-svn mirrors and comparing each svn release in that with the version that we got out of svn). That machine had a lot of auditing stuff turned on, including auditdistd, so we had a fairly good idea of what they did: they logged in, tried to run a few things that are in a typical GNU/Linux distro, got command-not-found errors, and gave up. It was almost certainly some automated attack trying to quickly build a bot-net of Linux machines for some nefarious purpose.
The difference between that kind of common attack and what you’d see with a targeted attack is enormous. A skilled targeted attacker would probably not have been able to compromise the audit logs (unless they hadn’t been detected for a while and were able to sneak a back door into the auditdistd code that they now controlled), but they would have been able to spread throughout the network and probably compromise a load of developer machines as well. They’d also have been able to hide in some fairly undetectable ways on any machine that didn’t have auditdistd installed.
20 or so years ago, careful targeted attacks weren’t that different from the kinds of things your machine would be exposed to just by having an IP address (or a floppy drive).
It’s technically correct that obscurity increases the difficulty of an attack, but I think the advice is still good advice. There are a couple of reasons why.
The first is that the Swiss Cheese Model is, in my opinion, fundamentally flawed. It treats security barriers as independent, which is unrealistic - in the language of the metaphor, the arrangement of the slices tends to be correlated. So adding more cheese doesn’t necessarily make it harder to get through, yet you still don’t get that extra cheese for free. I think there are better systems engineering approaches - Engineering A Safer World goes through a lot of the arguments better than I can. I think those arguments are even stronger when we have to engineer for someone intentionally arranging the worst-case scenario.
The second is that, in the context of security-by-obscurity in particular, I think the cost often outweighs the gains. The argument here is that it can be used as an extra layer of defense - it will prevent unskilled attackers from succeeding and may deter skilled attackers who prefer easier targets. Fair enough. But I don’t think it’s a fair generalization to say the cost is low.
In most cases it’s not as simple as changing a port; in the contexts I worry about it most, it’s paying for third-party obfuscation software as part of a build pipeline, or it’s rolling your own algorithms unnecessarily (not just crypto!). The low cost doesn’t hold up there - not only do they come with big price tags in licensing or engineering time, but they increase the costs of debugging and security assessments (and may reduce the quality of both as well). Moreover, those two cases are the ones where the benefits of obfuscation are the weakest; your third party obfuscation library, if it’s popular, is likelier to be targeted for reverse engineering, while your in-house algorithm is likelier to have mistakes that don’t appear in the well-studied public domain stuff.
All that said? I don’t think I disagree with the original article, which is pretty clear that reality is nuanced and the real answer is to think about your use case. But from what I’ve seen in the wild, there’s plenty of obfuscation out there that doesn’t need to be, and I think “Security through obscurity is bad” is good advice in that context. I trust the experts who know what they’re doing to make the right choice; it’s the novices learning the field, the business leaders making decisions based on best practices, and the less enthusiastic programmer who just wants to do their job right and go home for the day that I think need the advice.
Security through obscurity is bad, in that wasting mental space on it is bad. It’s bad in that it results in non-standard practices that will make your codebase harder to maintain.
Yes, you could use a 6-character password that’s likely to be in the top 100 of any single “common phrases” table and switch your ssh port to 64323 and you’re likely never going to get your machine broken into.
But god forbid that every single new employee will have to remember to switch this default ssh port to 64323, and configure every new instance and its firewall rules to adapt to that. And every single application that must connect to a machine through ssh.

Security through obscurity ends up being bad in the same way obscure code ends up being bad. There’s a reason we have standards around security: not because custom solutions couldn’t be better by adding obscurity, but because standards avoid an extra day of onboarding and the forever-occupied mental space dedicated to remembering the quirky security practices used by your team.
I agree, but just to point out the downside: it makes the whole system harder to analyze.
If you ever assume that your system is secure because the attacker does not know X, where X could be derived from the code by deobfuscating it, you are probably making a mistake. Also, obfuscation only has to be broken once; afterwards your secret is out. Oh wait, it’s not really a secret, just another obstacle. But in the end you just want to make the attack expensive enough for an attacker. What if the deobfuscated code is shared with other attackers? Now it’s free for them! But an attacker won’t share his deobfuscated version of the code, because he probably doesn’t want to do free labor.

…the rabbit hole continues
So, yes, you can use obscurity to make an attacker’s life more difficult but you should not rely on it for your system to be secure. Hence, avoid security BY obscurity.
Just build your system to be secure without the obscurity. Then throw it in on top if you want.
There are some odd things in this post, but yeah, if you add a layer of obscurity after your normal defenses it usually doesn’t hurt.
One reason to keep ssh at a port below 1024, which I’d only heard recently, is that an attacker can’t spawn their own backdoored daemon on the same port if they manage to shut down or crash the original one.
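That relies on the classic Unix rule that only root (or, on Linux, a process with CAP_NET_BIND_SERVICE) may bind ports below 1024; a quick sketch of what an unprivileged attacker runs into:

```python
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    s.bind(("0.0.0.0", 22))   # privileged port
    print("bound port 22 -- this process has root-level privileges")
except PermissionError:
    # An unprivileged user cannot squat on 22, but could bind 2222 or 64323.
    print("permission denied: cannot impersonate sshd on a privileged port")
finally:
    s.close()
```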
All security is by obscurity…
Brute-forcing should take an infeasible amount of time, i.e. the solution should be so obscure that it isn’t worth trying to find the key. What is traditionally meant by this aphorism is that you should not rely on non-quantifiable obscurity, i.e. if you cannot prove that your security is effective, then it’s not safe to assume that it is.
Or by active countermeasures (very uncommon in computer security, but entirely normal IRL - see, for instance, armed guards).
It’s not that security by obscurity is bad. It’s rather that you can’t use it to obtain proofs or formalisms of security.
Wasn’t the expression “security by obscurity alone is bad”, or something? Security by obscurity is literally taught and held up, they just call it defense in depth (like the author notes). It absolutely does “work” for varying effects we can argue over.
I think there is danger in arguing that Obscurity Defense X (70% effective) might not stop Alyssa P. Hacker but Obscurity Defense Y (also 70% effective) might. I think if one obscurity defense fails, the attacker can likely defeat them all; it’s likely all or nothing, and I wouldn’t assume an effective middle ground.
From a purely pragmatic perspective, the metaphorical “different SSH port” may be an additional (ultra thin) valid layer of defense.
From a more idealistic perspective: I believe that tested security is good security. Exposing everything to the world forces you to be confident in the actual core security mechanisms. As in, for (a kinda silly) example, publishing your encrypted password manager vault file on your website – only relying on the crypto and not at all on the “most people don’t even have the file” thing – would make you more careful with the master password, encourage you to audit the password manager, and so on.