There’s a lot of “should” in this article and very little argumentation. This distinction also seems arbitrary:

Of course you cannot do that with everything, you cannot read all the source code for the kernel of the operating system you’re running, you cannot read all the code that makes up the compiler or interpreter you’re using, but that is not the point at all, of course some level of trust is always required. The point is that you need to do it when you’re dealing with code you are writing and importing!
This is impractical for all but the most trivial projects: anyone fancy auditing OpenSSL for me?
Tools like Snyk or Dependabot (off the top of my head) are necessarily imperfect, but they can take a lot of the burden away. If the author objects to this approach, I would suggest they explain why and/or push for better tooling.
In particular, SolarWinds is a bad example to choose, as the backdoored code was delivered as a closed, signed update from a trusted vendor. That’s not code you are “writing or importing”: surely it falls under the (slightly vague) area of things you should be able to trust, according to the author?
It’s a bad example from the other side, too: as far as we know, the hackers broke into SolarWinds’ network and changed the code directly, not by providing some dependency the devs didn’t read.
This domain should be banned. There are so many flamebait titles (and so much flamebait content) coming from that author that I’m really sick of it. Significantly worse than ddevault.
I beg to differ - this author has valid points (if sometimes not so well-reasoned) and is not as rude as ddevault used to be.
You can, using capability security, limit your vulnerability to code without reading it.
https://github.com/dckc/awesome-ocap https://en.m.wikipedia.org/wiki/Capability-based_security
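The linked pages explain the idea in depth; here is a minimal sketch in C of the same principle, using the one capability Unix already has: the file descriptor. The hypothetical untrusted routine word_count() is handed a read-only descriptor and nothing else, rather than a path plus implicit access to the whole filesystem. (Plain Unix doesn’t actually enforce the discipline, since the routine could still call open() itself; capability systems like Capsicum, seL4, or CHERI exist to close exactly that hole.)

```c
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

/* "Untrusted" code: the only authority it receives is one read-only
 * file descriptor. It gets no path, no write access, and no handle
 * to anything else. */
static long word_count(int fd) {
    char buf[4096];
    ssize_t n;
    long words = 0;
    int in_word = 0;
    while ((n = read(fd, buf, sizeof buf)) > 0) {
        for (ssize_t i = 0; i < n; i++) {
            if (buf[i] == ' ' || buf[i] == '\n' || buf[i] == '\t')
                in_word = 0;
            else if (!in_word) {
                in_word = 1;
                words++;
            }
        }
    }
    return words;
}

int main(void) {
    /* The trusted caller decides exactly which authority to delegate:
     * one file, read-only. */
    int fd = open("/etc/hostname", O_RDONLY);
    if (fd == -1) {
        perror("open");
        return 1;
    }
    printf("%ld words\n", word_count(fd));
    close(fd);
    return 0;
}
```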
I believe this is definitely the correct way to go about securing our devices. There is just too much code to audit, and to make matters worse, people routinely run code on their machines that is actively malicious (trying to spy on their users). If you consider that every line of code can contain a bug that may be abused, we’re in a world of hurt if we keep doing what we’re doing.
If you think about it, this is just a continuation of what we’ve been doing to secure things - remember back in the day when Windows ran everything at the same privilege level and any stupid application could cause a BSOD? We Linux and Unix users were pretty smug about the better security our systems offered. But we’ve been mostly stagnant while Windows caught up, and the Mac caught up in one fell swoop with the switch to Darwin/OS X.
At least the OpenBSD folks are still trying to push the envelope, and their pledge(2) system comes somewhat close to capabilities. Unfortunately, it requires one to trust the software itself, whereas hard capabilities don’t require such trust. In this day and age, a model where you can trust the software itself is not really sustainable anymore.
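For the unfamiliar, a minimal sketch of what pledge(2) looks like (OpenBSD-only, so it won’t build elsewhere). It also shows the trust problem described above: the restriction exists only because the program’s own code chooses to request it.

```c
#include <err.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
    /* The program promises to use only stdio-like facilities from
     * here on; the kernel kills the process if it breaks the promise. */
    if (pledge("stdio", NULL) == -1)
        err(1, "pledge");

    printf("plain stdio still works\n");

    /* Opening a file here, e.g. fopen("/etc/passwd", "r"), would now
     * abort the process, because the "rpath" promise was not made. */
    return 0;
}
```

Contrast with the descriptor-passing sketch earlier in the thread: there the authority is granted from outside, so the untrusted code never has to be trusted to restrict itself.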
The various underhanded <language> contests really threaten the conclusions about what you can trust here. You’re still better off reviewing the code than not reviewing it.
This is the less common security talk: “You are screwed. In theory, you could not be screwed. In practice, you are screwed.”
There is a space for a branded software review federation. On the socialist end, a major government could audit and put its seal on certain releases of software. The amount of effort is great, the odds of finding security flaws are low, and the likelihood of bad actors is high.
This is the same set of challenges that faces Wikipedia on a daily basis. There, as here, the solution is to keep all the opposing points of view and allow radically different levels of scrutiny on different topics.