The guide suggests using “a temporary SSH key” to allow admin access into production systems, whereas Ops engineers and SREs often prefer immutable infrastructure specifically to disallow the use of SSH, which helps with reliability (and cuts off a tempting mechanism for attackers).
As another example, the document cautions against “allowing an adversary to use backdoors within the remote environment to access and modify source code within an otherwise protected organization infrastructure.” This dismisses the last 30+ years of source code management (SCM) systems.
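To make that point concrete: modern SCM systems are content-addressed, so silently modifying source in the repository is detectable by design. A minimal sketch of the idea, using git's actual blob hashing scheme (SHA-1 over a `blob <size>\0` header plus the file bytes); the file contents here are made up for illustration:

```python
import hashlib

def git_blob_hash(content: bytes) -> str:
    """Hash file content the way git stores blobs: SHA-1 over a
    'blob <size>\\0' header plus the bytes. Any change to the
    content changes the object id, and with it every tree and
    commit id built on top of it."""
    header = b"blob %d\x00" % len(content)
    return hashlib.sha1(header + content).hexdigest()

original = b"print('hello')\n"
tampered = b"print('hello')  # plus a backdoor\n"

# Tampered content can never keep the original object id.
assert git_blob_hash(original) != git_blob_hash(tampered)
```

This is exactly what `git hash-object` computes, which is why a backdoor inserted into the repo shows up as a different commit hash rather than slipping in unnoticed.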
“Emergency access provisioning” for SSH keys is actually a good idea for debugging production-only bugs in immutable infrastructure though, especially if the tooling flags the machine as “tainted, needs reinstall” and encourages you to take it off the load balancer before it adds your key.
Yes, this is a juicy target for hackers and “insider risk” infiltrators. Just add good logging…
Edit: I see the article actually mentions this later with https://segment.com/blog/access-service/ .
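The ordering described above matters: drain first, taint second, key last, with everything audit-logged. A toy sketch of that break-glass flow; all names (`Host`, `grant_emergency_access`) are hypothetical, not any real tool's API:

```python
import logging
from dataclasses import dataclass, field

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("break-glass")

@dataclass
class Host:
    name: str
    in_load_balancer: bool = True
    tainted: bool = False
    authorized_keys: list = field(default_factory=list)

def grant_emergency_access(host: Host, engineer: str, pubkey: str, reason: str) -> Host:
    """Break-glass flow: audit first, drain the host, mark it for
    reinstall, and only then install the temporary key."""
    log.info("emergency access: %s on %s (%s)", engineer, host.name, reason)
    host.in_load_balancer = False        # stop serving traffic first
    host.tainted = True                  # flag: wipe and reimage after debugging
    host.authorized_keys.append(pubkey)  # the key goes on last
    return host
```

Because the host is flagged tainted before the key lands, nothing a human (or attacker) does in that shell survives the mandatory reimage.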
This is directly referencing the SolarWinds fiasco, where attackers compromised the build machine (singular!) and had it swapping out code before every build.
Of course, this is easily solved with… immutable infrastructure, with no shell access, as the article recommends.
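A related mitigation for the single-build-machine failure mode is reproducible builds cross-checked across independent, hermetic builders: if one machine is compromised, its artifact digest diverges. A small sketch of that check, with made-up builder names and artifact bytes:

```python
import hashlib

def digest(artifact: bytes) -> str:
    """Content digest of a build artifact."""
    return hashlib.sha256(artifact).hexdigest()

def cross_check(builds: dict) -> set:
    """Given artifacts from independent, hermetic builders, return the
    set of distinct digests. With reproducible builds there should be
    exactly one; a compromised builder shows up as a second digest."""
    return {digest(blob) for blob in builds.values()}

builds = {
    "builder-a": b"compiled output",
    "builder-b": b"compiled output",
    "builder-c": b"compiled output with implant",  # the compromised machine
}
assert len(cross_check(builds)) > 1  # mismatch -> hold the release and investigate
```

The point is that no single machine is trusted with the final word on what the artifact is, which is what made the singular build machine such a valuable target.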
Bravo, you have more patience than me, thank you for stating the case for sanity. These advisories are hard to take seriously considering the incentives… all I see is another generation of people who earned trust and used it to slow down progress.
A lot of the original document (“Securing the Software Supply Chain – Recommended Practices Guide for Developers.”) is pure hand-waving, and seems authored by people with little to no real-world enterprise software development experience. My head hurt skimming over it! :-/
I believe the real benefits of a security program come from moderation rather than absolutist positions. Paranoid positions only serve as FUD, without providing any meaningful insight into the security properties developers are actually looking for. Needless to say, paranoid security people are not well liked by engineering teams! :-)
Love the piece.
There is one thing the author dismisses where I think he was wrong.
The suggestion seems to be to have dedicated VMs with development environments on them, local to your laptop. Your laptop should obviously have internet access so you can do your job, but maybe the environment running or writing your code doesn’t need to be connected to the internet.
It’s an interesting suggestion that could work in your environment. It could separate your Windows environments from your Linux deploy environments, and network segregation there would force you to be able to test each part in isolation from the wider environment. It could be good.
Could.
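The split being proposed can be expressed as an egress policy: the laptop keeps general internet access while the dev and deploy VMs only reach internal mirrors. A toy policy evaluator, with all host and destination names invented for illustration:

```python
# Hypothetical egress rules: laptop gets the internet, VMs get an
# allowlist of internal services only.
POLICIES = {
    "laptop":    {"allow": ["*"]},
    "dev-vm":    {"allow": ["mirror.internal", "git.internal"]},
    "deploy-vm": {"allow": ["artifacts.internal"]},
}

def may_connect(host: str, destination: str) -> bool:
    """True if the host's egress allowlist permits the destination."""
    allowed = POLICIES[host]["allow"]
    return "*" in allowed or destination in allowed

assert may_connect("laptop", "pypi.org")          # you can still do your job
assert may_connect("dev-vm", "mirror.internal")   # packages via internal mirror
assert not may_connect("dev-vm", "pypi.org")      # no direct internet from the VM
```

In practice this would live in firewall or hypervisor network rules rather than application code, but the testability it buys is the same: each segment can be exercised in isolation.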