Cryptocurrencies come with the perfect building block for airgaps: you can sign a transaction on one device, and then broadcast it from another. That’s how hardware wallets work.
This might be a sign that government-based regulation is needed…
The bit about isolation of infrastructure is huge and it’s why I’m a really strong advocate of microservices. You really, really need a process boundary for isolation. Monoliths have the same process minting cookies, handling arbitrary web requests, providing admin interfaces, managing credentials, etc. This is so bad imo. Individually authenticated processes that are separated by even just a container boundary will be so much safer. There’s a complexity cost and I think we still lack ideal tooling here but I do think it’s really the way forward. Using something like AWS Fargate with isolated services that have their own permissions is going to be radically safer than the same service all in one address space and, imo, the complexity isn’t that high (unlike, say, k8s).
For what profile of threat actors does the additional layer of process isolation add significantly more deterrence capability?
It seems like for a threat actor with as much sophistication as in the linked article, this would not apply. Do you have another profile of potential threat actors in mind?
I can kinda see where you’re coming from, but it’s important to remember:
Security isn’t all or nothing.
Defence-in-depth is key.
A threat actor with this much sophistication can supposedly get past anything, right? So I may as well expose all my Windows workstations to the internet and leave RDP open. Certificates? Don’t need those, these guys will just MITM/backdoor Let’s Encrypt. May as well not use secure passwords because they have the GPUs to crack those too :^) /s
For what profile of threat actors does the additional layer of process isolation add significantly more deterrence capability?
All of them, I guess, is the simplest answer. If you want containers as a boundary you need processes. If you want VMs you need processes. Processes are your first step to isolation. This absolutely applies at this level of sophistication - it is extremely hard to break out of a Firecracker VM, and the smaller the components in each VM, the less access you gain when you manage to hop between them.
I’d agree with the above comment that a process under the same OS is not going to provide sufficient deterrence, especially if it’s the same user, which in many cases it is; especially when many default cloud VMs allow you to ‘sudo su’; especially when there is a shell with a half-dozen pre-installed interpreters, hundreds of applications, thousands of libraries, etc.
Once you have local access it’s almost game over.
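That claim is easy to demonstrate on Linux: any process can read /proc/&lt;pid&gt;/environ for another process running as the same user, so per-process secrets give you nothing without a stronger boundary. A minimal sketch (the DB_PASSWORD variable is invented for illustration):

```python
# Linux-only sketch: two processes under the same user do not isolate
# secrets from each other -- any process may read /proc/<pid>/environ
# for processes it could ptrace, which includes same-user processes.
import subprocess
import sys
import time

# "Victim" service holding a secret in its environment
# (the variable name is made up for illustration).
victim = subprocess.Popen(
    [sys.executable, "-c", "import time; time.sleep(30)"],
    env={"DB_PASSWORD": "hunter2"},
)
time.sleep(1)  # give the child a moment to exec

# "Attacker" process running as the same user lifts the secret.
with open(f"/proc/{victim.pid}/environ", "rb") as f:
    stolen = f.read().split(b"\0")

victim.kill()
victim.wait()
print(b"DB_PASSWORD=hunter2" in stolen)  # the secret leaked
```

Everything running as one user, in one OS, is effectively one trust domain - which is exactly why the user, container, or VM boundary matters.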
However, with this next comment I also agree. If you wrap each service in a small VM that can run one and only one program, then allow your cloud provider to ensure security at the VM layer, you now have a model that is much more resilient. The good news is that this is trivial to do with unikernels now, and you don’t need to rely on highly insecure containers (eg: fargate) to perform your scheduling, as the clouds will happily take care of the orchestration piece for you.
My point is that if you want isolation you need separate processes. Everything starts with that. “Level 1” is going to be separate processes with their own env vars/credentials, so that an injection in one service (e.g. SQL injection, or confused-deputy type issues - not full RCE) doesn’t get you full access to everything. You can get another big bump by wrapping in containers, another huge bump wrapping in Firecracker, etc.
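A “Level 1” split like this can be sketched in a few lines: a supervisor hands each service only its own credential, so an injection that dumps one service’s environment never sees the other’s. (The service names and variables here are invented for illustration; /usr/bin/env stands in for the real service binaries.)

```python
# Minimal sketch of "Level 1" isolation: each service process gets only
# the credentials it needs, so an injection that dumps one service's
# environment does not expose the other service's secrets.
import subprocess

# Hypothetical per-service credentials (names invented for illustration).
CREDS = {
    "cookie-minter": {"COOKIE_SIGNING_KEY": "k1"},
    "api-handler":   {"API_DB_PASSWORD": "k2"},
}

def spawn(service: str) -> subprocess.CompletedProcess:
    """Run a service with only its own credentials in its environment."""
    return subprocess.run(
        ["/usr/bin/env"],        # stand-in for the real service binary
        env=CREDS[service],
        capture_output=True,
        text=True,
    )

minter_env = spawn("cookie-minter").stdout
print("COOKIE_SIGNING_KEY" in minter_env)   # its own secret is present
print("API_DB_PASSWORD" in minter_env)      # the other service's is not
```

This bounds the blast radius of an injection, but per the point above it does nothing against an attacker with same-user local access - that takes a container, VM, or user boundary on top.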
highly insecure containers (eg: fargate)
Fargate is not highly insecure, nor is it a container. It runs containers inside Firecracker virtual machines, and Firecracker is a highly hardened VM.
I think recommending unikernels is kind of silly at this point tbh. No one runs unikernels; Firecracker/containers are far more achievable and provide a massive improvement over everything being in one memory space.
You get zero bump by wrapping in a container. Containers do not contain. In fact because of this false belief you probably get a negative bump. K8S makes this situation worse by extending the traditional vm boundary out across many instances.
Firecracker provisions VMs - it is not a VM itself. Also, unikernels and Firecracker are not competitive. Plenty of people run unikernels inside of Firecracker.
You get zero bump by wrapping in a container. Containers do not contain. In fact because of this false belief you probably get a negative bump. K8S makes this situation worse by extending the traditional vm boundary out across many instances.
Totally false. Containers absolutely contain and this myth that they don’t is just so silly. Namespaces contain. DAC contains. Seccomp contains. Containers are built on top of security boundaries.
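Seccomp in particular is straightforward to demonstrate as a kernel-enforced boundary: a process that enters SECCOMP_MODE_STRICT may only read, write, and exit, and its next disallowed syscall gets it killed. A minimal Linux-only sketch using prctl(2) via ctypes:

```python
# Linux-only sketch: seccomp strict mode is a kernel-enforced boundary.
# A process that enables SECCOMP_MODE_STRICT may only use read, write,
# and exit; its next disallowed syscall is answered with SIGKILL.
import signal
import subprocess
import sys

CHILD = r"""
import ctypes
libc = ctypes.CDLL(None, use_errno=True)
PR_SET_SECCOMP = 22      # from <linux/prctl.h>
SECCOMP_MODE_STRICT = 1  # from <linux/seccomp.h>
assert libc.prctl(PR_SET_SECCOMP, SECCOMP_MODE_STRICT, 0, 0, 0) == 0
open("/etc/hostname")    # open(2) is not on the allowlist -> SIGKILL
"""

result = subprocess.run([sys.executable, "-c", CHILD])
print(result.returncode == -signal.SIGKILL)  # kernel killed the child
```

Real container runtimes use the more flexible SECCOMP_MODE_FILTER with a BPF allowlist rather than strict mode, but the enforcement mechanism is the same.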
It is literally a VM. It is specifically designed to be a hardened VM. Here is a blog post the company I founded put out about exploiting Firecracker. https://web.archive.org/web/20221130205026/https://www.graplsecurity.com/post/attacking-firecracker
Firecracker provisions VMs - it is not a VM itself
Your linked article states “This blog post covers attacking a vulnerability in Firecracker, an open source micro-virtual machine (microVM) monitor”, which is correct - it’s a VMM, not a VM.
This is a silly distinction in this context. It manages the memory of the underlying guest, it provides device management (a huge attack surface), etc. It uses KVM to offload the virtual machine emulation/execution, yes - who cares? This changes nothing about my statements, and it should be clear that Firecracker is certainly not a “highly insecure container”.
Since this is flagged as “off-topic”, I’ll just quote directly from the article:
Infrastructure Segmentation: Critical operations like transaction signing require both physical and logical separation from day-to-day business operations. This isolation ensures that a breach of corporate systems cannot directly impact signing infrastructure. Critical operations should use dedicated hardware, separate networks, and strictly controlled access protocols.
A major recommendation here is for isolation exactly for the purposes I have described - isolation of critical “mints a cookie” service from “handles an API call” service.
Or better yet, don’t even use cryptocurrency and make yourself a target!