Forgive my ignorance, but isn’t the mirrord approach, and in particular the ‘reverse mirror’ that they discuss, a giant gaping security hole?
I thought that part of the point of things like k8s was sealing your infra off in a way that makes attacks more difficult, because outside clients simply do not have access to the pods without going through the officially sanctioned, monitored, and locked-down forms of ingress?
I don’t think the point of k8s-like infra is to specifically _seal off_ the infrastructure, so much as to automate all the things you might otherwise do by hand to scale and manage it. It does offer the sealing, but I see it more as a side benefit than the main point.
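To make the automation point concrete: replica management and autoscaling are basically one-liners. A minimal sketch, assuming a deployment named my-api (a made-up name for illustration):

    # let k8s scale the deployment between 2 and 10 replicas based on CPU
    kubectl autoscale deployment my-api --min=2 --max=10 --cpu-percent=80

That’s the kind of work k8s is actually there to take off your hands; the isolation mostly falls out of the pod/network model rather than being the goal.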
That said, I totally get what you’re saying - opening up the pod “and everything it has access to” sounds like a major attack surface.
I think that’s the consequence of everybody using Kubernetes without fully understanding what it entails. I remember working on services that you “deployed to production” by copying the build artifacts. Then you’d go debug on production as well. I mean, production was only one server. Almost still at the “pet” level (from the pets-cattle-insects meme, or however it went). So you knew your “production”, and visiting it like this was neither unusual nor unjustified.
But in the decades since, I learned to build my stuff so that it’s pretty well tested before production, and the isolation is all at the service boundaries. So if my API had a problem with a service in a cluster, I could be pretty much certain that it was either that service or, more commonly, the configuration between that service and mine.
And with the observability you get almost out of the box with a lot of stuff these days, it’s usually enough to kubectl port-forward or kubectl exec into the pod, confirm your hypothesis, and bug out, without the need for a sophisticated tool like the article describes.
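For what it’s worth, a minimal sketch of that workflow, assuming a deployment named my-api (the name and the /healthz endpoint are made up for illustration):

    # tunnel the pod’s port 8080 to localhost
    kubectl port-forward deployment/my-api 8080:8080
    # then, in another terminal, poke at it directly
    curl -s localhost:8080/healthz

    # or open a shell inside the pod to check config, env, and DNS from the inside
    kubectl exec -it deployment/my-api -- sh

Confirm or kill the hypothesis, exit, and you’re done; no extra tooling in the cluster.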
All that said, I’ve done stupid things in prod 20 years ago, I’ve done them this week, and I’ll probably be doing them 20 years from now. Sometimes it’s just refreshing to SSH into prod and drop that table by mistake, and then dig for backups for the next three days.
mirrord is a dev tool; it’s not intended to be deployed in a production cluster.