I know the post is satire, but I can’t help but kick at my pet peeve.
No, you still need virtualization, because containers don’t provide a full security story just yet. So if you want to run anything in a multi-tenant environment, you need to make sure you can’t escape the sandbox.
The unfortunate thing is that jails on FreeBSD and zones on Solaris/Illumos have long provided a more secure environment for containers than Linux has. It’s a shame Linux is winning the public mindshare here, because it has mostly been playing catch-up. SmartOS (an Illumos derivative) actually runs its virtualisation solution inside a zone, because the zone provides better security. With LX-branded zones you can run a Linux executable in a container running on Illumos, without the cost of virtualisation.
Would you mind going into the difference between jails, zones, and chroot or whatever under Linux?
I was at a meetup last night talking about Docker, and I’m not really that convinced.
Security was a design principle of Solaris Zones. In Docker it’s an afterthought.
Since Docker uses the Linux kernel’s containerization primitives, security issues are probably not with Docker, but with the kernel, no?
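Those kernel primitives are visible directly on a Linux box: every process’s namespaces (pid, net, mnt, and so on) appear as entries under /proc/self/ns. A small Python peek at that directory, purely illustrative (it returns an empty list on non-Linux hosts, where /proc/self/ns doesn’t exist):

```python
import os

def my_namespaces() -> list:
    """List the kernel namespaces the current process lives in.

    These namespace primitives (plus cgroups) are what Docker builds on.
    On non-Linux systems /proc/self/ns does not exist, so we just
    return an empty list rather than raising.
    """
    ns_dir = "/proc/self/ns"
    if not os.path.isdir(ns_dir):
        return []
    return sorted(os.listdir(ns_dir))
```

On a typical Linux host this prints entries like `mnt`, `net`, `pid`, `uts` — exactly the isolation boundaries a container is made of.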
I think the point about Docker is portable instances rather than being an ideal jail/containerization.
I don’t know how to box a FreeBSD jail up into a portable image that can be redeployed at will. Do you? If you think you could just hack that up, you’ve now taken on all the problems Docker/Rocket solve.
If you use ezjail on FreeBSD, it is actually pretty easy: ezjail-admin archive spits out a tarball that can be copied to another server, where you would run ezjail-admin restore or ezjail-admin create -a archive. All the automatic service registration would indeed need to be “hacked up”, though.
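Since the archive that ezjail-admin produces is essentially a tarball of the jail’s filesystem, the pack/copy/restore idea is easy to sketch. Here is a toy version of that flow using Python’s tarfile module — the function names and paths are mine, and this is obviously not what ezjail itself does, just the shape of it:

```python
import os
import tarfile

def pack(src_dir: str, archive_path: str) -> None:
    """Pack a directory tree into a gzipped tarball
    (conceptually what 'ezjail-admin archive' does for a jail)."""
    with tarfile.open(archive_path, "w:gz") as tar:
        tar.add(src_dir, arcname=".")

def restore(archive_path: str, dest_dir: str) -> None:
    """Unpack the tarball into a fresh directory on the target host
    (conceptually 'ezjail-admin restore')."""
    os.makedirs(dest_dir, exist_ok=True)
    with tarfile.open(archive_path, "r:gz") as tar:
        tar.extractall(dest_dir)
```

The part this sketch leaves out is exactly what the parent comment flags: wiring the unpacked tree back into the host (service registration, networking, IDs), which is the bit you’d still have to hack up yourself.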
That’s pretty cool :)
Thank you for sharing this!
Yeah. But it’s an exciting time! In five years it’ll shake out. :)
Alright, it made me laugh :)
My favorite thing about this is that on other sites (Reddit/HN), this obvious advertisement from a company that actually uses all of this tech to provide its service (by simplifying it) has people who don’t understand, or couldn’t work with, these technologies coming out of the woodwork to rail against them, ironically.
It’s all hype. There is only one piece of good advice for picking a software stack: go with boring technology. You can get pretty far with Ruby, old-school server deployments, etc. before needing any of the technologies mentioned.
The biggest problem you can impose on yourself is diving head first into untested ground, only to find out later that your database of choice is featured in a “Call Me Maybe” article.
I do think people should try out the new stuff, don’t get me wrong on that front. Just please build your base product on known, battle-tested stacks and branch out into the new ground with small experimental projects. That way you will at least have a fallback when the proverbial shit hits the fan.
Well, Postgres is the most boring DB I can think of, yet it was also featured on aphyr’s site: https://aphyr.com/posts/282-call-me-maybe-postgres
No doubt about it. My claim was that most startups/companies don’t need a distributed database in the first place. The aphyr article you linked exposes problems with the distributed system formed around Postgres:
Even though the Postgres server is always consistent, the distributed system composed of the server and client together may not be consistent.
I argue that most startups don’t need more than a single instance plus a failover & a backup solution.
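For the backup half of that setup, a plain nightly pg_dump is often all it takes. A minimal config sketch, assuming a single database named mydb and cron-driven dumps — the database name, paths, and retention window are all made-up illustration, not a recommendation for any particular system:

```shell
#!/bin/sh
# Hypothetical nightly backup script for a single Postgres instance.
# Schedule it from cron, e.g. in /etc/cron.d/pg-backup:
#   0 3 * * * postgres /usr/local/bin/pg-backup.sh
set -eu

BACKUP_DIR=/var/backups/postgres
mkdir -p "$BACKUP_DIR"

# Custom-format dump: compressed, restorable with pg_restore.
pg_dump --format=custom --file="$BACKUP_DIR/mydb-$(date +%F).dump" mydb

# Keep two weeks of dumps; prune anything older.
find "$BACKUP_DIR" -name '*.dump' -mtime +14 -delete
```

Pair that with streaming replication to a failover standby and you have covered what most early-stage products actually need, without touching a distributed database.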
Please take a look at these two really old blog posts:
The author gives his approach on how Digg could have solved its performance issues. I agree with a lot of what this person covered. The main point is: you can go pretty far with a plain old, boring database, even when you eventually need to scale it (and at that point you should be able to afford it).