As a lot of people say in this industry, boring tech is what makes money. This is a well-accepted principle in many operations teams.
That’s kind of an “interesting” retort. Like we here at Rainbow Ponies aren’t into making money, so we’re picking Docker? Makes it sound like professional malpractice.
I caught that too. The funnier thing is the problem mentioned doesn’t happen with the “boring” solutions like VMware, Xen, Solaris Containers, etc. They get the job done and deliver the benefits such software claims. They have tooling to aid management and deployment. They can also run whatever images you put in them, from minimal apps all the way up to full OSes. If you can accept that last requirement succeeding only randomly, there’s also this new thing called Docker you can use in place of proven solutions. It’s a great pitch!
Again, it’s a well-accepted principle that “thou shalt not run a database inside a container”. Don’t do it, end of story.
This was not obvious to me. Can anyone here explain why? All I can think of is potential slowness due to i/o indirection, but that doesn’t seem like a dealbreaker.
I’m not sure it’s an inherent limitation, but there is a mindset that containers are disposable because good grief, they’re cattle, not pets. This then bleeds over into the tooling. Like there’s no need to confirm a destructive operation, because a container can only hold ephemeral state; that’s what containers are for.
On another angle, a lot of people are using containers for easier deployment or resource management. They’re isolating the applications from the OS itself. Many companies already put databases in virtual appliances for this reason. I don’t see why they wouldn’t put them in containers. They should be replicating the database anyway. Just need to label it somehow so people know to be careful with it. Or the container management has a rule that certain containers must be handled manually by an admin so a script won’t do something stupid.
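The “label it somehow” idea above can be sketched as a guard that scripts call before any destructive operation. This is a hypothetical sketch, not a real Docker feature: the label name (“lifecycle”), its values, and the function name are all made up. In real use the label value would come from something like `docker inspect -f '{{ index .Config.Labels "lifecycle" }}' "$container"`.

```shell
#!/bin/sh
# Hypothetical guard for the "handle with care" rule: scripts pass in a
# container's lifecycle label before destroying it. Containers labeled
# "persistent" are refused and left for a human admin.
guard_destroy() {
  label="$1"  # value of the container's (assumed) "lifecycle" label
  if [ "$label" = "persistent" ]; then
    echo "refusing: labeled persistent, hand off to an admin"
    return 1
  fi
  echo "ok to destroy"
}

guard_destroy "ephemeral"
guard_destroy "persistent" || true
```

The point isn’t the shell specifics; it’s that the rule lives in one place the tooling has to go through, instead of in people’s heads.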
In my experience, storage is the main issue. If a container running the database is disposable and can be killed and then restarted on some other host, how does the database (re-)connect to the storage? There are solutions out there, but there doesn’t yet seem to be agreement on the best way to do this. Every storage vendor seems to have their own solution, but it doesn’t yet “just work”, at least as far as I’ve seen.
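For what it’s worth, named volumes are the closest thing to a standard answer today: the data lives outside the container’s filesystem, so the container can be killed and a replacement reattaches to the volume by name. A minimal docker-compose sketch (service and volume names are made up):

```yaml
# Hypothetical compose file: the "db-data" volume outlives the container,
# so restarting or recreating the db service keeps the data.
services:
  db:
    image: postgres:16
    volumes:
      - db-data:/var/lib/postgresql/data
volumes:
  db-data:
```

But this only solves it on a single host. Reattaching after a restart on some *other* host needs a volume driver backed by shared or network storage, and that’s exactly the vendor-specific part that doesn’t yet “just work”.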