Kubernetes lets you run code in production without setting up new servers
Kamal: With Kubernetes you can set up a new service with a single command
Julia: I don’t understand how that’s possible.
Kamal: Like, you just write 1 configuration file, apply it, and then you have a HTTP service running in production
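For concreteness, the kind of configuration file being described might look roughly like this (a minimal sketch, not taken from the article; the name, image, and ports are invented), applied with kubectl apply -f hello.yaml:

    # hello.yaml - a Deployment (the running pods) plus a Service (a stable address for them)
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: hello
    spec:
      replicas: 2                # run two copies of the app
      selector:
        matchLabels:
          app: hello
      template:
        metadata:
          labels:
            app: hello
        spec:
          containers:
            - name: hello
              image: example/hello-http:1.0   # hypothetical image
              ports:
                - containerPort: 8080
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: hello
    spec:
      selector:
        app: hello               # route to the pods labelled above
      ports:
        - port: 80
          targetPort: 8080

Apply that and the Deployment keeps two copies of the container running while the Service gives them a stable in-cluster address.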
I think some of these managed Kubernetes systems, like GKE (Google’s Kubernetes product), may be simpler, since they make a lot of decisions for you.
So… is this actually true? Someone still needs to set up those servers that k8s will use. In the author’s case, it sounds like this is someone else’s problem. But if you are the developer and the admin, you still have to set up servers, and setting up k8s is harder than just running Puppet or Chef on a server, up to some scale. I’m not saying k8s isn’t cool or whatever, just that, on net, k8s doesn’t make it easier to set up servers; it just makes it easier to make that someone else’s problem.
Exactly. In addition, I would say that Kubernetes is not only hard to set up correctly, it’s also quite complex to maintain. There are many moving pieces; when you’re using a microservice architecture, that is in part solved by having a system like Kubernetes take care of scheduling and the proper life-cycles of your services.

There have been many attempts to self-host k8s (k8s on k8s), which would probably solve a big part of the problem, but I don’t see this going forward.

At my $currentjob we’re having trouble going from Ansible-managed bare metal to something like k8s because it brings many operational issues with it.
At $job we’re doing a turn-key k8s stack and it’s decidedly non-trivial. Yeah, a developer can cobble together a Helm chart and a Docker image and it will (okay, might) deploy, but that is the proverbial tip of the iceberg.
Didn’t she address that point as well? Getting the kubers into a production-ready setup was called out as being really difficult.
Think of it as a layer of abstraction between SWE & SRE.
Is there any equivalent scheduler tool for Free/OpenBSD? To be honest, I like the ideas behind Kubernetes, but it seems over-engineered (for what I need it for, anyway).
I was thinking perhaps https://www.nomadproject.io/ would work, since it doesn’t depend on Docker or containers, but I haven’t tried it yet. The main feature I care about is zero-downtime upgrades (perhaps I should have just used Erlang instead of Go).
We use k8s at work, and while it works well for us, I very much get the feeling that it’s the Erlang version of Greenspun’s tenth rule.
I’ve been using Nomad for Go services and it works awesomely well. We use it with Docker only, BUT we tried without it and it worked perfectly too (we’re still sticking with Docker since we have other stacks and it’s nice to have a common interface).

You can definitely try it very quickly by setting up Consul and Nomad in dev mode. In addition, if you’re doing HTTP, I’d advise you to check out ebay/fabio, which works just out of the box with Consul.

I haven’t tried, but I bet those three work perfectly fine on BSD.
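As a rough sketch of what that looks like, a minimal Nomad job file for a Go HTTP service under the Docker driver, registered in Consul with a fabio-style routing tag, could be something like this (the job name, image, port, and health path are all invented, and the exact stanza layout varies by Nomad version):

    job "hello" {
      datacenters = ["dc1"]
      type        = "service"

      group "web" {
        count = 2

        task "server" {
          driver = "docker"          # swap for exec/raw_exec to run a plain binary

          config {
            image = "example/hello-go:1.0"   # hypothetical image
            port_map {
              http = 8080
            }
          }

          resources {
            cpu    = 100
            memory = 128
            network {
              port "http" {}
            }
          }

          service {
            name = "hello"
            port = "http"
            tags = ["urlprefix-/hello"]      # fabio picks this route up from Consul
            check {
              type     = "http"
              path     = "/health"           # hypothetical health endpoint
              interval = "10s"
              timeout  = "2s"
            }
          }
        }
      }
    }

For a quick local try, "consul agent -dev" and "nomad agent -dev" give you single-node servers with no configuration, and "nomad run hello.nomad" submits the job.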
I want to like Nomad, but there is a severe lack of “drivers” – if I want to use rkt or Docker containers, I’ll just use k8s (and get cool stuff like Cilium and a bunch of [usually outdated] documentation).
What drivers are missing from your perspective? Especially, which ones can’t you easily accomplish with either exec or raw_exec?
We use Nomad, and it works well for us. It doesn’t depend on Docker, and it has drivers for exec, raw_exec, rkt, Docker, the JVM, etc. It’s pretty easy to turn up and maintain as well, which is a HUGE win over k8s.
Nomad supports zero-downtime upgrades. It even allows you to push N instances of the new version into production while keeping X copies of the old version running; you then manually approve the new version, and it will turn off the X old copies and finish rolling out the new one.
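If I remember the mechanics right, that’s Nomad’s canary workflow, configured in the job’s update stanza. A sketch with made-up numbers, added to a job like the one above:

    update {
      max_parallel = 1      # replace old copies one at a time after promotion
      canary       = 2      # start 2 copies of the new version alongside the old ones
      auto_revert  = true   # roll back automatically if the new version never gets healthy
    }

    # Once the canaries look good, promote the deployment by hand:
    #   nomad deployment promote <deployment-id>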
Ed Schouten ported Kubernetes to FreeBSD… with CloudABI apps. So of course it’s possible to write a runner for jails too. When I finally get around to writing a new cool “Docker-ish-but-without-the-suck-parts” jail management tool, I’ll probably write that as well :D
But yeah, Nomad sounds very good indeed, much less over-engineered.
The best thing is that’s just the tip of the iceberg: you can describe entire n-tier systems like this, and with Helm you can turn that configuration file into a template, then publish it for everyone else to use and override where necessary. Say you have a setup with nginx in front of an app server talking to Postgres and indexed by Elasticsearch: you can write it all down as text, have the user override some key variables (maybe they need to use GCS instead of AWS S3), then have them install the whole thing with one command. If they need to run it locally, have them pass in a different set of variables and it’s the same command.
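The Helm side of that is roughly a values.yaml of overridable defaults plus templates that reference them; a sketch, with the chart layout and value names invented:

    # values.yaml - defaults shipped with the chart
    storage:
      backend: s3              # a user on Google Cloud might override this to gcs
    elasticsearch:
      enabled: true
    appServer:
      image: example/app:1.2   # hypothetical image
      replicas: 3

    # templates/deployment.yaml then references these values, e.g.
    #   image: {{ .Values.appServer.image }}
    #   replicas: {{ .Values.appServer.replicas }}

Then something like "helm install ./mychart -f production.yaml" for the real thing and "helm install ./mychart -f local.yaml" for a local run is the “same command, different variables” part.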
When you’ve got it all set up right, making a massive cluster of containers effortlessly spring into life and start talking to each other is just beautiful. Spending hours or days installing and setting up a multi-tier application is going to disappear as kubernetes catches on.
Maybe. The article even admits how much hand-waving the author is doing and links to “Kubernetes the Hard Way”. Instead you’ll choose between spending hours, then days, spinning up and configuring Kubernetes yourself (make sure you really understand how it works, including overlay networks, ingress/egress, security and secrets, etc.) or going with a k8s provider who has people who do that.
I used GKE with Kubernetes to deploy an app earlier this year that had dependencies on Google Cloud Platform services (PubSub, Datastore) and had setup to do each time a node went up or down (adding/removing new/terminated nodes from all existing nodes’ hash rings). Not only did k8s deal pretty darn well with the pod scaling, but GKE even auto-scaled the underlying VM instance groups for me.
There was an initial bunch of stuff to get my head round, partly because of the weird disconnect between “this bit’s GKE” and “this bit’s k8s” (due, I think, to the k8s team trying to design a genuinely cloud-portable stratum). And while it was without question spectacularly easier to manage scaling out horizontally with GKE & k8s than it would have been otherwise, it also (a) wasn’t cheap, when you chop it down to “how much for what?”, and (b) regularly threw me down a k8s documentation rabbit-hole, which (like this article says) often would have been really hard to handle had I not already had a decent understanding of the networking issues and the Linux mechanisms for getting round them. Of course, the hard fact is that this stuff isn’t simple, so the complexity has to be dealt with somewhere, and sometimes it leaks through the abstractions, so the impression the marketing materials give about exactly how easy the tools will be can seem… over-egged at times.
Ultimately, though, I consider what this enabled me to do, and I think back to the days when I had to get ISPs to provision actual computers and I had to set up the networking and the rest myself, and seriously, this SDN magic is THE NUTS. Yes it’s got a way to go before it’s turnkey and yes the hype overreaches the reality somewhat, but even now I’ll take this over doing it by hand any day. Maybe if I ran my own company and the billing account hit my credit card I’d think otherwise ;-)
It’s got a dope name; that makes it pretty cool.