1. 93
    1. 37

      At my former employer, I was for a time in charge of upgrading our self-managed Kubernetes cluster in-place to new versions, and eventually found it to be an insurmountable task for a single person to handle without causing significant downtime.

      We can argue about whether upgrading in-place was a good idea or not (spoiler: it’s not), but it’s what we did at the time for financial reasons (read: we were cheap) and because the nodes we ran on (r4.2xl, if I remember correctly) were often not available in sufficient quantity to stand up a whole new cluster and migrate over to it.

      My memory of steps to maybe successfully upgrade your cluster in-place, all sussed out by repeated dramatic failure:

      1. Never upgrade more than a single point release at a time; otherwise there are too many moving pieces to handle
      2. Read the change log comprehensively, and have someone else read it as well to make sure you didn’t miss anything important. Also read the issue tracker, and do some searching to see if anyone has had significant problems.
      3. Determine how much, if any, of the change log applies to your cluster
      4. If there are breaking changes, have a plan for how to handle the transition
      5. Replace a single master node and let it “bake” as part of the cluster for a sufficient amount of time, not less than a single day. This gives you time to watch the logs and determine whether there’s an undocumented bug in the release that would break the cluster.
      6. Upgrade the rest of the master nodes and monitor, similar to above
      7. Make sure the above process(es) didn’t cause etcd to break
      8. Add a single new node to the cluster, monitoring to make sure it takes load correctly and doesn’t encounter an undocumented breaking change or bug. Bake for some day(s).
      9. Drain and replace the remaining nodes, one at a time, over a period of days, allowing the cluster to handle the changes in load over this time (a rough sketch of scripting this step follows the list). Hope that all the services you have running (DNS, deployments, etc.) can gracefully handle these node changes. Also hope that you don’t end up in a situation where the services on 9 of 10 nodes are broken but the 1 remaining original node is silently picking up the slack, so nothing fails until that last node gets replaced, at which point everything fails at once, catastrophically.
      10. Watch all your monitoring like a hawk and hope that you don’t encounter any more undocumented breaking changes, deprecations, removals, and/or service disruptions, and/or intermittent failures caused by the interaction of the enormous number of moving parts in any cluster.
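
      For illustration, a rough Python sketch of what step 9 might look like if scripted around kubectl (in practice the process above was largely manual). The bake time and the replace_node provisioning step are hypothetical stand-ins for whatever your environment actually uses:

      ```python
      #!/usr/bin/env python3
      """Hypothetical sketch of step 9: drain and replace worker nodes one at a time.

      Assumes kubectl is already configured for the cluster. replace_node() is a
      placeholder for whatever actually provisions the new machine (an ASG roll,
      Terraform, a ticket to the cloud console...). The bake time is illustrative.
      """
      import subprocess
      import time

      BAKE_SECONDS = 24 * 60 * 60  # let each replacement "bake" for about a day


      def kubectl(*args: str) -> str:
          """Run a kubectl command and return its stdout."""
          result = subprocess.run(
              ["kubectl", *args], check=True, capture_output=True, text=True
          )
          return result.stdout


      def worker_nodes() -> list[str]:
          """List node names; filtering out the masters is left as an exercise."""
          out = kubectl("get", "nodes", "-o", "name")
          return [line.removeprefix("node/") for line in out.splitlines() if line]


      def replace_node(name: str) -> None:
          """Placeholder: provision a replacement node outside of Kubernetes."""
          raise NotImplementedError("provisioning is environment-specific")


      for node in worker_nodes():
          # Stop scheduling onto the node, then evict its pods.
          kubectl("cordon", node)
          kubectl("drain", node, "--ignore-daemonsets", "--delete-local-data")
          # Remove it from the cluster and bring up its replacement.
          kubectl("delete", "node", node)
          replace_node(node)
          # Watch your monitoring; only move on once the new node has baked.
          time.sleep(BAKE_SECONDS)
      ```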

      There were times that a single point release upgrade would take weeks, if not months, interspersed by us finding Kubernetes bugs that maybe one other person on the internet had encountered and that had no documented solution.

      After being chastised for “breaking production” so many times despite meticulous effort, I decided that being the “Kubernetes upgrader” wasn’t worth the trouble. After I left, it seems that nobody else was able to upgrade successfully either, and they gave up doing so entirely.

      This was in the 1.2-1.9 days, for reference, so things may be much better now, though I’d be very surprised.

      1. 33

        tldr; If you can’t afford 6+ full-time people to babysit k8s, you shouldn’t be using it.

        1. 13

          Or, at least, not running it on-prem.

          1. 6

            True, if you outsource the management of k8s, you can avoid the full-time team of babysitters, but that’s true of anything. You then have the outsourcing headache(s), not to mention the cost (you still need someone responsible for the contract, and for interacting with the outsourced team).

            Outsourcing just gives you different problems, and, if you selected wisely, fewer of them.

            1. 5

              True dat. But every solution to a given problem has trade-offs. Not using Kubernetes in favour of a different orchestration system will also have different problems. Not using orchestration for your containers at all will give you different problems (unless you’re still too small to need orchestration, in which case yes you should not be using k8s). Not using containers at all will give you different problems. ad infinitum :)

              1. 6

                Most companies are too small to really need orchestration.

                1. 2

                  Totally!

        2. 2

          I keep having flashbacks to when virtualization was new and everyone was freaking out over Xen vs. KVM vs. VMware and how to run their own hypervisors. Now we just push the Amazon or Google button and let them deal with it. I’ll bet in 5 years we’ll laugh about trying to run our own k8s clusters in the same way.

          1. 8

            Yeah, this is the kind of non-value-added activity that just begs to be outsourced to specialists.

            I have a friend who works in a bakery. I learned the other day that they outsource a crucial activity to a contractor: handling their cleaning cloths. Every day, a guy comes to pick up a couple of garbage bags full of dirty cleaning cloths, then drops off the same number of bags full of clean ones. This is crucial: one day the guy was late, and the bakery staff had trouble keeping the bakery clean; the owner lived upstairs and used his own washing machine as a backup, but it could not handle the load.

            But the thing is: while the bakery needs this service, it does not need it to differentiate itself. As long as the cloths are there, it can keep on running. If the guy stops delivering clean cloths, he can be trivially replaced with another provider, with minimal impact on the bakery. After all, people don’t buy bread because of how the dirty cloths are handled. They buy bread because the bread is good. The bakery should never outsource its bread making. But the cleaning of dirty cloths? Yes, absolutely.

            To get back to Kubernetes and virtualization: what does anyone hope to gain by doing it themselves? Maybe regulation requires it. Maybe there is some special need. I am not saying it is never useful. But for many people, the answer is often: not much. Most customers will not care. They are there for their tasty bread, a.k.a. getting their problem solved.

            I would be tempted to go as far as saying that maybe you should outsource one level higher and not even worry about Kubernetes at all: services like Heroku or Amazon Beanstalk handle the scaling and a lot of other concerns for you with a much simpler model. But at that point you are tying yourself to a provider, and that comes with its own set of problems… I guess it depends.

            1. 2

              This is a really great analogy, thank you!

            2. 2

              It really depends on what the business is about: tangible objects or information. The baker’s cloths, given away to a 3rd party, do not include all the personal information of those buying bread. Business-critical information, such as who bought bread, what type, and when, is not included in the cloths either. That would be bad in general, and potentially a disaster if the laundry company were also in the bread business.

            3. -7

              gosh. so many words to say “outsource, but not your core competency”

              1. 1

                Nope. :) Despite my verbosity, we haven’t managed to communicate. The article says: do not use things you don’t need (k8s). If you don’t need it, there’s no outsourcing to do. Outsourcing has strategic disadvantages when it comes to your users’ data, entirely unrelated to whether running infra is your core business or not. I would now add: avoid metaphors comparing tech to the tangible world, because you end up trivializing the discussion and missing the point.

      2. 3

        As a counterpoint to the DIY k8s pain: We’ve been using GKE with auto-upgrading nodes for a while now without seeing issues. Admittedly, we aren’t k8s “power users”, mainly just running a bunch of compute-with-ingress services. The main disruption is when API versions get deprecated and we have to upgrade our app configs.
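
        Those config upgrades are usually mechanical, at least. As a purely illustrative sketch (the manifests/ directory and the deprecation mapping below are assumptions; the extensions/v1beta1 to apps/v1 move for Deployments is the classic example), something like this can flag manifests that still use an old apiVersion:

        ```python
        # Hypothetical helper: flag manifests that still use a deprecated apiVersion.
        # The mapping is only an example (extensions/v1beta1 workloads moved to
        # apps/v1); extend it for whatever your target cluster version deprecates.
        import pathlib

        DEPRECATED = {
            "extensions/v1beta1": "apps/v1 (Deployment/DaemonSet/ReplicaSet)",
        }

        for path in pathlib.Path("manifests").rglob("*.yaml"):
            for lineno, line in enumerate(path.read_text().splitlines(), start=1):
                for old, new in DEPRECATED.items():
                    if line.strip() == f"apiVersion: {old}":
                        print(f"{path}:{lineno}: {old} is deprecated, consider {new}")
        ```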

      3. 2

        I had the same problems with OpenStack :P If it works, it’s kinda nice. If your actual job is not “keeping the infra for your infra running”, don’t do it.

    2. 14

      I don’t know Kubernetes so I’m not speaking from a place of authority here, but whenever I look at it, I feel overwhelmed by the waves of complexity and I think “Is this really the right abstraction to solve this problem?”.

      1. 9

        I’m so happy to see someone ask “is this the right abstraction?”: I find that question goes unconsidered by so many coworkers, and, embarrassingly, by myself at times, surely.

        1. 3

          We all have those moments :)

          This becomes especially clear when we realize that we’ve charged ahead using our favorite tool even when its abstractions are a painfully poor match for the problem at hand.

          I suspect Kubernetes and the abstractions it presents are a great fit for a mega-enterprise like Google, but perhaps a less amazing fit for people who want simple clustering solutions for their medium-to-moderately-complex compute infrastructure needs.

          I’d be super curious to hear someone who has extensive experience with both Kubernetes and Docker Swarm compare and contrast the two. I don’t know much about Swarm, but I’ve often wondered about it, and whether it presented a more sensible abstraction but never gained traction because it costs money.

      2. 3

        I just get utterly baffled by it every time I try to dig deeper, and end up closing the tab and moving on to something simpler, like Linear A or translating Joyce into Japanese. I do often wonder if my instinctive revulsion is about k8s or my own small brain.

    3. 9

      I feel like the development side of using Kubernetes is great - you can define your application code alongside its manifest files, which easily allows you to spin up new services, change routing, etc.

      However, running a kubernetes cluster requires as much effort as, or more than, a team of ops people running your data center - there are so many moving parts and so many things that can break, and just keeping it up to date and applying security updates is a challenge in itself.

      Also, for better or worse, you get a sort-of built-in chaos monkey when running kubernetes. Nodes can and will go down more often than a regular EC2 instance just chugging away, for various reasons (including performing upgrades). It also places some additional requirements on your application (decent, complete health-check and readiness-check endpoints, graceful shutdown handling, etc.).
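
      To make that last point concrete, here is a minimal sketch (Python standard library only; the port is made up and the /healthz and /readyz paths are conventions, not requirements) of the kind of probe endpoints and SIGTERM handling a service ends up needing to play nicely with the cluster:

      ```python
      # Minimal sketch of what kubernetes-the-deployment-target asks of an app:
      # liveness/readiness endpoints plus graceful shutdown on SIGTERM.
      import signal
      import threading
      from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

      ready = threading.Event()      # set once the app has finished warming up
      draining = threading.Event()   # set when SIGTERM arrives


      class Handler(BaseHTTPRequestHandler):
          def do_GET(self):
              if self.path == "/healthz":    # liveness: the process is alive
                  self.send_response(200)
              elif self.path == "/readyz":   # readiness: ok to receive traffic
                  ok = ready.is_set() and not draining.is_set()
                  self.send_response(200 if ok else 503)
              else:
                  self.send_response(404)
              self.end_headers()


      server = ThreadingHTTPServer(("0.0.0.0", 8080), Handler)


      def handle_sigterm(signum, frame):
          # Start failing readiness so traffic stops being routed here, then
          # stop the server after a grace period for in-flight requests.
          draining.set()
          threading.Timer(10.0, server.shutdown).start()


      signal.signal(signal.SIGTERM, handle_sigterm)

      ready.set()             # pretend warm-up is done
      server.serve_forever()  # returns once shutdown() has been called
      server.server_close()
      ```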

      All in all, I am positive of kubernetes-the-deployment-target and less so of kubernetes-the-sysops-nightmare.

    4. 3

      Well, this is very sound professional advice, and I agree that for small companies k8s is overkill. Still, it seems to me that it misses one key point about Kubernetes: it’s fun devops party time. 🎉

      Edit: less trollish tone.

    5. 3

      I can’t help but think that at the scale Kubernetes works best at, you end up with the lesser-of-two-[tools] principle. From a non-technical angle, if you have a complex system spanning multiple machines, supported by lots of code, containing architectural complexity, and so on and so forth, at least having an open core that a new hire can understand lowers the barrier to entry for new hires to come in and hit the ground running with contributions.

      1. 1

        I once read a great article in defense of Kubernetes that made a similar claim and I wish I could find it.

        Essentially, their point was that doing infrastructure at the scale where you need this tooling means you already have a complex environment with complex interactions. If you think you can just say “oh, we’ll use some homemade shell scripts to wrangle it” you are ignoring the fact that you needed to evolve those scripts over time and that your understanding of them is deeply linked with that evolution.

        I didn’t know the first thing about Kubernetes, and still barely do - but when I started my current job I was able to get an understanding of how their software is deployed, which pieces fit together, etc. When I got confused I could find solid documentation and guides… I didn’t have to work through a thousand-line Perl monstrosity or try to buttonhole someone in ops to get my questions answered.

    6. 3

      My company has always been highly dependent on the Amazon ecosystem. When we made the move to a microservice architecture three years ago, we opted for Amazon ECS as it was the simplest way to achieve container orchestration.

      With new business constraints, we have to migrate from Amazon to a different cloud provider, breaking away from the Amazon ecosystem, including ECS. We are still a relatively small company and cannot afford to spend months on the infrastructure instead of focusing on delivering on the business front.

      Articles such as this one are a good reminder that Kubernetes is still not an easy solution to implement, and some of the comments confirm my assumption that upgrading and maintaining a Kubernetes cluster can be challenging. It’s a tough call, since it’s also the most popular technology around, which helps with recruitment. The absolute certainty is that we no longer want to be tied to a cloud provider, and we will choose a technology that allows us to move more freely between providers (zero coupling is nothing more than a dream).

      1. 5

        I’d recommend checking out Hashicorp Nomad[0]. It’s operationally simple and easy to get your head around for the most part. A past issue of the FreeBSD Journal had an article on it [1].

        0: https://nomadproject.io/

        1: https://www.freebsdfoundation.org/past-issues/containerization/

        1. 2

          I love Hashicorp. I’ve yet to encounter a product of theirs that didn’t spark joy.

        2. 2

          I’ve talked to more teams switching away from Nomad to Kubernetes than I have talked to people using Nomad or considering Nomad. A common Nomad complaint I hear is that support and maintenance is very limited.

          I’m interested to hear any experience reports on the product that suggest otherwise - My team runs on Beanstalk right now, but I miss the flexibility of a more dynamic environment.

          1. 1

            I assume that if you are willing to pay the $$$$$$$$$’s for the Enterprise version, the support is fabulous. If it’s not, one is definitely getting ripped off. I have no experience with the enterprise versions of any of Hashicorp’s products; we can’t afford it. We also don’t need it. We went in knowing we would never buy the enterprise version.

            We haven’t had any issues getting stuff merged upstream that makes sense, and/or getting actual issues fixed, but we’ve been running nomad now for years, and I don’t even remember the last time I had to open an issue or PR, so it’s possible things have changed in that regard.

        3. 1

          It is indeed one of the alternatives that we have been looking at. My only concerns about Nomad (and Hashicorp products in general) are the additional cost once you use the enterprise features, the smaller candidate pool (everyone is excited about Kubernetes these days), and the absence of built-in load balancing.

          1. 2

            Both Traefik and Fabio work as an ingress load balancer on the (Nomad) cluster. We use Spring Cloud Gateway as our ingress.

            For service-to-service communication you can run Consul (we do this).

            1. 1

              Seconding fabio as an incredibly simple automatic ingress, and consul service discovery (-> connect+envoy) for service-to-service. There’s a nice guide as well: https://learn.hashicorp.com/nomad/load-balancing/fabio . I’d consider nomad incomplete without consul and vault, but I’d also say the same (particularly vault, which is irreplaceable) for k8s.
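
              To give a flavour of how simple the fabio side is: fabio watches Consul and routes to any service whose tags start with urlprefix-, so the “ingress config” is essentially just the service registration. A hypothetical example against a local Consul agent (the service name, port, and path are made up; with Nomad, the job’s service stanza normally does this registration for you):

              ```python
              # Hypothetical: register a service with a fabio routing tag via the
              # local Consul agent's HTTP API. fabio routes /api traffic to every
              # healthy instance tagged "urlprefix-/api". Names/ports are made up.
              import json
              import urllib.request

              registration = {
                  "Name": "my-api",                # hypothetical service name
                  "Port": 8080,
                  "Tags": ["urlprefix-/api"],      # fabio picks this up as a route
                  "Check": {"HTTP": "http://localhost:8080/healthz", "Interval": "10s"},
              }

              req = urllib.request.Request(
                  "http://127.0.0.1:8500/v1/agent/service/register",
                  data=json.dumps(registration).encode(),
                  method="PUT",
                  headers={"Content-Type": "application/json"},
              )
              urllib.request.urlopen(req)
              ```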

              As for hiring: hire the k8s candidates. Both are Omega-style schedulers, so the core scheduling concepts and job constructs (pod <-> alloc, etc.) translate well, and I’d wager that for some k8s veterans not having to deal with a balkanized ecosystem and scheduler wordpress plugins (CRDs/operators) would be seen as a feature.

              docker itself is by far the weakest link for us, but nomad also offers flexibility there.

          2. 1

            My suggestion is just to make sure you don’t need the enterprise features :) You can get very far without the enterprise $$$$ expense; that’s what we do. But agreed, the Enterprise version of any of the Hashicorp things is very, very expensive.

            Load balancing is easily solved as others have said. We use HAProxy and it lives outside of the nomad cluster.

            Agreed, everyone is all excited by k8s, because everything that comes out of Google (if only the design) must be perfect for your non-Google-sized business. Let’s face it, the chances of any of us growing to the size of Google are basically trending towards zero, so why optimize prematurely for that?

            The upside of the candidate pool being “smaller” is that you can read through the Nomad docs in an hour or so, have a pretty good idea of how everything works and ties together, and be productive, even in a sysadmin-type role, pretty quickly. IME one can’t begin to understand how k8s works in an hour.

    7. 2

      My company started to use an on prem version of K8s shortly after I was hired and it’s been challenging but rewarding too. I think we hit the sweet spot with our company size.

      We have tons of people managing it, but my company is too focused on short-term thinking, so we don’t have as much tooling as devs might like. For instance, in our lab environment we are not able to do a pcap. Makes sense, as there’s not exactly a “pcap” permission in K8s, but the tooling here always lags behind the “revenue” stuff, since I don’t work for a FANG.