1. 19
  1. 51

    I don’t believe Kubernetes and its primitives are a good fit for most companies out there. The article mentions Kubernetes, but what it is really saying is that you should make your applications containerizable and compatible with orchestrators.

    You should save yourself the pain and burden of using Kubernetes, and try out orchestrators like Nomad (from HashiCorp).

    1. 15

      I could not agree more. Any POC with k8s we did failed. I am not sure what it is for. The workloads we have run perfectly fine on VMs with auto-scaling, and there is no need for a 1M+ LOC solution that is looking for a problem.

      1. 7

        This is the tune I’ve been singing for a while, but I think I’m on the verge of changing it. Not because I suddenly think Kubernetes is a fantastic fit for every possible application on the planet, but because the “Kubernetes and containerization and execution environment are just different words for the same exact thing” viewpoint is becoming such an overwhelming consensus in the industry that I suspect it’s shortly going to get hard to hire people who know how to deploy code to anything else.

        It is really feeling to me like I’m dying on the wrong hill by insisting that it can be fine to use Nomad or Docker Compose or systemd to run a service at small scale. Kubernetes seems to just be winning everywhere, k8s-only tools are starting to show up more and more often, and at some point it’s going to get more expensive than it’s worth to be the odd man out.

        Makes me sad, but it’s not the first time the industry has converged on a technology that isn’t my first choice.

        1. 14

          I am still on the “keep it simple” team and I am working on improving integration of Erlang projects with systemd. It makes deployment and configuration much simpler IMHO. I do not need to think about ingress, pods, or whatever else there is in k8s. I just define my application to listen on a port, and it will be started only when needed. I do not need anything more than that.
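
          For the curious, a minimal sketch of what that looks like with systemd socket activation (unit names, port, and binary path here are made up, not taken from any real project):

          ```ini
          # myapp.socket: systemd owns the listening socket and starts the service on demand
          [Socket]
          ListenStream=8080

          [Install]
          WantedBy=sockets.target

          # myapp.service: the application receives the already-open socket
          # via the sd_listen_fds(3) protocol (LISTEN_FDS / LISTEN_PID)
          [Service]
          ExecStart=/usr/local/bin/myapp
          ```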

          1. 3

            Yup yup.

            Leaning heavily on systemd units allows for a heck of a lot of ‘separation’ if you need to run different environments/versions on the same OS instance.

            A (now) reasonably long-term client originally contacted me asking about ‘setting up Dockers’ a few years ago, and the end result is that we avoided Docker/containers (the business is essentially one website), but modernised their setup for all infra, to use:

            • (a) 100% reproducible setup (using versioned shell scripts mostly, and a bit of Make)
            • (b) modernised the ‘runtime’ to use systemd extensively, using a combination of instance templates and generated unit files (a minimal template is sketched after this list)
            • (c) moved local dev to a Vagrant workflow, with a custom base image built using the same scripts prod/staging/test are built with.
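
            As promised above, a hedged sketch of the (b) approach with entirely made-up unit and path names (not the actual client setup): a single template unit serves several environments on the same box.

            ```ini
            # website@.service: one template file, many instances (website@staging, website@prod, ...)
            [Unit]
            Description=Website (%i environment)

            [Service]
            # %i expands to the instance name, so each environment gets its own env/config file
            EnvironmentFile=/etc/website/%i.env
            ExecStart=/usr/local/bin/website --config /etc/website/%i.conf
            Restart=on-failure

            [Install]
            WantedBy=multi-user.target
            ```

            Instances are then started independently with systemctl start website@staging, systemctl start website@prod, and so on.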

            Sure, containers could have been applied to the problem, but that doesn’t mean it’s actually any better at solving the problem. You can bang a nail into a piece of wood with the back of an impact driver if you really want to. That doesn’t mean a simple hammer won’t do the job just as well.

      2. 16

        If you’re building a new app today, the kind of stuff you will need from day one for your service (excluding data persistence) is some load balancing and blue/green deployments. Shoving in a full Kubernetes cluster just for that is really overkill.

        And you’re relying on the Cloud(tm) to give you a working Kubernetes cluster with a few clicks anyway. If you’re using AWS, just get an Elastic Beanstalk environment or any equivalent that gets a Docker container up and running, load balanced and easily updated using some script.

        All this while remaining cloud-agnostic, too. It’s just a Docker container: literally every big cloud provider has some sort of “deploy a container as a service” offering, and as a last resort you can have a VPS with Docker installed in no time.
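
        As an illustration of that last-resort path (registry, image name, and ports are hypothetical), the whole “orchestrator” can be a short script around the Docker CLI:

        ```sh
        # Pull the new version and swap the running container for it.
        docker pull registry.example.com/myapp:1.2.3
        docker rm -f myapp 2>/dev/null || true
        docker run -d --name myapp --restart unless-stopped \
          -p 80:8080 registry.example.com/myapp:1.2.3
        ```

        No blue/green there, of course, but it shows how little the artifact itself cares about where it runs.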

        1. 5

          We migrated our ~10ish services from Beanstalk to ECS in two weeks. Beanstalk deploys started to fail once we got to 1000 containers, for mysterious, AWS-internals-related reasons that required asking AWS support to un-wedge things on their end. Migration was pretty painless since everything was already containerized.

          If your service is stateless, runs in a container, and speaks HTTP, it’s pretty easy to move between the different orchestrators. Your deploy step needs to write out a slightly different deploy JSON/YAML/… and call a slightly different orchestrator CLI, and maybe you need to do some one-time load balancer reconfiguration in Terraform. That’s far easier than getting apps that are used to stateful deploys on bare boxes with tarballs into containers in the first place.
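
          To make the “slightly different JSON, slightly different CLI” point concrete, here is roughly what such a deploy step can boil down to on ECS; cluster and service names are placeholders, and taskdef.json is assumed to be rendered earlier in the pipeline:

          ```sh
          # Register a new task definition revision, then point the service at the family
          # (which resolves to the latest revision) to roll out the new containers.
          aws ecs register-task-definition --cli-input-json file://taskdef.json
          aws ecs update-service --cluster prod --service web --task-definition web
          ```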

          1. 3

            I’ll add my own anecdote: I migrated a stack of 5 services, with secrets and other related things, from EKS to ECS in ~3 days. The biggest single obstacle I ran into is that ECS’s support for secrets isn’t as featureful as k8s’s; specifically, ECS doesn’t have a built-in way to expose secrets in the container filesystem. But I found an AWS sample showing how to achieve this using a sidecar container; here’s my fork of that code.
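
            The general shape of the sidecar trick, as a sketch of the idea rather than the sample itself (secret ID and paths are invented): a short-lived container shares a volume with the app container, writes the secret there, and the app container is configured to start only after it exits successfully.

            ```sh
            # Sidecar entrypoint: materialize the secret as a file on the shared volume.
            aws secretsmanager get-secret-value \
              --secret-id myapp/db-password \
              --query SecretString --output text > /shared/db-password
            ```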

          2. 1

            AWS is really the odd one out, though; I once had one Kubernetes description instantiated on four different clouds (Google GKE, DigitalOcean, MS Azure, IBM Bluemix) with only the cluster names and keys changed.
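
            A sketch of what that looks like in practice, with invented context names: the same manifests get applied with a different kubeconfig context per cluster.

            ```sh
            # Same Kubernetes description, four clusters: only the context (and its credentials) differs.
            kubectl --context gke-prod   apply -f app.yaml
            kubectl --context do-prod    apply -f app.yaml
            kubectl --context azure-prod apply -f app.yaml
            kubectl --context ibm-prod   apply -f app.yaml
            ```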

          3. 10

            I highly disagree with the concept, but I thought it might be a good idea to post it here as a discussion starter about the place of k8s in the industry.

            1. 7

              Containers are great. They are like universal executables for servers. But Kubernetes can, in my experience, be a major hassle with little benefit.

              1. 6

                No, thank you. After watching three companies shoot themselves in the foot by buying into this kind of snake oil and chasing the mythical man-month in Kubernetes land, my experience has been the opposite of what this article describes.

                The claims in the link are not backed by evidence. Spinning up as many new environments as you want, apps being portable, etc., are pipe dreams which absolutely do not materialize in practice. And writing dozens or even hundreds of repetitive, deeply nested YAML files isn’t a small effort by any measure.

                What you end up with is an amalgamation of certificates, YAML files that need to be tracked and that are not handled in a stateless manner, references to external services, and service dependencies; opacity, complicated logging setups, and a whole lot of nonsense that will for sure keep your job security rock solid.

                Kubernetes might make sense for massive deployments (thousands of physical machines); otherwise I wouldn’t touch it with a ten-foot pole. I am not 100% sure these articles aren’t written with the intent of making the competition fall into this enormous pitfall.

                1. 15

                  Conspiracy theory: This is advertisement for Kubernetes, made either by people who get paid to do things with Kubernetes or by Kubernetes apologists.

                  1. 7

                    I wouldn’t call that a conspiracy theory, but merely a comment on the author’s biases.

                    1. 4

                      …Kubernetes was an inside job?

                    2. 5

                      As others have cited from a slightly different angle, the author of this article isn’t doing a very good job of signposting the content with their choice of title.

                      I’d be much more bullish about something like “Design your architecture in an infrastructure neutral way as much as possible so you can easily scale and adapt as new solutions evolve”.

                      Containers are the new hotness to be sure, and seem to be swallowing large swaths of the industry, but they’re not perfect by a long shot, and I could easily see some OTHER paradigm coming out in a few years and stripping much of the shine from their brand.

                      Also, it’s interesting that the title says “Kubernetes” but that the author REALLY means “managed Kubernetes” - I know it’s a detail, but it’s an INCREDIBLY IMPORTANT detail!

                      1. 8

                        Actually the main thing I took away from that article was, “Don’t develop on Windows.”

                        1. 1

                          Yeah this is a continuing WTF for me.

                          I occasionally need to provide tooling support to developers for one of my clients; it’s always the most idiotic basic issues with those who use Windows: stuff like “tool XYZ doesn’t work properly because Git on Windows defaults to converting all the standard Unix newlines into Windows newlines because $REASONS”.
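
                          For reference, one common mitigation (teams differ; some prefer enforcing line endings via .gitattributes instead):

                          ```sh
                          # Tell Git not to rewrite LF to CRLF on checkout; commits are still normalized to LF.
                          git config --global core.autocrlf input
                          ```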

                        2. 3

                          If the thrust were “Why you should DESIGN for Kubernetes from day one” I’d agree. If you do most or even some of the following: work out a containerized build, figure out how to manage deployment, puzzle over what and where to keep state and how to manage/recover it, consider load balancing and scaling, spend some time thinking about roles and entitlements, and put in some time on dealing with logging and metrics, then you’ll be in good shape for any platform, including rolling your own on VMs or bare metal. If the answer to all of those is “I’ll use Kubernetes (and half of the CNCF projects)” you’re going to have a bad time. The complexity doesn’t just go away.

                          1. 4

                            No, you (probably) should not.

                            1. 2

                              Does SO itself strongly reject this advice? For a long time they were proud of bucking the trend of cloud-scale deployments and just managing a small number of beefy servers themselves.

                              Edit: nvm I see they cover it in the article

                              1. 3

                                For those who don’t bother reading the article: when StackOverflow launched, Azure was brand-new, Windows didn’t support containers, and Kubernetes didn’t exist. I remember as late as roughly 2012, the Stack team told the Azure team what they’d need in hosted SQL Server alone to go entirely to the cloud, and the Azure team effectively just blanched and dropped it. (I worked at Fog Creek at the time, and we’d do beer o’clock together fairly often and swap stories.)

                                It’s obviously an entirely different ballgame in 2021, so they’re allowed to change their minds.

                                1. 8

                                  It’s obviously an entirely different ballgame in 2021, so they’re allowed to change their minds.

                                  I think a lot of people here and on HN would argue that the trend over the past decade has not been good, and that the old way really was better. They say the move toward containers and cloud services is driven by hype, fashion, and particularly by cloud vendors who have something to sell.

                                  I admit I was sad to learn that Stack Exchange is no longer bucking the trend. Yes, I make heavy use of cloud infrastructure and containers (currently Amazon ECS rather than k8s) in my own work, and have often publicly defended that choice. But the contrarian, anti-fashion streak still has its appeal.

                                  1. 2

                                    Most of the contrarians are wrong: the old systems were not better, simpler, easier to learn, or easier to use. The old way, where every place above a certain size and/or age had its own peculiar menagerie of home-grown systems, commercial products, and open source projects bodged together, hopefully with teams to support them, just was how it was. But you got used to it and learned the hidden or half-documented footguns, and for a time there weren’t better choices. Now there are choices and a different landscape (you don’t have to run your own DC), and many of them are very good and well-documented, but they’re still high in complexity and in depth when things go wrong. I think the greatest advantage is that where they interact with home-grown systems, those systems can be cordoned off and put behind common interfaces, so you can come in and say, “Oh, this is an operator doing xyz” and reason a little about it.

                                    1. 6

                                      I think there are just as many – or more – footguns with the overly deep and complex virtualization/orchestration stacks of today. In the “old days” senior developers would take time to introduce juniors into the company culture, and this included tech culture.

                                      Nowadays developers are expected to be fungible, tech cultures across companies homogeneous, and we’ve bought into the control mechanisms that make this happen, hook, line, and sinker. </end-ochruch-poast>

                                      1. 1

                                        Nowadays developers are expected to be fungible, tech cultures across companies homogeneous, and we’ve bought into the control mechanisms that make this happen

                                        Lol why is this bad? Imagine if your doctor wasn’t fungible. You’re ill and your doctor retires; well, time for you to not have a doctor. Moreover, homogeneity leads to reliability and safety. Again, imagine if each doctor decided to use a different theory of medicine or a different set of instruments. One of our doctors believes in blood leeches, the other in herbal medicine; which one is right? It’s always fun to be the artisan but a lot less fun to receive the artisan’s services, especially when you really need them.

                                        1. 3

                                          Lol the doctors whom I trust with my health (and life) are not fungible, because I know them and their philosophy personally. If I want blood leeches I will go to the doctor who uses them well.

                                          1. 1

                                            So what if they retire or get sick themselves? What if you get sick badly somewhere away from your home? Also, do you expect everyone to do this level of due diligence with all services that people use?

                              2. 2

                                The summary says it well: containerize from day one, or it will be difficult to refactor later on. k8s is just one tool to help with containerization.

                                Articles that proclaim “one true way” are a reflection of the author’s inexperience in the software world.