1. 51
  1.  

  2. 18

    Definitely way too complicated. Nomad (https://nomadproject.io/) is what we chose because it is so operationally simple. You can wrap your head around it easily.

    1. 7

      I haven’t used either in production yet, but isn’t the use case of Nomad much more restricted than Kubernetes? It covers only the scheduling part and leaves it to the user to define, for example, ingress through a load balancer and so on?

      1. 10

        Yes, load balancing is your problem. Nomad is ONLY a task scheduler across a cluster of machines, which is why it’s not rocket science.

        You say: I need X CPU and X memory, I need these files out on disk (or this Docker image), and run this command.

        It will enforce that your task gets exactly X memory, X CPU, and X disk, so you can’t over-provision at the task level.

        It handles batch (i.e. cron) and Spark workloads, system jobs (run on every node), and services (any long-running task). For instance, with Nomad batch jobs you can almost entirely replace Celery and other distributed task queues, in a platform- and language-agnostic way!
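
        To make that concrete, here’s a minimal sketch of what a job spec looks like (the job/task names, image, and numbers are all hypothetical, and details vary by Nomad version):

        ```hcl
        # Hypothetical service job: "run this Docker image three times,
        # giving each instance this much CPU and memory".
        job "web" {
          datacenters = ["dc1"]
          type        = "service"   # also: "batch" or "system"

          group "app" {
            count = 3               # three instances, bin-packed across the cluster

            task "server" {
              driver = "docker"

              config {
                image = "example/web:1.0"   # placeholder image
              }

              resources {
                cpu    = 500    # MHz
                memory = 256    # MB
              }
            }
          }
        }
        ```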

        I’m not sure I’d say the use-case is much more restricted, since you can do load balancing and all the other things k8s does, but you use specialized tools for these things:

        • For HTTPS traffic you can use Fabio, Traefik, HAProxy, Nginx, etc.
        • For TCP traffic you can use Fabio, Relayd, etc.

        These are outside of Nomad’s scope, except that you can run those jobs inside of Nomad just fine.

        edit: and it’s all declarative, a total win.

        1. 1

          Why not HAProxy for TCP too?

          1. 1

            I don’t actually use HAProxy, so I can’t really comment on whether it does TCP as well; if it does, AWESOME. I was definitely not trying to be limiting, hence the “etc.” at the end of both of those.

            We use Nginx and Relayd.

            1. 2

              It does TCP. See the reliability and security sections of the web site to see why you might want it.
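
              For reference, a bare-bones TCP passthrough in haproxy.cfg looks roughly like this (addresses and names are made up):

              ```
              defaults
                  mode    tcp
                  timeout connect 5s
                  timeout client  30s
                  timeout server  30s

              # forward raw TCP on port 5432 to two backends
              listen postgres
                  bind *:5432
                  balance roundrobin
                  server db1 10.0.0.11:5432 check
                  server db2 10.0.0.12:5432 check
              ```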

              1. 2

                Thanks!

      2. 4

        Oooh, the fact that it’s by HashiCorp is a good sign. I’ll have to read up on this. Thanks!

      3. 12

        I recommend the episode of DevOps Cafe with Kelsey Hightower discussing this complexity.

        The whole idea is that in the enterprise world, processes and workloads are different, no two companies have the same constraints, and k8s answers this with a lot of complexity and flexibility.

        In this episode, John Willis wishes that something as simple as a Docker Compose file were enough to describe applications, a wish that Kelsey answers with a few examples of popular demands that cannot be expressed with a compose file at all.
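
        One such demand, as a made-up illustration of my own (not necessarily one of Kelsey’s examples): “scale this service on CPU load” has no compose-file field at all, but in Kubernetes it’s a small manifest:

        ```yaml
        # Hypothetical autoscaler for an existing Deployment named "web".
        apiVersion: autoscaling/v1
        kind: HorizontalPodAutoscaler
        metadata:
          name: web
        spec:
          scaleTargetRef:
            apiVersion: apps/v1
            kind: Deployment
            name: web
          minReplicas: 2
          maxReplicas: 10
          targetCPUUtilizationPercentage: 70
        ```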

        I strongly recommend the podcast and this episode.

        1. 13

          Link to the episode for the lazy: https://overcast.fm/+I_PQGD1c

        2. [Comment removed by author]

          1. 5

            Nix images that run processes inside sandboxes (chroot + cgroups.)

            Sounds amazing! Do you have a write-up of how you do that?

            1. 3

              At the risk of me-tooism, I’d really like to hear more about that too, if possible :-)

              1. [Comment removed by author]

                1. 1

                  Thanks!

            2. 2

              I’d love to do that but this doesn’t solve the bin-packing of services that I need at scale…

            3. 9

              Maybe I’m just getting old and cranky but it seems like everything takes vastly more work, time, and ceremony than it used to back in the dark ages of pets-not-cattle and mutable infrastructure, with no real improvement in reliability or cost. (I know the customary response is that we’re solving harder problems at greater scale, and of course some folks are, but a lot of us are working on problems and at scales not that different than ones we were working on ten years ago.)

              1. 8

                At the scale I operate at, this is definitely true. I used to run jobs on local university clusters, but eventually moved to Cloud stuff out of necessity. It was the thing everyone was doing, and in the face of declining support for local clusters, it was the best way to quickly spin up a cluster-like computing environment. But recently, I was given access to an old-fashioned university cluster again, with traditional job-submission tools, and it’s been great. I can submit a job that runs across 64 CPUs with almost no configuration. I don’t manage any infrastructure! There are no containers! I love it.
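
                For what it’s worth, roughly all that takes on a SLURM-style cluster (a made-up sketch; I’m assuming SLURM, and the script contents are placeholders) is:

                ```bash
                #!/bin/bash
                #SBATCH --job-name=analysis   # hypothetical job name
                #SBATCH --ntasks=64           # "runs across 64 CPUs"
                #SBATCH --time=02:00:00

                srun ./run_analysis           # placeholder program
                ```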

                1. 5

                  I think containers have been pretty badly pitched in some cases because people end up seeing containers as VMs, when in fact they’re more like virtual filesystems for a program + some isolation. Containers can be extremely lightweight.

                  If you are able to set up the container infrastructure properly, you end up being able to isolate a lot of tricky components and share this configuration. That kind of isolation actually isn’t much of an issue with newer programs (I don’t understand people who put Go programs into containers…).

                  But if you have software that depends on things like the host system’s font stack (PDF rendering, for example), or something that needs to run with an old set of libraries (but you don’t want to pollute the host system with this stuff), containers work extremely well. For certain purposes, the isolation lets you provide (basically) single binaries to get things working and destroy the “works on my machine”-style issues in a lot of scenarios.

                  A bit ironically, containers are great for old software and a lot less useful for newer software.
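
                  As a concrete (made-up) example of the PDF/font case, a container like this freezes the old userland and font stack without touching the host (the image and package choices are illustrative only):

                  ```dockerfile
                  # Sketch: wkhtmltopdf with a known font stack, pinned to an older Debian userland.
                  FROM debian:buster-slim
                  RUN apt-get update && apt-get install -y --no-install-recommends \
                          wkhtmltopdf fontconfig fonts-dejavu-core \
                      && rm -rf /var/lib/apt/lists/*
                  ENTRYPOINT ["wkhtmltopdf"]
                  ```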

                  EDIT: also, re: mutable infrastructure… even when you get to a relatively small setup (like 8 or so servers), it’s extremely easy to start having issues where your configuration is desynced from reality in mutable-infrastructure land. Trying to recover a server’s state after a reboot, only to realise that a one-time fix you did when you first deployed your software 2 years ago got lost in the reboot, is really rough.

                  Kubernetes is complicated for sure, but if you really get into the headspace of something like Salt Stack it can feel nicer. There’s a big learning curve, but after you get it, it can be faster even for a single server.
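
                  Concretely, the kind of “one-time fix” that gets lost on reboot becomes a few declarative lines in a Salt state instead (a minimal sketch with made-up file names):

                  ```yaml
                  # /srv/salt/webserver.sls -- the tuning tweak now lives in version control
                  # and is reapplied on every highstate run, instead of in someone's shell history.
                  nginx:
                    pkg.installed: []
                    service.running:
                      - enable: True
                      - watch:
                        - file: /etc/nginx/conf.d/tuning.conf

                  /etc/nginx/conf.d/tuning.conf:
                    file.managed:
                      - source: salt://nginx/tuning.conf
                      - require:
                        - pkg: nginx
                  ```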

                  1. 3

                    Well, you’ve still got to choose tech appropriate for your problem; that problem will never go away :)

                  2. 7

                    I jumped on the K8s train moderately early, and have since jumped right back off owing to the rapidly accelerating unnecessary extra complexity.

                    I’m sympathetic to the idea that enterprise requires a sometimes bewildering array of configuration options, and that the usual array of systems-screwer-uppers (e.g., Red Hat, IBM) were naturally going to show up to the party to try to increase consultant billing time, but man did that thing get messy and confused in a hurry. It almost makes you sympathize with the Go development philosophy.

                    1. 3

                      It feels like the K8s train replaced the OpenStack train.

                      1. 2

                        Now consider that there are organizations that deploy OpenStack on a hypervisor, then Kubernetes on that OpenStack :)

                      2. 2

                        LOL, I couldn’t agree more. “systems-screwer-uppers”: I hadn’t heard that before. Beautiful turn of phrase!

                      3. 7

                        It totally depends on how you approach it.

                        K8S on GKE is a breeze.

                        K8S on AWS w/ kops is a nightmare, but doable.

                        We ended up taking most of the features of kube away from developers, picked sane defaults for people, and called it a day. You don’t need every spanner in the toolbox, but the guts of K8S (the scheduler, kubectl, etc.) are great if you can separate the wheat from the chaff.

                        1. 10

                          I know this is not a super helpful comment but I semi-believe it: I think k8s at this point is likely just a funnel to GCP/GKE.

                          1. 2

                            Except that Red Hat and IBM are investing heavily in Kubernetes, and they don’t get anything from Google. GKE is Debian.

                            1. 1

                              But they are the go-tos for “we can’t / won’t use Google / cloud” so there’s room on the gravy train for them, too.

                              1. 1

                                I’m not sure how that conflicts with my claim above. If your competitor has a successful funnel, a reasonable strategy is to piggy-back on it. It doesn’t mean my claim is true, just that your statement doesn’t counter it at all.

                          2. 4

                            I’m in agreement that it’s complicated (partially because the problems are complicated and partially because the underlying clouds are complicated), and it’s spot on about the introduced complexity. What I’m not certain of is that it’s better than other container orchestration schemes for most places. I’ve spent about two years working with it daily, but less than a few days test-deploying each of Apache Mesos, Docker Swarm, and HashiCorp Nomad. The number of sharp edges, hard experiences, rounds of reading and re-reading the documentation and the cloud provider’s documentation, re-examining the behavior and then the code, and yelling-and-screaming-at-my-keyboard moments was high. I’ll be interested in seeing how the ease of operationalizing plays out, and how the reality of having to know all the layers below in order to triage and debug turns out for most organizations.

                            1. 3

                              There is still a steep learning curve! But that skill set is now valuable and portable between environments, projects and jobs.

                              I’ve yet to use k8s entirely because it seems like complexity that I do not need right now, but this point seems like a good one to me. Of course every system is still unique, so I’m not sure how far that goes.

                              1. 3

                                Using Nomad + Backplane strikes the right balance of simplicity and sophistication for me.

                                1. 2

                                  Could you share the rationale for using Backplane instead of other solutions?

                                  1. 1

                                    By using Backplane I’m basically outsourcing service discovery and routing to a single centralized service. This drastically simplifies deploying simple HTTP services from (potentially) different platforms: Heroku, DigitalOcean, AWS, etc. I can deploy without “releasing” new versions and slowly migrate traffic from one version to another. There are no security groups or firewalls to configure, thanks to the built-in reverse proxying. It’s a commercial centralized service, but that’s a trade-off I’m willing to make for small-scale projects.

                                2. 3

                                  Yes. 95% of all uses of K8S are plain silly. Like hunting mosquitoes with machine guns.

                                  https://www.youtube.com/watch?v=cZvT3MHpffk

                                  “There’s nothing more dangerous than a half-configured Kubernetes cluster”

                                  Nomad is way better (as one commenter already mentioned), but sometimes I dream of OpenBSD installations so I wouldn’t have to worry about any of this crap.

                                  1. 1

                                    but sometimes I dream of OpenBSD installations so I wouldn’t have to worry about any of this crap

                                    How would OpenBSD change anything on this topic?

                                    1. 1

                                      Mainly that containers are non-existent.

                                      1. 1

                                        I get it now ;-)

                                        Then how do you isolate programs living on the same server from each other, especially when they use different versions of the same dependencies? Do you use something similar to Solaris Zones, FreeBSD jails, or Nix?

                                        1. 2

                                          Yes, those are pretty much the questions I ask myself as well, and then the dreaming stops ;) OpenBSD has techniques that hover around these topics (like vmm and tons of admittedly cool low-level security features like pledge), but nothing really for containerisation (except chroot).

                                          1. 2

                                            I am sad. I hoped you had found something I haven’t, to avoid Docker’s complexities ;-)

                                  2. 3

                                    I was one of the people primarily responsible for setting up Kubernetes at work, and there are definitely pros and cons. Bootstrapping everything, getting networking running, making sure you’re running the right versions, figuring out what flags to set for your api-server and kubelets, etc. is all a lot of trial and error. We were additionally constrained in that we could not use any of the “built-in” clouds such as Azure or GCE, and the documentation is lacking for the from-scratch versions. One of the things that really helped was Kelsey Hightower’s “Kubernetes the Hard Way”: https://github.com/kelseyhightower/kubernetes-the-hard-way.

                                    That said, for developers using kubernetes, it is pretty neat. You get free service routing, namespace abstraction, automatic and configurable rolling deployments, a vault for secrets, and more. Role-Based Access Control gives you fine-grained control over who can access what, where, and how (with the headache that most of the resource names and verbs are not listed anywhere). If you are already using Docker images (in whatever capacity), the jump to kubernetes is a 30-line YAML file and a few commands, and being able to “bundle” containers in pods is one of the better ideas kubernetes got.
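
                                    For a sense of scale, that 30-line jump looks roughly like this (names, image, and ports are hypothetical; apply it with kubectl apply -f web.yaml):

                                    ```yaml
                                    # web.yaml -- minimal sketch: a Deployment plus a Service for routing.
                                    apiVersion: apps/v1
                                    kind: Deployment
                                    metadata:
                                      name: web
                                    spec:
                                      replicas: 2
                                      selector:
                                        matchLabels: {app: web}
                                      template:
                                        metadata:
                                          labels: {app: web}
                                        spec:
                                          containers:
                                            - name: web
                                              image: example/web:1.0   # your existing Docker image
                                              ports:
                                                - containerPort: 8080
                                    ---
                                    apiVersion: v1
                                    kind: Service
                                    metadata:
                                      name: web
                                    spec:
                                      selector: {app: web}
                                      ports:
                                        - port: 80
                                          targetPort: 8080
                                    ```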

                                    Personally, the biggest frustration is the blazing pace of everything. What was best practice yesterday is deprecated tomorrow, in an effort to try everything and settle on the “best” solution. If I could fast-forward Kubernetes development two years and then use it, I think it would be a much smoother experience.