1. 40

  2. 6

    This is actually a solid tutorial. I’ve recently been thinking about whether Kubernetes might suit us after all, because what we do in practice never seemed to fit Kubernetes very well: instances of our app that need to run on separate infra, often even in different regions, currently running on single servers. Translating to Kubernetes pretty much seemed like we would have to map each of those servers we have now to individual Kubernetes clusters.

    But managing a bunch of Kubernetes clusters doesn’t seem any worse than managing a bunch of individual servers? And if we’re already running on single servers, we could turn them into single-node Kubernetes clusters for roughly the same price, with GKE masters being free.

    GCE definitely has a pricing advantage here. We’re an AWS shop, but EKS is priced at $0.20 per hour for the master, on top of your node costs. That’s instantly ~$150 per month added to your bill.

    1. 2

      Translating to Kubernetes pretty much seemed like we would have to map each of those servers we have now to individual Kubernetes clusters.

      You can assign pods to specific nodes in a single Kubernetes cluster quite easily: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/
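
      For example, a minimal sketch using a nodeSelector (the label key/value here is made up; you’d label the node yourself first):

      ```yaml
      # Pin a pod to nodes carrying a custom label.
      # First: kubectl label nodes my-node-1 workload=customer-a
      apiVersion: v1
      kind: Pod
      metadata:
        name: app
      spec:
        nodeSelector:
          workload: customer-a   # hypothetical label, e.g. one per customer
        containers:
        - name: app
          image: nginx:1.25
      ```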

      1. 2

        By default some metadata is associated with each node, for example the region and availability zone. Using that information you can provide an affinity to target only a certain region, or make sure pods are distributed across availability zones.

        You can also add custom taints to nodes, and then add a toleration to a pod to make sure it runs where you want it to.
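
        A rough sketch of both ideas (the zone values and the taint are made-up examples; recent clusters expose the topology.kubernetes.io/zone label, older ones used failure-domain.beta.kubernetes.io/zone):

        ```yaml
        apiVersion: v1
        kind: Pod
        metadata:
          name: zonal-app
        spec:
          affinity:
            nodeAffinity:
              requiredDuringSchedulingIgnoredDuringExecution:
                nodeSelectorTerms:
                - matchExpressions:
                  - key: topology.kubernetes.io/zone
                    operator: In
                    values: ["us-central1-a", "us-central1-b"]
          tolerations:
          - key: dedicated     # matches: kubectl taint nodes my-node dedicated=batch:NoSchedule
            operator: Equal
            value: batch
            effect: NoSchedule
          containers:
          - name: app
            image: nginx:1.25
        ```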

        At Datadog we built a custom controller (similar to the ip sync code in the blog post) which when handed a custom resource definition would create a nodepool with the requested constraints (node types, SSDs, etc), thus allowing developers to also describe the hardware requirements for their application.
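
        Something like this hypothetical custom resource, purely illustrative (not Datadog’s actual schema):

        ```yaml
        apiVersion: example.com/v1    # made-up API group
        kind: NodePoolClaim           # hypothetical CRD kind
        metadata:
          name: ssd-pool
        spec:
          machineType: n1-standard-8  # constraints the controller would
          localSSD: true              # translate into a real nodepool
          minNodes: 1
          maxNodes: 10
        ```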

        Paired with the horizontal pod autoscaler and the cluster autoscaler you can go a long way to automating fairly sophisticated deployments.
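
        For instance, a sketch of a HorizontalPodAutoscaler scaling a hypothetical Deployment named app on CPU (autoscaling/v2 on current clusters); the cluster autoscaler then adds nodes when pending pods no longer fit:

        ```yaml
        apiVersion: autoscaling/v2
        kind: HorizontalPodAutoscaler
        metadata:
          name: app
        spec:
          scaleTargetRef:
            apiVersion: apps/v1
            kind: Deployment
            name: app              # hypothetical deployment to scale
          minReplicas: 2
          maxReplicas: 20
          metrics:
          - type: Resource
            resource:
              name: cpu
              target:
                type: Utilization
                averageUtilization: 70   # scale out above 70% average CPU
        ```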

        1. 1

          But everything I can find about Kubernetes (in the cloud) says that you start it in a single region. Am I missing something? Can you span a Kubernetes cluster across multiple regions, or somehow treat several clusters as one large one?

          1. 2

            Yeah, that’s true. I think the etcd latency wouldn’t play well with a multi-region setup.

            You could still label the nodes and apply the same config to several Kubernetes clusters; in the clusters without matching nodes, the workload just wouldn’t run.

            Of course, then you’re going to have the issue that services in one cluster need to talk to services in another. Kubernetes has federation support, but I hear it’s a work in progress. Istio might be worth a look though.

      2. 4

        I love k8s because I love just having a bunch of YAML files I can apply and have it work, but GKE’s pricing for 4 cores and 8 gigs of RAM was like 2 or 3 billion dollars a month I think, so I went back to crappy scripts and DigitalOcean. I really hope DO’s Kubernetes offering ends up being good, because using Kubernetes is wonderful, but administering it isn’t something I want to do for little side projects.

        1. 3

          You could also use Typhoon if you want something better than scripts. It also supports DO.

          1. 1

            A 3-node (n1-standard-1) Kubernetes cluster is ~$72/month. You can even get a 1-node k8s cluster, but then you don’t get all the benefits discussed in the OP. And although 3 nodes is still a light cluster, it gets you benefits you wouldn’t have with 3 crappy servers managed by configuration management (although those would still be cheaper).
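
            For reference, that ~$72 lines up with n1-standard-1 list pricing of roughly $0.0475/hour: about $35/month each, or ~$24/month with the full sustained-use discount, so 3 × ~$24 ≈ $72.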

            1. 1

              Google has a sustained-use discount. I think a 4-core, 15 GB machine is $100/mo. So on the low end it’s cheaper than DigitalOcean, but the price ramps up quickly for more computing power. (Also, preemptible nodes are cheaper if you can live with your server disappearing every day.)

              I suppose it depends on what you’re trying to do. Their burst instances work well for web apps, especially if you can cut down on memory usage.

              Some competition from digital ocean would be great. I’d probably switch if the price were competitive.

            2. 2

              I’m still a bit worried about the difficulty of “downscaling” Kubernetes to a single node, or to a simple 3-node cluster with low-end machines.

              The GKE documentation says 25% of the first 4 GB of memory is reserved for GKE: https://cloud.google.com/kubernetes-engine/docs/concepts/cluster-architecture#memory_cpu
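
              (On an n1-standard-1 with 3.75 GB of RAM, that rule alone would set aside roughly 0.94 GB, leaving about 2.8 GB allocatable for pods.)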

              What is your experience regarding this?

              Edit: Sorry Caleb, I discovered your post here just after having emailed you with the same question :-)

              1. 4

                Just started reading the introductory docs and encountered this:

                2 GB or more of RAM per machine. Any less leaves little room for your apps.

                So, the Kubernetes software uses 2 GB for its own needs? That’s a huge amount of memory. I remember php+mysql wikis and forums running on VMs with 128 MB without problems, including the OS and database.

                1. 3

                  I haven’t tested it myself, but I remember having read this in Kubernetes docs. This is what gave me cold feet…

                  I’d be surprised if a regular node always had this memory requirement. I mean, how would people build k8s Raspberry Pi clusters then?

                  1. 1

                    I’m confused too. I’m wondering whether the requirement in the GKE docs isn’t really about optional features like Stackdriver and the Kubernetes dashboard. I haven’t had the time to test it myself. Curious if someone here knows more about this?

                  2. 3

                    This would only be for the master nodes, which are provided for free on GKE.

                    On several machines that I have it’s more like ~400 MB, which includes kube-proxy (reverse-proxy management for containers), flannel (network fabric), and kubelet (container management). That can seem huge, but it buys you guarantees and primitives that a php+mysql wiki could use to be easily deployable, and hopefully more resilient to underlying failures.

                    1. 1

                      I haven’t tested it myself, but I remember having read this in Kubernetes docs. This is what gave me cold feet…

                    2. 1

                      This would only be for the master nodes, which are provided for free on GKE.

                      1. 2

                        Are you sure? The part of the doc I linked is specifically about the cluster nodes and not the master.

                        1. 1

                          Sorry, wrong thread… It does reserve 25%, which is a safe bet to ensure that the kube-system pods run correctly.