1. 23
  1. 6

    This is a really neat write-up!

    I’ll admit I’ve been rather avoiding Kubernetes and am just barely beginning to get cozy with things like docker-compose and the like, and this article is making me think I should reconsider that choice!

    1. 6

      I recommend looking into HashiCorp’s Nomad

      1. 1

        I adore Hashicorp software, but it would depend upon the goal of working with k8s, wouldn’t it?

        If the goal is to deploy a technology as a learning experience because it’s becoming an industry standard, as awesome as I’m sure nomad is, it’s not going to fit the bill I’d think.

        I’m still blown away all these years later by Terraform and Consul :) Those tools are just amazing. True infrastructure idempotence, the goal that so many systems have just given up on entirely.

        1. 4

          To be clear: if your goal is to learn k8s (which is fine; it’s a very marketable skill right now, and I’m 100% empathetic with wanting to learn it for that reason), then I think it makes sense. But for personal use, Nomad’s dramatically simpler architecture and clean integration with other HashiCorp projects is really hard to beat. I honestly even use it as a single-node instance on most of my boxes simply because it gives me a cross-platform cron/service worker that works identically on macOS, Windows, and Linux, so I don’t need to keep track of systemd vs. launchd vs. Services Manager.
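For the curious, the single-node cron-replacement use described above looks roughly like this as a Nomad job. This is a minimal sketch; the job name, schedule, and script path are made up for illustration:

```hcl
# Hypothetical periodic batch job, standing in for a cron entry
# on a single-node Nomad agent.
job "nightly-backup" {
  datacenters = ["dc1"]
  type        = "batch"

  periodic {
    cron             = "0 3 * * *" # every night at 03:00
    prohibit_overlap = true
  }

  group "backup" {
    task "run" {
      driver = "raw_exec"

      config {
        command = "/usr/local/bin/backup.sh" # placeholder script
      }
    }
  }
}
```

The same job file runs unchanged on macOS, Windows, and Linux, which is the cross-platform appeal being described.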

      2. 4

        Don’t, just don’t… I’m trying to avoid k8s in my homelab to reduce the overhead, since I don’t have a cluster or need any k8s feature that’s missing from a simple docker(-compose) setup.

        1. 5

          It depends on what you call your “lab”. A couple of years ago I realized that there’s only one way I master things: practice. If I don’t run something, I forget 90% of it in 6 months.

          My take on the homelab is to use as much overhead as possible. I run a bunch of static sites, an S3-like server, dynamic DNS and not much else, yet I use more stuff/overhead to run it than obviously necessary.

          The thing is, I’ve reached a point where more often than not, I’m using the knowledge from the lab at $WORK, even recycling some stuff such as Ansible roles or Kubernetes manifests.

          1. 6

            I believe this to be the differentiation between a homelab and “selfhosted services”. The purpose of a homelab is to learn how to do things. The purpose of selfhosted services is to host useful services outside of learning time. That is not to say that the two cannot intersect, but a homelab, in my opinion, is primarily for learning and breaking things when it doesn’t affect anything.

            1. 2

              Yup, I think this is the key.

              I’m already using docker-compose for my actual self-hosted services because it’s simple and easy for me to reason about, back up the configuration of, and so on.
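The appeal here is that the whole service boils down to one file you can version and back up. A minimal sketch, with a placeholder image, port, and paths:

```yaml
# Illustrative docker-compose.yml; everything here is an example.
version: "3"
services:
  wiki:
    image: nginx:alpine
    restart: unless-stopped
    ports:
      - "8080:80"
    volumes:
      - ./site:/usr/share/nginx/html:ro  # backing up ./site + this file = full backup
```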

            2. 3

              Agreed, it certainly comes with a rather large overhead. I use Kubernetes at work and rather enjoy it, so it’s great having a lab environment to try things out in and learn. That’s why I bother hosting my own cluster.

            3. 3

              I started with docker-compose as I began to learn containerized tech, but transitioned to Kubernetes because the company wanted to use it for prod infrastructure. I actually found that K8s is more consistent and easier to reason about. There are a lot of concepts to learn, but they hang together.

              Except PersistentVolumeClaims.
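For anyone else tripped up by PersistentVolumeClaims, the moving parts fit together like this: a claim requests storage, the cluster binds it to a PersistentVolume (often dynamically provisioned via a StorageClass), and a pod mounts the claim by name. A minimal sketch with made-up names and sizes:

```yaml
# Hypothetical PVC: asks the cluster for 1Gi of single-writer storage.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
# A pod mounts the claim via a volume that references it by name.
apiVersion: v1
kind: Pod
metadata:
  name: demo
spec:
  containers:
    - name: app
      image: nginx:alpine
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: data
```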

              1. 2

                Thank you for reading. I’m glad you enjoyed it :)

                I’ll say, picking up Kubernetes at home is a good choice if it’s something you want to learn. It’s really useful to have a lab environment to try things out and build your knowledge with projects.

              2. 4

                What’s your storage like?

                I believe we’re pursuing the same goal: I’ve been building cloud-like infrastructure at home for the last ~3 months, and our setups are kind of similar.

                I run MetalLB on top of Kubernetes, Kubernetes on top of Proxmox VMs, and Proxmox on top of 3 NUCs. Storage is provided by a Ceph cluster that runs on those 3 NUCs and seamlessly connects to Proxmox and Kubernetes.
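The MetalLB layer of a setup like this needs very little configuration. Older releases take a ConfigMap like the one below (newer versions use IPAddressPool custom resources instead); the address range is a placeholder for whatever slice of the home LAN you set aside:

```yaml
# Illustrative legacy MetalLB config: hand out LoadBalancer IPs
# from a reserved range via layer-2 (ARP) announcements.
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
      - name: default
        protocol: layer2
        addresses:
          - 192.168.1.240-192.168.1.250
```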

                It’d be nice to hear how you run yours and to share our findings.

                1. 2

                  May I ask why you run Kubernetes on top of Proxmox instead of bare metal? Just curious

                  1. 2

                    Great question! The answer is: flexibility.

                    Try running WireGuard on Kubernetes. Yeah, you can, but it’s an antipattern. The same goes for if and when I want to get into Microsoft environments.

                    Some stuff can’t run on Kubernetes, and I didn’t want to give up that flexibility. The overhead is minimal; Proxmox VE is just KVM with a clustering engine and a web interface.

                    1. 1

                      Thanks!

                  2. 1

                    I’ve been looking into doing something similar to what you’re describing here, and I’m curious about the storage side as well. How much storage are you able to squeeze into the NUCs doing this?

                    1. 2

                      Right now you can probably fit 4-5TiB of M.2+SATA storage into a single NUC without breaking the bank. Times 3 if you run 3 nodes. A 3-node Ceph cluster’s replication/erasure coding will reduce that by ~20-40%, depending on your parameters.

                      Right now, my 3 NUCs provide a Ceph cluster with 1TiB SATA SSD each, for a raw total of 3TiB. Usable storage must be around ~1.8TiB based on my configuration. I don’t need much more to be honest, and I can always add more nodes if it gets tight.
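The capacity arithmetic above works out as a quick back-of-envelope calculation. The ~1.8TiB usable figure sits between plain size=2 replication and a 2+1 erasure-coded pool, which would be consistent with a mixed or tuned configuration; all numbers below are illustrative:

```python
# Back-of-envelope Ceph capacity math; every number here is an example.

def usable_tib(raw_tib: float, replica_size: int) -> float:
    """Usable capacity of a replicated pool: each object is stored
    replica_size times, so divide raw capacity by the replica count
    (ignoring metadata overhead and rebalancing headroom)."""
    return raw_tib / replica_size

def usable_tib_ec(raw_tib: float, k: int, m: int) -> float:
    """Usable capacity of an erasure-coded pool with k data chunks
    and m coding chunks per object."""
    return raw_tib * k / (k + m)

raw = 3 * 1.0  # three nodes with one 1 TiB SSD each

print(usable_tib(raw, 2))        # size=2 replication -> 1.5 TiB
print(usable_tib_ec(raw, 2, 1))  # EC 2+1 -> 2.0 TiB
```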

                      My setup goes against many of Ceph’s best practices, but I’m lacking the hardware to run a perfect cluster. My replication levels are lower than they should be, my network is 1GbE, etc.

                      That said, I’m very happy with it. I haven’t had any trouble for months, it’s fast enough, it provides some sort of high availability, and its interfaces connect natively to Proxmox VE and Kubernetes, among others.

                      If you want extra resources, Mastering Ceph is great; I binged through it when setting up the cluster.

                      1. 1

                        Mind if I ask what you paid for that hardware in total? Would 3 NUCs be around $600? Plus another few hundred for storage? Or are you sourcing used gear substantially cheaper than that?

                        1. 2

                          More like ~1400€, as I maxed out the RAM and the SSDs are Samsung EVOs. I use those nodes not only for storage but also for virtualization.

                          :$

                          1. 1

                            Ah cool! Which hypervisor are you using? ~1.5 grand doesn’t feel unreasonable for a home lab with 3 physical nodes and a generous helping of storage and RAM.

                            1. 2

                              Proxmox VE cluster atop the 3 NUCs, 96GB RAM total, right now at ~50% saturation.

                              1. 1

                                I’m looking at options at the moment. If you were setting the lab up again today would you still use Proxmox VE? Good experience overall?

                                1. 1

                                  If I had enough time, I would’ve loved to use OpenStack (I haven’t used it yet, but it’s a skill I want to acquire), but it was blocking me for too long. Otherwise, I would use Proxmox VE again.

                                  Its design is simple enough that I can run it atop a bunch of Debian servers while running other stuff (such as Ceph).

                                  1. 1

                                    > Otherwise, I would use Proxmox VE again

                                    Thanks for the rec!

                                    > run it atop a bunch of Debian servers

                                    Am I reading wrong, or are you saying Proxmox runs inside Debian? I got the impression from the website that it was its own distro.

                                    1. 1

                                      The distro they distribute is just Proxmox VE installed atop a pre-configured Debian.

                                      You can actually have Debian installed first, then install Proxmox on top of it: https://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_Buster
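A sketch of that Debian-first path, following the linked wiki page; these are the repository and key names for Debian 10 “Buster” that the page covers, so check the wiki for the commands matching your release before running anything:

```shell
# Add the Proxmox VE no-subscription repository and its signing key
echo "deb http://download.proxmox.com/debian/pve buster pve-no-subscription" \
  > /etc/apt/sources.list.d/pve-install-repo.list
wget http://download.proxmox.com/debian/proxmox-ve-release-6.x.gpg \
  -O /etc/apt/trusted.gpg.d/proxmox-ve-release-6.x.gpg

# Install the Proxmox VE packages on top of the running Debian
apt update && apt full-upgrade
apt install proxmox-ve postfix open-iscsi
```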