1. 19
  1.  

  2. 12

    But, what do you gain from running this on k8s? It doesn’t seem to be less administration, and it’s maybe a bit more complex to set up and keep up to date. A VPS provider that offers redundancy (booting the VPS on another node via shared storage when a hardware node fails) would offer the same in this case, since there is no auto scaling or clustering. Or am I missing something?

    1. 4

      The main thing I get is that it’s slightly easier to test IRC bots on my Kubernetes cluster. I just have them connect to ircd.ircd.svc.cluster.local:6667. Otherwise, there’s not really any point in this other than to prove to myself that k8s is generic enough to host a git server, my web apps, Discord/IRC bots and an IRC server.
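
      That hostname is just the cluster DNS name for a plain ClusterIP Service named ircd in the ircd namespace, roughly like this (the label and port details here are illustrative, not my exact manifest):

      apiVersion: v1
      kind: Service
      metadata:
        name: ircd
        namespace: ircd
      spec:
        selector:
          app: ircd          # matches the label on the ircd pods (illustrative)
        ports:
          - name: irc
            port: 6667
            targetPort: 6667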

      I can also update the config by updating 02_secrets.yml and rehashing the ircd. The big thing for me is that I don’t have to write the code that updates the configuration on disk; it’s just there for free.
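
      The mechanism is just a secret mounted as a volume, so updated keys eventually show up on disk without any extra code; a rough sketch (the secret name, key, and mount path here are illustrative):

      apiVersion: v1
      kind: Secret
      metadata:
        name: config
        namespace: ircd
      stringData:
        ngircd.conf: |
          # ... server configuration ...

      # and in the pod spec, mount it where ngircd expects its config:
      volumes:
        - name: config
          secret:
            secretName: config
      containers:
        - name: ircd
          volumeMounts:
            - name: config
              mountPath: /etc/ngircd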

      In theory I could also make this support autoscaling, but I haven’t dug deep enough into ngircd to find out if that’s possible or not.

      Altogether, yes this is kind of dumb, but it works. It was also a lot easier to set up than I thought it would be. The stuff I learned setting this up will become invaluable when I set up a gopher interface for my website (mostly for stripping the PROXY protocol header).

      1. 2

        Nice write-up!

        A few notes on the K8s config:

        For workloads where you want exactly one instance, a StatefulSet is a better fit than a Deployment. By default, a Deployment creates the new ReplicaSet and pod before shutting down the old one, which could be confusing to users. Better to be down completely.
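
        Something like this (names and image are illustrative), which terminates the old pod before starting its replacement:

        apiVersion: apps/v1
        kind: StatefulSet
        metadata:
          name: ircd
        spec:
          serviceName: ircd   # the Service the StatefulSet attaches to
          replicas: 1
          selector:
            matchLabels:
              app: ircd
          template:
            metadata:
              labels:
                app: ircd
            spec:
              containers:
                - name: ircd
                  image: ngircd   # placeholder; use whatever image the post uses

        (Setting a Deployment’s update strategy to Recreate would also avoid the two-pods-at-once window, at the cost of the StatefulSet’s stable identity.)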

        WEBIRC_PASSWORD could load its value from a secret:

        - name: WEBIRC_PASSWORD
          valueFrom:
            secretKeyRef:
              name: config
              key: webirc_password
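
        This assumes the secret carries that key, e.g. (values here are placeholders):

        apiVersion: v1
        kind: Secret
        metadata:
          name: config
        stringData:
          webirc_password: changeme   # placeholder value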
        
      2. 4

        But, what do you gain from running this on k8s?

        Experience. The author runs a service on a platform (k8s) that guarantees fast recovery if a node, or the service, fails for some reason. At the same time they serve, hopefully, a large number of users, so they can see how the service behaves under stress and load, and even if it fails, it is not the most critical thing (unless you sell IRC services).

        So it is a nice exercise.

      3. 2

        Interesting write-up. But I’d have expected that, if you’re doing all the work on k8s anyway, you might want to set up two instances with at least DNS failover (i.e. two separate instances joined to the same network) to better support rolling updates when patching?

        1. 2

          I’ll look into that for the future. The main problem there is that ircd is a fickle thing when it comes to server linking.
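
          If I do go that route, ngircd links servers with [Server] blocks in the config, so it would just be more keys in the same secret; roughly (server names, hostnames, and passwords here are placeholders, not real values):

          stringData:
            ngircd.conf: |
              [Server]
              Name = irc2.example.com
              Host = ircd-2.ircd.svc.cluster.local
              Port = 6667
              MyPassword = link-password
              PeerPassword = link-password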