1. 22
    1. 6

      Why do you need containers for running a single binary? I suppose it would make sense if everything else in the stack is running in containers and you want to create isolated networks, but apart from that, idk. Perhaps someone could enlighten me?

      1. 5

        I can think of two reasons. The first, as you say, is that if you already have infrastructure for deploying containers, making things look like containers is useful. The second is that there’s often more to a system than just the program. You probably have configuration and data. The configuration is easy to add as a new container layer. The data may be stored in a separate service (e.g. accessed via the network), but if it’s filesystem based, the container orchestration tools provide a good way of attaching filesystem trees to containers.
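
        As a concrete sketch of both of those (the image names and paths here are made up, not from any real setup), the config becomes one extra COPY layer on top of the app image, and the data is a filesystem tree the runtime attaches when the container starts:

          # Dockerfile: bake environment-specific configuration in as a new layer
          FROM registry.example.com/myorg/app:1.2.3
          COPY prod-config.yaml /etc/app/config.yaml

          # at run time, attach a host directory as the app's data tree
          docker run -v /srv/app-data:/var/lib/app registry.example.com/myorg/app-prod:1.2.3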

        There’s also a bad reason: security. People often conflate containers (a distribution and orchestration model) with a specific isolation mechanism. On Linux, the latter is a mess of cgroups, namespaces, and seccomp-bpf that keeps having security holes. On most cloud platforms, it’s VM isolation, because the shared-kernel model doesn’t give sufficiently strong guarantees for tenant isolation.

        There’s also a silly but real argument: cost. A lot of cloud providers have container billing that does finer accounting of memory and CPU usage than their IaaS (VM) services and so running a program on their container infrastructure is cheaper than running it on a rented VM.

        1. 4

          Security is a really good reason. The “security holes” you’re talking about are kernel exploits - not enough for tenancy, certainly, but definitely nice given that putting something in a container is virtually free.

          That said, it’s worth noting that this is a build tool.

          1. 3

            For “security”, you don’t need containers; you can get that on Linux without all the fuss. Just deploy the binary and a systemd unit for your service (which, for a single binary, can even be embedded in the binary itself and installed with a single command) and you are good to go. Much less machinery, and systemd can also give you some hardening features that not all container runtimes provide.
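
            Something like this unit file (a rough sketch: the directives are standard systemd hardening options, the service and binary names are made up) gets you most of the sandboxing people reach for containers for:

              [Unit]
              Description=my_binary service

              [Service]
              ExecStart=/usr/local/bin/my_binary
              DynamicUser=yes
              NoNewPrivileges=yes
              ProtectSystem=strict
              ProtectHome=yes
              PrivateTmp=yes
              PrivateDevices=yes
              RestrictAddressFamilies=AF_INET AF_INET6 AF_UNIX

              [Install]
              WantedBy=multi-user.target

            Drop it in /etc/systemd/system/my_binary.service, then systemctl enable --now my_binary.service and you’re done.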

            1. 1

              If you want to use systemd, go for it. Obviously there’s nothing you can do with a container that can’t be done natively - but if you’re already using containers there’s some good stuff that you get “for free”.

            2. 1

              Putting something in a container and checking the ‘isolation’ box on a cloud provider (gVisor on GCP and IIRC Firecracker on AWS) is a lot easier than managing Linux hosts and configuring all of the security/isolation stuff yourself.

          2. 2

            Security is a really good reason. The “security holes” you’re talking about are kernel exploits

            They are sometimes kernel exploits, they are sometimes misconfigurations in policy. For example, there was one that I recall that was caused by the fact that the Docker seccomp-bpf policy didn’t default to deny and so a kernel upgrade added a system call that allowed a container escape. Sometimes they’re exploits but, importantly, they’re exploits relative to a threat model that 99% of the kernel code was never written to consider. The Linux kernel still doesn’t really have a notion of a jailed process (unlike the FreeBSD or Solaris kernels) and so you are reliant on the kernel enforcing isolation built out of a set of distinct subsystems that were not designed together and where any kernel subsystem may forget a check.

            but definitely nice given that putting something in a container is virtually free.

            You might want to run some benchmarks before deciding that it’s free. Depending on the workload, it can be as much as a 20% perf hit to run via runc on Linux versus running in the root namespace with no seccomp-bpf policy or custom cgroup. For other workloads, the overhead is close to zero. The overhead can be even worse depending on the filesystem layering mechanism that your container platform is using (some of the union-based drivers can have a huge impact on anything with a moderately large disk I/O component).

            1. 3

              they are sometimes misconfigurations in policy.

              Sure. These are increasingly rare though.

              Sometimes they’re exploits but, importantly, they’re exploits relative to a threat model that 99% of the kernel code was never written to consider

              I don’t really agree. The kernel has long had a security model of “unprivileged users should not be able to escalate privileges”. It has not had “privileged users should not be able to escalate to kernel” until much more recently.

              I don’t know what notion of jailed you want but namespaces certainly seem to fit the bill. They’re a security boundary from the kernel that applies to a namespaced process.

              Depending on the workload, it can be as much as a 20% perf hit to run via runc on Linux versus running in the root namespace with no seccomp-bpf policy or custom cgroup.

              Source?

              I think the point here is that, yes, the Linux kernel is a security trashfire, but I think you are underestimating the effort to escape a sandbox. Building a reliable kernel exploit, even for an nday, can be weeks or months of work.

        2. 2

          For a lot of stuff you could use a wide range of tools. For example Nomad’s (isolated) exec driver.

          Regarding security: running Go binaries with pledge and unveil is really easy and great.
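
          For reference, a rough sketch of what that looks like via golang.org/x/sys/unix (OpenBSD-only; the promise strings and paths are example values you’d tailor to your program, and the wrapper names are as I recall them from that package):

            //go:build openbsd

            package main

            import (
                "log"
                "net/http"

                "golang.org/x/sys/unix"
            )

            func main() {
                // make only the data directory visible to the process, read-only
                if err := unix.Unveil("/var/www/data", "r"); err != nil {
                    log.Fatal(err)
                }
                if err := unix.UnveilBlock(); err != nil { // lock out further unveils
                    log.Fatal(err)
                }
                // restrict the process to stdio, file reads and IPv4/IPv6 sockets
                if err := unix.PledgePromises("stdio rpath inet"); err != nil {
                    log.Fatal(err)
                }
                log.Fatal(http.ListenAndServe(":8080", http.FileServer(http.Dir("/var/www/data"))))
            }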

          I usually run it with a simple rc-script like this, just replacing my_binary:

          #!/bin/ksh
          
          daemon="/usr/local/bin/my_binary"
          
          . /etc/rc.d/rc.subr
          
          rc_start() {
                  ${rcexec} "${daemon} ${daemon_flags} 2>&1 | logger -t my_binary &"
          }
          
          rc_cmd $1
          

          There’s also a silly but real argument: cost. A lot of cloud providers have container billing that does finer accounting of memory and CPU usage than their IaaS (VM) services and so running a program on their container infrastructure is cheaper than running it on a rented VM.

          This is not always true though, because these finer-grained accounting services often carry a higher unit price of their own, so it really depends on utilization.

      2. 3

        Likely it’s for people who are running Kubernetes, so everything has to be a container.

      3. 3

        Binaries are not a deployable unit. Containers are.

        1. 6

          I sort of see your point, but I’m inclined to argue the contrary. Statically linked binaries essentially are a deployable unit. Maybe you’d argue that containers can bundle configuration, but so can binaries. Maybe you’d make some distinction about “not needing to recompile the binary to change configuration”, but you still need to rebuild the container, which is the more expensive part by far (for a Go app, anyway), even with a decent hit rate on the build cache. There’s no fundamental difference between compiling a binary and running a Docker build, except that the latter is wayyyyy more complex and expensive (in most cases, you need a daemon installed and running rather than just a compiler/toolchain).

          Containers are great for apps that can’t easily be distributed as a static binary (particularly when it would be very large, since container layers are individually addressable/cacheable) or for cases where you’re deploying a static binary in a containerized environment (e.g., your org uses k8s), but a single binary absolutely is a unit of deployment.
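
          For what it’s worth, the “compiling a binary” side of that comparison really is a one-liner for a pure-Go app (paths are placeholders):

            CGO_ENABLED=0 go build -o bin/app ./cmd/app
            file bin/app   # should report a statically linked executable

          No daemon, no build context upload, no layer cache to reason about.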

          1. 0

            It isn’t though. Show me the cloud service that allows me to deploy a single Linux binary.

            1. 2

              What cloud providers support isn’t useful for answering the question. One can easily imagine a cloud service that takes a static ELF file and drops it into a Firecracker VM. The reason this doesn’t exist (to my knowledge) is that their customers are typically deploying apps in languages that don’t have a good story for static ELF binaries (and the ones that do can just wrap their binaries in a container), not because ELF isn’t a deployment format.

        2. 2

          Updating code in a Lambda with a zipped binary is significantly faster than with a binary in a container.

      4. 1

        Typically containers run one app each anyway, but in my experience it’s just generally nice to have one unified format to build, distribute and run server-side software. I can build an image on my Windows computer, throw it onto my Mac and it works with zero changes; then I can push that same image up to our hosting provider and it runs there too, and we can switch hosting providers (which I have done a few times) with minimal effort. Sure, under the hood you’ve got EXE, ELF, DMG, PKG, etc. on all the various operating systems, but when it comes to getting production code live, containers really do make life easier!

      5. 1
        • Containers have become like universal server executables.
        • It requires less work to run a container as a Google Cloud Run instance than an executable, source tarball or a repository.
        1. 2

          Agreed, but I still think it would be cool if orchestrators had decent support for ELF files. I’m pretty tired of creating all of these single-binary containers just because there’s no way to run binaries without taking on the full burden of managing one’s own hosts.

          1. 2

            That’s a sensible requirement. How hard could it be for the hosting providers to detect an EXE / ELF file and just wrap it inside a stereotypical container? I’d think it’s something close to a five-line shell script.
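
              Roughly, yes. An untested sketch of that wrapper (it relies on docker build reading the Dockerfile from stdin, and assumes the binary path sits inside the current directory so COPY can see it):

                #!/bin/sh
                # wrap a static binary in a minimal scratch image
                bin="$1"
                docker build -t "$(basename "$bin")" -f - . <<EOF
                FROM scratch
                COPY $bin /app
                ENTRYPOINT ["/app"]
                EOF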

    2. 4

      This seems like a big improvement over having to install the Docker client. Are there other similar tools?

      1. 3

        Check out https://ko.build/advanced/limitations/; they list some alternatives.

      2. 1

        I don’t know if that’s really what you asked, but if you also don’t want to have Docker where the code is deployed, HashiCorp’s Nomad supports other methods. For example, it can deploy an isolated binary, but also JVM applications, and it’s even extensible via plugins. So there are a lot more options.

      3. 1

        I like using nixpkgs’ dockerTools.streamLayeredImage to build the images, which you can then pipe into docker load or whatever.
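
        In case it helps anyone, a minimal example along those lines (using pkgs.hello as a stand-in for your own derivation):

          # image.nix
          { pkgs ? import <nixpkgs> {} }:
          pkgs.dockerTools.streamLayeredImage {
            name = "hello";
            tag = "latest";
            config.Entrypoint = [ "${pkgs.hello}/bin/hello" ];
          }

        Then nix-build image.nix && ./result | docker load; the build result is a small script that streams the image tarball to stdout rather than writing it to the Nix store.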

        1. 1

          I love the concept. Still haven’t had any luck packaging even simple Go apps with Nix (IIRC there was a Mac-specific bug in some of the buildGoModule stuff, but I might be misremembering). Every time I touch Nix I go down a rabbit hole and never actually accomplish the seemingly-simple thing I set out to do.

          1. 1

            Which is exactly why I bailed from it after 4-5 such sagas. Why couldn’t they just use a subset of Lua or Python? Or anything that’s easy to learn in an hour? No, it had to be a Haskell wannabe…

            Makes me cringe super hard when I remember that I used to strongly advocate for language syntax. Gods, I would slap myself to Earth’s orbit if I could go back.

            But yeah, Nix has some pretty solid ideas, but it seems like they are not quite ready for wide adoption. Leaky abstractions, a non-intuitive language, and gaps in the ecosystem prevent me from trying to use it 24/7 for now.

            1. 1

              I feel this, but Nix the language isn’t even the hardest part; it’s just extra friction. The hard part in my experience is tracing things through nixpkgs (no types, minimal docs, not-intuitive-to-me directory structure, terse naming conventions, etc.), but there’s a long tail of friction-y things like the confusing CLIs, the docs which tell you to use one thing and the community who tell you to use a different thing, blogs that post snippets that work for NixOS but not vanilla Nix on Linux or macOS (or vice versa), etc. Strongly agree on the fundamental vision of Nix, but there’s so much practical stuff holding back the idealist academic core ideas. I am rooting for it, but from the outside it feels like it needs a culture shift or a governance shift or something so this stuff gets more of the focus (I’m occasionally told that the community is already prioritizing this stuff, but every few years when I check in, things are still pretty friction-y).

    3. 3

      We use ko and love it in production! Extremely nifty. A Docker image in about the same time it takes to build a static Go binary. Lightning fast and extremely lightweight.
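
      For anyone who hasn’t tried it, the basic flow is roughly this (the repo is a placeholder; check ko’s docs for the exact flags):

        export KO_DOCKER_REPO=ghcr.io/example/my-project
        ko build ./cmd/myapp   # compiles the Go binary and publishes a small image for it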

      1. 1

        I’m not seeing anything about building scratch containers on their website. Any idea if this is supported, and if not, why not?

        EDIT: In retrospect, I remember now that scratch containers still usually need an /etc/passwd file (if not running as root) and SSL certs, which is presumably why a base image is needed?

        1. 2

          I was doing this, but it was more trouble than it was worth, as cgr.dev/chainguard/static or distroless static is a few KB and works for more workloads. See https://github.com/ko-build/ko/blob/main/docs/configuration.md#overriding-base-images
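
          For reference, that override is a one-liner in ko’s config file, something like this (see the linked doc for the authoritative syntax):

            # .ko.yaml
            defaultBaseImage: cgr.dev/chainguard/static:latest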

    4. 3

      I don’t see anything about Podman. Did anyone try with it symlinked to /bin/docker?

      1. 3

        If I understand it correctly, this tool doesn’t use Docker, so that should not matter.