1. 20
  1.  

  2. 6

    Alternatively, there’s Nix’s dockerTools, which produces more minimal images than Docker usually does.
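    As a rough sketch of what that looks like (untested; assuming nixpkgs’ dockerTools and the hello package as a stand-in):

    ```nix
    # default.nix: minimal image sketch using nixpkgs' dockerTools
    { pkgs ? import <nixpkgs> {} }:

    pkgs.dockerTools.buildImage {
      name = "hello";
      tag = "latest";
      # Only the closure of `hello` ends up in the image; no base distro layers
      config.Cmd = [ "${pkgs.hello}/bin/hello" ];
    }
    ```

    Built with nix-build, the result can then be loaded via docker load < result.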

    This is because BuildKit can build multiple stages in parallel.

    Sounds to me like Docker’s finally catching up with what the Nix daemon has been able to do for years. And “catching up” may be generous here. Oof.

    1. 2

      Does Nix provide a way to cache intermediate build artifacts between builds? (Basically, .o-file caching, but especially, for me, in Go and Nim.) I’m a huge fan of Nix, learning and using it for some personal purposes and even doing some local advocacy, but I haven’t found a way to do that in particular, whereas BuildKit does have it. In fact, I think it would require some tricks in Nix, given that it resets the timestamps of all files in the Nix store to 0.

      I’m aware of nix-shell, though I don’t have much experience with it yet; still, I think it wouldn’t make much sense to try to use that as part of a CI pipeline (for .o reuse), as it would kind of defeat one of the main advantages of Nix (hermeticity of builds)? I’d be really interested in finding a way to get that reuse, as it would make Nix even more useful to me, speeding up some operations.

      edit: Hm, I’m starting to think it could be doable for hash-based build systems (e.g. Go) with some build hook (for saving the intermediate build artifacts), but it might require a virtual/FUSE filesystem for fetching the intermediate build artifacts when queried by go build.

      1. 1

        It’s possible to use ccache (and probably sccache) with Nix, but I’m not sure how much that would help with Nim and it wouldn’t work at all with Go.

        1. 2

          Can you show me a nix expression doing that? Does it stay self-sufficient enough to be included in nixpkgs and transparently used to build parts of nixpkgs, or does such use of ccache in a nix expression require an outside service (i.e. some persistence outside the nixpkgs “build sandbox”)? I’m really interested in understanding the mechanism behind what you’re suggesting!

          1. 1

            > Can you show me a nix expression doing that?

            To be honest I’m quite new to Nix and haven’t gotten it working myself yet :)

            I think the easiest way to use ccache with Nix is to replace a package’s stdenv; the Nix ccache package comes with an easy way of doing that for packages in your own overlay. One problem is that changing the stdenv changes all of your build hashes.
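            Based on my reading of the nixpkgs ccache docs, an overlay doing this might look roughly like the following (untested; the cache path and myPackage are placeholders):

            ```nix
            self: super: {
              # Point ccache at a directory that survives between builds; with
              # sandboxing on, this path also needs to be listed in the
              # extra-sandbox-paths option in nix.conf.
              ccacheWrapper = super.ccacheWrapper.override {
                extraConfig = ''
                  export CCACHE_DIR="/var/cache/ccache"
                '';
              };

              # Rebuild a package with the ccache-enabled stdenv (placeholder name)
              myPackage = super.myPackage.override { stdenv = super.ccacheStdenv; };
            }
            ```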

            You can also turn off sandboxing and set the ccache environment variables.

            > or does such use of ccache in a nix expression require an outside service (i.e. some persistence outside the nixpkgs “build sandbox”)?

            Yeah, ccache requires a directory that you keep intact between runs, and sccache uses a remote daemon.

        2. 1

          I’m currently fighting with Docker Go builds myself. Could you explain how BuildKit helps with caching build artifacts (or provide a link)? With a traditional multistage Dockerfile, I can’t seem to find a reasonable way to share Go’s build cache across changes to the source code.

          1. 2

            You can do something like:

            RUN --mount=type=cache,id=go,target=/root/.cache/go-build go build
            

            or something like that. See https://hub.docker.com/r/docker/dockerfile/

            The caveat is that, as I understand it, this won’t survive across multiple VM rebuilds, so depending on your CI setup it may not help there.
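            Putting that together for a Go project, a hypothetical Dockerfile using cache mounts might look like this (the mount targets are Go’s default module and build cache locations in the golang image; the rest is a made-up layout):

            ```dockerfile
            # syntax=docker/dockerfile:1.2
            FROM golang:1.16 AS build
            WORKDIR /src
            COPY . .
            # Persist both the module download cache and the compile cache
            # across builds, even when the source layer changes
            RUN --mount=type=cache,target=/go/pkg/mod \
                --mount=type=cache,target=/root/.cache/go-build \
                go build -o /out/app .

            FROM scratch
            COPY --from=build /out/app /app
            ENTRYPOINT ["/app"]
            ```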

      2. 4

        Has anyone used Bazel’s rules_docker for building OCI images and deploying? I’m interested in it, but not sure whether it’s a popular option and what the support looks like.

        1. 3

          Currently using this where I work. We use the Docker format rather than OCI, but I suspect the experience would be the same: we haven’t been bitten by too many things, and I find the performance pretty good.

          Two things I would call out:

          1. if you are pushing multiple images, that requires multiple bazel run invocations unless you build a container_bundle+push-all [0]. This can have some overhead if you do bad things to your analysis cache.
          2. (I haven’t investigated this too much yet, but) there appears to be an issue with detecting whether an image in a remote repo (AWS ECR) has actually changed, so it chooses to always repush. I suspect it has something to do with stamping [1].

          0: https://github.com/bazelbuild/rules_docker/blob/master/contrib/push-all.bzl

          1: https://github.com/bazelbuild/rules_docker#stamping
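          For reference, the container_bundle + push-all setup from point 1 might be sketched like this (untested; the load paths are from memory and the target labels and registry paths are made up):

          ```python
          # BUILD sketch: bundle several images and push them with one `bazel run`
          load("@io_bazel_rules_docker//container:container.bzl", "container_bundle")
          load("@io_bazel_rules_docker//contrib:push-all.bzl", "docker_push")

          container_bundle(
              name = "release_bundle",
              images = {
                  # registry/repo:tag -> image target (placeholders)
                  "123456789.dkr.ecr.us-east-1.amazonaws.com/app-a:latest": "//services/app_a:image",
                  "123456789.dkr.ecr.us-east-1.amazonaws.com/app-b:latest": "//services/app_b:image",
              },
          )

          docker_push(
              name = "push_all",
              bundle = ":release_bundle",
          )
          ```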

        2. 1

          I literally switched Notion’s docker build to DOCKER_BUILDKIT=1 last night. I didn’t know about this magic # syntax=docker/dockerfile:1.2 comment; I’ll be adding that to my next PR. I’m very happy with the build parallelism: it cuts our cold build time in half.