1. 35

  2. 5

    Maybe I am missing something, but how is this different from having one Dockerfile with a multi-stage build? What makes it worth introducing a new tool to the build chain :) ?

    1. 5

      Or similarly, what does this give over docker-compose?

      I actually stopped containerising my dev environment because having to rebuild and restart a container every time I modified the source code was time-consuming and a pain, and left useless Docker images on disk which would have to be cleaned up.

      1. 4

        I actually stopped containerising my dev environment because having to rebuild and restart a container every time I modified the source code was time-consuming and a pain, and left useless Docker images on disk which would have to be cleaned up.

        I wonder what the motivation is for people to containerize their development environments. Is it to be able to specify a set of dependencies and have the same environment between machines? For getting project-specific system dependencies, I put a default.nix or shell.nix in the project’s directory (my system and home environments have practically no development tools). E.g., here is a project where I want the latest stable Rust, libtensorflow, and the Tensorflow Python module:

        https://git.sr.ht/~danieldk/sticker/tree/2fec307290/default.nix

        If I really want the dependencies to be completely frozen, I just import a specific nixpkgs commit.

        I use shell.nix files with direnv so that I automatically switch into this environment when I cd into the project directory, plus an emacs package that switches the environment when I open a file in such a directory.

        There is nothing to rebuild or special to run when I want to recompile a project. I just run a regular cargo build. Also, the dependencies are in the Nix store, so most of the dependencies are shared between projects (it’s space efficient).
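
        As a concrete sketch of the kind of shell.nix described above (package names are from nixpkgs; the actual file is linked above and will differ):

        ```nix
        # shell.nix – a minimal sketch; for full reproducibility, replace
        # <nixpkgs> with a fetchTarball of a pinned nixpkgs commit, as
        # described above
        let
          pkgs = import <nixpkgs> {};
        in
        pkgs.mkShell {
          buildInputs = [
            pkgs.rustc
            pkgs.cargo
            pkgs.libtensorflow
          ];
        }
        ```

        With direnv, an .envrc containing just use nix then loads this environment automatically on cd.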

        1. 1

          I wonder what the motivation is for people to containerize their development environments. Is it to be able to specify a set of dependencies and have the same environment between machines?

          Essentially, yes, and to make sure anyone working on the project can spin it up easily and work the same way. Also, if your service is containerised in production, it can help to test that it works that way during development, although CI should handle that.

          1. 1

            For someone who has never used Nix, what should I do to get going with a setup like that? Can it run on macOS? I use Homebrew; can it coexist with Nix? Can I use Homebrew for systemwide things and Nix for development environments? I know of direnv; what’s the emacs package?

            1. 1

              I also use Nix on macOS (besides NixOS). On the Mac I have it installed side by side with Homebrew, which generally works OK. On macOS people generally disable sandboxing, since not all derivations build with sandboxing enabled. This has the risk of introducing stray dependencies on libraries installed with Homebrew. Another problem is that derivations break more often on macOS, so you generally have more to fix yourself.

              That said, people do use Nix on macOS. A good starting point is John Wiegley’s presentation on using Nix on macOS:

              https://youtu.be/G9yiJ7d5LeI

              On NixOS, or even regular Linux with Nix, things are much simpler: install Nix, configure direnv, create a shell.nix, and that is pretty much it.
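
              A sketch of those steps on Linux (the install command is the one documented on nixos.org; the direnv hook is shown for bash):

              ```shell
              # Install Nix (single-user install; see nixos.org for multi-user)
              sh <(curl -L https://nixos.org/nix/install)

              # Hook direnv into the shell (zsh/fish are similar)
              echo 'eval "$(direnv hook bash)"' >> ~/.bashrc

              # In the project directory: load shell.nix automatically via direnv
              echo 'use nix' > .envrc
              direnv allow
              ```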

              1. 1

                Thanks, I’ll give it a try!

                1. 1

                  I think I’ve got things working pretty well in the terminal. What is the emacs package you are using to switch in and out of nix environments in emacs?

                  UPDATE: Never mind, I found this which seems to work.

                  1. 2

                    Sorry for the late reaction, I was traveling. Yes, that’s the emacs package that I am using:

                    https://github.com/danieldk/nix-home/blob/72c4b0f606aeeae5db3c8606ac0f856d9d01c776/cfg/emacs.nix#L13

            2. 2

              because having to rebuild and restart a container every time I modified the source code was time-consuming and a pain

              How about mounting your code via a volume in the container, so docker takes care of the sync? That’s what I do
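
              A sketch of that setup in docker-compose terms (the image, paths, and command here are placeholders):

              ```yaml
              # docker-compose.yml – bind-mount the source so edits on the
              # host appear in the container without a rebuild
              services:
                web:
                  image: python:3.12-slim   # assumes the app's deps are baked in
                  working_dir: /app
                  volumes:
                    - ./src:/app            # host ./src is visible as /app inside
                  command: flask run --host=0.0.0.0 --debug
                  ports:
                    - "5000:5000"
              ```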

              1. 2

                That works “OK” for cases where you have something that hot-reloads on file change (e.g. a Flask app in dev mode), but for compiled Go or Rust programs it doesn’t.

                1. 1

                  It does. During development I use a standard golang image with the code mounted at the proper GOPATH and gin as the command.

                  1. 1

                    Gin is relatively unique in supporting hot reload. This also implies it only works for HTTP services.

                    Personally I don’t like Gin for HTTP services (I use Chi or Gorilla), but I also work on other things. For almost everything else, it’s far easier not to use Docker.

                    1. 1

                      I only have this set up so the container can live amongst half a dozen other containers in this particular docker-compose setup. Not all of my team need to tinker with the Go part, and I want docker-compose up to be the only command required to run the project.

                      I agree that using docker when writing go is not terribly useful, but it can be done.

                2. 1

                  I used to do this, but it’s a huge pain in the neck for repl-based flows; when your editor says “reload this file” the containerized repl server is like “that’s not a file” so you have to translate the paths from host versions to containerized versions. I found for what I was doing there was a lot of friction for zero benefit, so I stopped.

                  I can see how it’d be useful if the stack you used was really finicky to install, but for Clojure development it makes no sense. (We still use Docker to automate starting background services like queues and databases.)

                3. 2

                  Yea, and you can’t use tooling like the debuggers in most IDEs without a lot of extra steps or special plugins (you should just be able to attach to the process, but it’s running in a different cgroup with different privileges).

                  Containers make sense if you’re working with a set of services. I don’t even fuck with docker-compose. It’s easier just to have each docker create/start/run command in a .sh file, start the services you need, stop the one you’re currently working on, point the others to it, and run that one locally/natively. (If you’re using Docker for Mac or Windows, you need to VPN into the Docker subnet, which is so dumb. Just develop on Linux. More companies need to allow devs on Linux, or even make Linux a first-class/supported environment.)
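
                  A sketch of the one-script-per-service idea (image name, port, and password are placeholders):

                  ```shell
                  #!/bin/sh
                  # run-postgres.sh – start only the database dependency; the
                  # service being worked on runs natively on the host and
                  # connects to localhost:5432
                  docker run --rm -d \
                    --name dev-postgres \
                    -e POSTGRES_PASSWORD=dev \
                    -p 5432:5432 \
                    postgres:16
                  # stop it later with: docker stop dev-postgres
                  ```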

                  1. 1

                    It’s interesting to see this attitude toward development in a dockerized environment. I wonder whether this is solvable via better tooling, or whether the tech is simply overly complex for such a use case.

                    I guess it’s similar to the problems Vagrant once had. It looks like Guix handled this like a champion: solve isolated development once and for all, at the OS level, which means you can add things to the environment that are out of scope for all the other tools.

                  2. 2

                    That was my thought as well: this seems to be a redo of the docker-compose concept.

                    FWIW I have used / recommend docker-compose for (1) integration testing and (2) providing a spec of a database my local app can connect to.

                    Use case 1 sets up a database, sets up the app, then runs tests against the app, which then, at the end, can all be torn down in a hermetic way.

                    Use case 2 is simply the first part of use case 1, but exposing the database port for my app on my laptop to connect to.

                    Arguably, these days use case 1 can be done via minikube, but that just adds a few more turtles to the stack.
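
                    Both use cases can be sketched in one compose file (service names, images, and the test command are made up):

                    ```yaml
                    # docker-compose.yml
                    services:
                      db:
                        image: postgres:16
                        environment:
                          POSTGRES_PASSWORD: test
                        ports:
                          - "5432:5432"   # use case 2: exposed so the app on the laptop can connect

                      app:
                        build: .
                        depends_on:
                          - db

                      tests:
                        build: .
                        command: pytest   # use case 1: integration tests against the app
                        depends_on:
                          - app
                    ```

                    Use case 1 is then docker-compose up --abort-on-container-exit followed by docker-compose down -v for the hermetic teardown; use case 2 is just docker-compose up db.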

                    1. 1

                      I’m curious. Putting code-reloading aside, did you notice any benefits in your development workflow compared to what you had prior to containerising everything?

                      1. 1

                        No, none. The only reason I’d started was because of a project I’d inherited. I tried to go along with it but it frustrated me at every turn. The only part of the stack I kept in a container was a Postgres instance.

                        As @djusumdog says, it causes problems with debuggers, etc. Containers are alright for production or staging, but they impede the development workflow for compiled programs.

                        1. 1

                          I doubt the benefits are visible on single projects; it’s probably much worse than having everything locally.

                          The benefits really come when working on multiple projects that reuse pieces of one another, which always complicates things locally.

                    2. 2

                      https://github.com/lando/lando

                      I use this a lot for my projects. In general it’s really useful for getting our diverse team onboarded quickly.

                      Not without its own pain points, though.

                      1. 2

                        https://sail.dev is another take on the same problem.

                        1. 2

                          I think, from the docs, the idea here is that you can mix and match ‘local’ steps with in-container steps? Or something?

                          I also don’t get the draw over stock Docker stuff.

                          I suspect this may be a documentation / marketing problem. Writing such a system isn’t exactly trivial.

                          1. 1

                            I typically use Invoke-Build in a similar fashion. At the moment I don’t see any benefit of using toast instead (apart from Invoke-Build requiring PowerShell; toast also seems not to be cross-platform, and is still very fresh).

                            This also looks like a subset of the features of gitlab-runner (which can be executed locally against .gitlab-ci.yml).
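
                            For reference, running a single job from the CI config locally looks roughly like this (the job name is hypothetical):

                            ```shell
                            # Execute the 'build' job from .gitlab-ci.yml on this
                            # machine with the shell executor ('docker' runs it in
                            # a container instead)
                            gitlab-runner exec shell build
                            ```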

                            Anyway, I appreciate the alternative to the mentioned tools, and I hope development will continue.

                            Thx OP.