1. 15

  2. 5

    Based on this writing, it seems that we are yet again separating dev from prod. Use ubuntu/debian base for dev, but build special for production.

    I thought one of the main points of Docker was being able to run the same container in production. Seems that’s still not going to happen with Docker either. Dev just has to run long enough to make the next commit, and needs gobs of debug built-in. Prod has to run forever and be secure.

    Seems the only upside you really get with the Docker workflow is similar tooling between dev and production.

    1. 2

      From my experience, the difference between dev and prod is not the biggest issue, as long as you have the same images for testing/staging and production.

      Some teams do not even use Docker images for development, and that’s not a big issue as long as you have good CI (at least, it’s been a very long time since we’ve run into “it works in testing but not in production”).

      1. 1

        > Use ubuntu/debian base for dev, but build special for production.

        You can use the same images for development/testing, though? You might install a few extra packages into your dev environment (gdb, …) with the same base Dockerfile.
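
        A minimal sketch of what I mean, as a multi-stage Dockerfile (the base image choice and “myapp” are just placeholders): both targets share the exact same base layers, and the dev target only adds debug tooling on top.

        ```dockerfile
        # Shared base layers used by both the dev and prod images
        FROM debian:bookworm-slim AS base
        WORKDIR /app
        COPY . .

        # Dev target: same base, plus debug tooling
        FROM base AS dev
        RUN apt-get update \
            && apt-get install -y --no-install-recommends gdb strace \
            && rm -rf /var/lib/apt/lists/*

        # Prod target: nothing extra on top of the base
        FROM base AS prod
        CMD ["./myapp"]
        ```

        Then `docker build --target dev` for local work and `--target prod` in CI, so everything underneath stays identical.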

        1. 1

          If testing becomes production, I think the goal would be having production and testing IDENTICAL, or as identical as you can make them. Otherwise what’s the point?

        2. 1

          You’re right, you should strive to keep containers immutable. Having two Docker images for the same code defeats the benefit of having a CI pipeline with promotion across environments. The article doesn’t shed much light on what’s best practice when it comes to packaging applications for dev/prod.

          But the author seems to suggest that there are better ways to debug containers than attaching to them. I suspect he’s referring to health checks for readiness & liveness and a proper logging library to record logs.

          Also, it’s generally slower and more tedious developing an application within a Docker container. Usually, it’s much easier to work locally on the code and then let CI package the immutable container. The Docker image is akin to a jar or a deb file. You don’t build those differently for dev or prod.
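
          As a rough sketch of that promotion step (the registry name and tags here are made up), the image that passed testing is re-tagged and pushed for production rather than rebuilt, so both environments run the exact same bits:

          ```sh
          # Hypothetical CI promotion: re-tag the already-tested image instead of
          # rebuilding it for production.
          docker pull registry.example.com/myapp:build-1234
          docker tag registry.example.com/myapp:build-1234 registry.example.com/myapp:prod
          docker push registry.example.com/myapp:prod
          ```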

          1. 1

            I would think monitoring, metrics and logging would be the way to debug production in most cases. In general you just want the starting inputs and the output errors, so you can replicate the issue in dev and fix it there. If you can’t replicate it, then you have to break out dtrace and friends and get serious, which is super annoying.

            Well, you might build your jar or deb file differently; stripping out debugging symbols in production builds is pretty common, actually.

            I agree developing INSIDE a docker container is way annoying. I think the dev answer for Docker is to run all the extra crap your code depends on in development. I.e. my code needs Redis, a PG instance, etc. to work right, so I’d run Redis and PG in Docker for dev, but still do the main code locally, if possible. Harder to do if you are writing *nix apps on Windows for instance, but :)
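
            Something like this rough sketch (image tags and the password are just examples), where only the dependencies live in containers and the app itself runs locally against them:

            ```sh
            # Run only the dependencies in containers; the app code runs on the host.
            docker run -d --name dev-redis -p 6379:6379 redis:7
            docker run -d --name dev-pg -e POSTGRES_PASSWORD=dev -p 5432:5432 postgres:16
            ```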

        3. 2

          I’ve done some similar things to speed up my builds and deploys, but for codebases that rely on other system runtime dependencies (libpq, BLAS, etc) I haven’t found a great solution yet outside of using Alpine.

          Does anyone else have similar issues?