Hmm. I can’t help but wonder why virtualenv + Docker is necessary in 95% of cases. Why not just install the requirements globally into the container’s Python install, given you’re in a container and likely running only one app?
System Python packages might occasionally conflict with packages in your virtualenv (see https://hynek.me/articles/virtualenv-lives/).
For multi-stage Docker builds, where you have the compiler etc. in the first stage and then copy the compiled code (Python C extensions etc.) into a second-stage image that doesn’t have gcc etc., which gives you a smaller image. In this case, installing directly with pip means some files end up in /usr/bin and others in site-packages, so it’s hard to copy everything over. A virtualenv solves that, since everything lives in one directory.
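A minimal sketch of that multi-stage pattern might look like this (the base image tag, requirements.txt, and the `myapp` module name are all assumptions for illustration):

```dockerfile
# Stage 1: build stage with compilers available
FROM python:3.11-slim AS build
RUN apt-get update && apt-get install -y --no-install-recommends gcc libc6-dev
RUN python -m venv /venv
# requirements.txt is assumed to exist in the build context
COPY requirements.txt .
RUN /venv/bin/pip install -r requirements.txt

# Stage 2: runtime image without gcc; the whole venv copies over in one step
FROM python:3.11-slim
COPY --from=build /venv /venv
CMD ["/venv/bin/python", "-m", "myapp"]
```

Because every installed file (scripts, C extensions, site-packages) is under /venv, one `COPY --from=build` line carries the complete environment into the slim runtime image.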
This would be great context to add to the top of the post!
Yeah. People have asked this a lot, so I’m going to write another article about that specifically and will then link to it from this article.
… and why not use the already great Python images: https://docs.docker.com/samples/library/python/ ?
Or on top of that, why bother activating at all? You can always just give the full path to your virtualenv python binary and it’ll know where everything else is.
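This works because the interpreter’s own location determines its `sys.prefix`, so pointing at the venv’s binary is enough. A quick sketch (the /opt/venv path and app.py are placeholders):

```shell
# Create a virtualenv and use it by full path -- no activation needed.
python3 -m venv /opt/venv

# The venv's pip and python know where their site-packages live:
/opt/venv/bin/pip install --upgrade pip
/opt/venv/bin/python app.py
```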
As I discuss in the article, this is definitely an option. However:
The proposed solution suffers from neither problem.
The best part about this post is explaining what virtualenvs are. When I was a Python programmer I was often astounded at how few programmers bothered to understand what virtualenvs are and how to use them beyond activate.
I don’t blame them. Virtualenvs are annoying to work with. It feels like more software tacked on that you don’t want to care about but have to anyway.
Contrast with something like Cargo for ergonomics. They aren’t the same thing but I always end up wanting something like that for Python.
I switched to pipenv a few months ago, and it has greatly improved my Python development workflow.
Anyway, I don’t see the point of using virtualenv in Docker; I just install everything globally and it works fine.
As a Python programmer, I love virtualenvs, but I hate the activation dance you need to use them because of weird quirks like this.
Eventually I found vex, a virtualenv manager that supports Bernstein chaining, which means that when you run it, it sets up the environment correctly and then launches a new shell (or other command you specify) with everything configured. Anything that happens in that shell automatically inherits the correct configuration, and when the shell exits the configuration is automatically cleaned away. You can even use vex again inside the first virtualenv to enter a second virtualenv and it all Just Works.
I wish more dependency and toolchain managers worked like this.
The autoenv plugin for zsh is very useful for Python, as well as for other things.
I use it to configure these things on a per-directory, per-project basis.
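autoenv sources a `.env` file whenever you cd into the directory containing it, so per-project activation can be as small as this (assuming the project keeps its virtualenv in a `venv/` subdirectory):

```shell
# .env at the project root; autoenv runs this on cd into the directory.
# Activate the project's virtualenv, assumed to live at ./venv:
source ./venv/bin/activate
```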