1. 10

I wanted to share this short post I wrote because the tip is extremely useful.

  1. 12

    Nice tip - I have something similar in my bashrc:

    containme ()
    {
        # Run a throwaway interactive container with the current directory
        # mounted at /work; fall back to bash if no command is given.
        podman run -it --rm -w /work -p 3000:3000 -v "$PWD":/work "$1" "${2:-bash}"
    }
    

    In this case (I need node) I would just run containme node and then I’d have a shell with node installed. I use it all the time!

    1. 2

      Very nice! Thanks

      1. 1

        I’d add

        echo "serving on http://localhost:3000 ..." 
        

        for a nice Cmd-clickable URL in iTerm.

      2. 7

        Hey, that’s actually not a bad tip (I’m not 100% sure it’s worthy of its own post, but it’s definitely not worth flagging). My main concern is:

        None of the viruses in npm are able to run on my host when I do things this way.

        This assumes a lot about the security of Docker. Obviously, it’s better than running npm as the same user that has access to your SSH/Azure/AWS/GCP/PGP keys / X11 socket, but Docker security isn’t 100%, and I wouldn’t rely on it fully. At the end of the day, you’re still running untrusted code; containers aren’t a panacea, and the simplest misconfiguration can render privilege escalation trivial.
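
        For what it’s worth, if you do go this route you can at least shrink the attack surface with a few flags. A rough sketch, not a guarantee - the image tag is just an example, and the node user is the one the official images happen to ship:

        podman run -it --rm \
            --user node \
            --cap-drop=ALL \
            --security-opt no-new-privileges \
            -v "$PWD":/work -w /work \
            node:20-bookworm-slim bash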

        1. 3

          the simplest misconfiguration can render privilege escalation trivial.

          I’m a bit curious which configuration that’d be?

          1. 2

            Not OP, but “--privileged” would do it, or many of the “--cap-add” options.
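
            As a rough illustration of why --privileged is the classic foot-gun (the device name is just an example and will vary):

            # --privileged hands the container all capabilities plus the host's
            # device nodes, so from inside it you can simply mount the host disk:
            docker run --rm -it --privileged debian bash
            #   mount /dev/sda1 /mnt && ls /mnt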

            1. 1

              Not 100% sure here, but lots of containers are configured to run as root, and file permissions are just bits on your disk, right? So a root container basically lets you take control of all mounted volumes and do whatever you want with them.

              This is of course only relevant to the mounted volume in that case, though.

              I think there’s also a lot of advice in dockerland that sits at the unfortunate intersection of being easier than all the alternatives yet very insecure (for example, most ways to connect to a private GitHub repo from within a container involve exposing your private keys in some form).
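
              One partial mitigation for the root-owns-your-mounted-files problem is to run the container as your own user. A sketch, assuming the official node image (HOME is pointed at the mount so npm still has a writable cache directory even if your UID doesn’t exist inside the image):

              docker run -it --rm \
                  --user "$(id -u):$(id -g)" \
                  -e HOME=/work \
                  -v "$PWD":/work -w /work \
                  node:20-bookworm-slim npm install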

            2. 1

              This assumes a lot about the security of Docker

              Which has IMO a good track record. Are there actually documented large scale exploits of privilege escalation from a container in this context? Or at all?

              Unless you’re doing stupid stuff, I don’t think there’s a serious threat in using Docker for this use case.

            3. 3

              Congrats on writing a Dockerfile.

              A few suggestions:

              • Specify which Debian release you want. Latest will change out from under you.
              • apt-get update && apt-get install -y nodejs npm
                • Doing this in three separate RUN steps is inefficient and can cause problems (see the sketch below).
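
              A minimal sketch of how that could look (the Debian release here is just an example - pin whichever one you actually want):

              FROM debian:bookworm-slim
              # One RUN: update, install, and clean up in a single layer.
              RUN apt-get update \
                  && apt-get install -y --no-install-recommends nodejs npm \
                  && rm -rf /var/lib/apt/lists/*
              WORKDIR /work
              CMD ["bash"]
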
              1. 6

                Even better, don’t write a Dockerfile at all. Use one of the existing official Node images, which let you specify both which Debian release and which Node version you want.

                1. 1

                  I tried this, but I didn’t get a shell; it would be nice to get it working.

                  1. 4

                    Those images have node set as the CMD, which means they open the node REPL instead of a shell. You can either run docker run -it node:16-buster-slim /bin/bash to execute bash (or another shell of your choice), or make a Dockerfile that uses the node image as your FROM and overrides the CMD (or adds an ENTRYPOINT) so you don’t have to pass the shell on the command line.
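
                    The Dockerfile variant could look something like this (using the tag mentioned above just as an example):

                    FROM node:16-buster-slim
                    WORKDIR /work
                    # Override the default CMD (node) so "docker run -it <image>"
                    # drops straight into a shell instead of the node REPL.
                    CMD ["bash"]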

                    1. 3

                      Incidentally, to follow up now that I’ve remembered to write this: one reason it’s common for images to use CMD in this way is that it makes it easier to use docker run as a sort-of drop-in replacement for uncontained CLI tools.

                      With an appropriate WORKDIR set, you can do stuff like

                      alias docker-node='docker run --rm -v "$PWD":/pwd -it my-node-container node'

                      alias docker-npm='docker run --rm -v "$PWD":/pwd -it my-node-container npm'

                      and you’d be able to use them just like they were node/npm commands restricted to the current directory, more or less. It wouldn’t preserve stuff like cache and config between runs, though.
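
                      If the lack of a persistent cache bothers you, one option is a named volume for npm’s cache directory (this assumes the image runs as root, so the cache lives under /root/.npm - adjust the path otherwise):

                      alias docker-npm='docker run --rm -v "$PWD":/pwd -v npm-cache:/root/.npm -it my-node-container npm'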

                  2. 1

                    I have to agree with this. I tend toward “OS” docker images (Debian and Ubuntu usually) for most things because installing dependencies via apt is just too damn convenient. But for something like a node app, all of your (important) deps are coming from npm anyway, so you might as well use the images that were made for this exact use case.

                  3. 3

                    what problems?

                    1. 3

                      It creates 3 layers instead of one. You can only have 127 layers in a given Docker image, so it’s good to combine multiple RUN statements into one where practical.

                      1. 3

                        Also, the 3 layers take up unnecessary space. You can follow the Docker best practices and remove the apt cache files and package lists in the same RUN step - that ensures the image doesn’t have to carry them at all.

                      2. 2

                        Check out the apt-get section in the best practice guide: https://docs.docker.com/develop/develop-images/dockerfile_best-practices/

                    2. 2

                      Worth noting:

                      1. Once you run this you’ll have a node_modules folder on the host system; if that system is not Linux you might run into issues using those modules on the host (i.e., if there are OS-specific binary artifacts).
                      2. Running npm install through a bind mount (“-v $(pwd):/pwd”) on macOS takes a significant performance hit, since every file has to be shuttled between the VM and the host. It’s usually better to keep node_modules and any files used for the build inside the container, then just mount an output directory and copy any build outputs into it at the end of the build (see the sketch below).
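
                      One common way to keep node_modules inside the container is to overlay a volume on top of it, so the bind mount never sees it. A sketch - the image tag and volume name are just examples:

                      # The second -v masks ./node_modules on the host with a named volume,
                      # so installs stay on the container side and off the slow bind mount.
                      docker run --rm -it \
                          -v "$PWD":/pwd \
                          -v my-app-node-modules:/pwd/node_modules \
                          -w /pwd \
                          node:20-bookworm-slim npm install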