1. 60
  1.  

  2. 19

    @zwischenzugs A script on your website (specifically, the fifth one in the <body>) is producing spammy and misleading pop-ups like the one at this link: http://www.creep.world/static/lps/u6Fs3j2D/ It also looks like your site is letting ads load arbitrary iframes.

    1. 3

      The site is owned and run by WordPress. I’ve already reported this to them; will try again.

    2. 10

      With the built-in container support in systemd you don’t even need new tools:

      https://blog.selectel.com/systemd-containers-introduction-systemd-nspawn/

      …and with good security if you build your own containers with debootstrap instead of pulling stuff made by random strangers on docker hub.
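
      A minimal sketch of that workflow (the paths and the Debian suite are assumptions; both debootstrap and systemd-nspawn require root):

      ```shell
      # Build a minimal Debian root filesystem straight from the official
      # mirrors, instead of pulling an image from Docker Hub (needs root).
      debootstrap --variant=minbase stable \
          /var/lib/machines/debian-test http://deb.debian.org/debian

      # Open a shell inside the container...
      systemd-nspawn -D /var/lib/machines/debian-test

      # ...or boot it as a full system under systemd supervision.
      systemd-nspawn -bD /var/lib/machines/debian-test
      ```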

      1. 8

        The conflict between the Docker and systemd developers is very interesting to me. Since all the Linux machines I administer already have systemd, I tend to side with the Red Hat folks. If I had never really used systemd in earnest before, maybe it wouldn’t be such a big deal.

        1. 5

          …and with good security if you build your own containers with debootstrap instead of pulling stuff made by random strangers on docker hub.

          I was glad to see this comment.

          I have fun playing with Docker at home, but I honestly don’t understand how anyone could use Docker Hub images in production and simultaneously claim to take security even quasi-seriously. It’s like using random npm modules on your cryptocurrency website, but with even more opacity. Then I see people arguing over whether or not the container runs as root, with no discussion of far more important security issues like using Watchtower to automatically pull new images.

          I’m no security expert but the entire conversation around Docker and security seems absolutely insane.

          1. 4

            That’s the road we picked as well, after evaluating Docker for a while. We still use Docker to build and test our containers, but run them using systemd-nspawn.

            To download and extract the containers into folders from the registry, we wrote a little go tool: https://github.com/seantis/roots

            1. 2

              From your link:

              Inside these spaces, we can launch Linux-based operating systems.

              This keeps confusing me. When I first saw containers, I saw them described as lightweight VMs. Then I saw people clarifying that they are really just sandboxed Linux processes. If they are just processes, then why do containers ship with different distros like Alpine or Debian? (I assume it’s to communicate with the process in the sandbox.) Can you just run a container with a standalone executable? Is that desirable?

              EDIT

              Does anyone know of any deep dives into different container systems? Not just Docker, but a survey of various types of containers and how they differ?

              1. 4

                Containers are usually Linux processes with their own filesystem. Sandboxing can be good or very poor.

                Can you just run a container with a standalone executable? Is that desirable?

                Not desirable. An advantage of containers over VMs is in how easily the host can inspect and modify the guest filesystem.

                1. 5

                  Not desirable.

                  Minimally built containers reduce attack surface, bring down image size, serve as proof that your application builds in a sterile environment and act as a list of all runtime dependencies, which is always nice to have.

                  May I ask why it isn’t desirable?

                  1. 1

                    You can attach to a containerized process just fine from the host, if the container init code doesn’t go out of its way to prevent it.

                    gdb away.
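
                    For example (a sketch; “myapp” is a hypothetical process name, and nsenter/gdb typically need root on the host):

                    ```shell
                    # Containerized processes are visible in the host's
                    # PID namespace.
                    PID=$(pgrep -f myapp | head -n 1)

                    # Inspect the container by entering its namespaces...
                    nsenter --target "$PID" --mount --pid --net ps aux

                    # ...or attach gdb from the host, using the host-side PID.
                    gdb -p "$PID"
                    ```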

                  2. 3

                    I’m not sure if it’s as deep as you’d like, but https://www.ianlewis.org/en/tag/container-runtime-series might be part of what you’re looking for.

                    1. 1

                      This looks great! Thank you for posting it.

                    2. 3

                      I saw them described as lightweight VMs.

                      This statement is indeed false.

                      Then I saw people clarifying that they are really just sandboxed Linux processes.

                      This statement is kinda true (my experience is limited to Docker containers). Keep in mind more than one process can run in a container, as containers have their own PID namespace.

                      If they are just processes, then why do containers ship with different distros like Alpine or Debian?

                      Because containers are spun up based on a container image, which is essentially a tarball that gets extracted to the container process’ root filesystem.

                      Said filesystem contains stuff (tools, libraries, defaults) that represents a distribution, with one exception: the kernel itself, which is provided by the host machine (or a VM running on the host machine, à la Docker for Mac).
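
                      You can see this for yourself by flattening an image into a directory (a sketch assuming Docker is installed; alpine is just an example image):

                      ```shell
                      # Export a container's filesystem: it is a plain
                      # tarball of a userland.
                      mkdir rootfs
                      docker create --name tmp alpine
                      docker export tmp | tar -C rootfs -x
                      docker rm tmp

                      # A distribution's directory tree, minus the kernel.
                      ls rootfs
                      ```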

                      Can you just run a container with a standalone executable? Is that desirable?

                      Yes: see my prometheus image’s filesystem; it contains only the prometheus binary and a configuration file.

                      In my experience, minimising a container image’s contents is a good thing, but in some cases you may not want to. Applications written in interpreted languages (e.g. Python) are very hard to reduce to just a few files in the image, too.

                      I’ve had most success writing minimal container images (check out my GitHub profile) with packages that are either written in Go, or that have been around for a very long time and there’s some user group keeping the static building experience sane enough.

                      1. 3

                        I find the easier something is to put into a Docker container, the less point there is. Go packages are the ideal example of this: building a binary requires one call to a toolchain which is easy to install, and the result has no library dependencies.
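
                        That one toolchain call, sketched out (the file and binary names are made up):

                        ```shell
                        # A trivial Go program...
                        printf 'package main\n\nimport "fmt"\n\nfunc main() { fmt.Println("hello") }\n' > hello.go

                        # ...and the single build step: a static binary
                        # with no library dependencies.
                        CGO_ENABLED=0 go build -o hello hello.go
                        ./hello
                        ```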

                      2. 2

                        They’re not just processes: they are isolated process trees.

                        Why Alpine: because the images are much smaller than others.

                        Why Debian: perhaps because reliable containers for a certain application happen to be available based on it?

                        1. 1

                          AFAIK: yes, you can, and yes, it would be desirable. I think dynamically linked libraries are the reason people started using full distributions in containers. For a Python environment you would probably have to collect quite a few different libraries from your OS to copy into the container so that Python can run.

                          If that’s true, then in the Go world you should see containers with only the compiled binary? (I personally install all my Go projects without containers, because it’s so simple to just copy the binary around.)

                          1. 3

                            If you build a pure Go project, this is true. If you use cgo, you’ll have to include the extra libraries you link to.

                            In practice, for a Go project you might want a container with a few other bits: ca-certificates for TLS, /etc/passwd and /etc/group with the root user (for “os/user”), tzdata for timezone support, and /tmp. gcr.io/distroless/static packages this up pretty well.
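
                            A hedged sketch of what that looks like as a multi-stage Dockerfile (the module layout and binary name are made up):

                            ```dockerfile
                            # Build stage: compile a static Go binary.
                            FROM golang:1.21 AS build
                            WORKDIR /src
                            COPY . .
                            RUN CGO_ENABLED=0 go build -o /app .

                            # Final stage: distroless/static already ships
                            # ca-certificates, tzdata, /etc/passwd with
                            # root, and /tmp.
                            FROM gcr.io/distroless/static
                            COPY --from=build /app /app
                            ENTRYPOINT ["/app"]
                            ```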

                            1. 1

                              You can have very minimal containers. Eg. Nix’s buildLayeredImage builds layered Docker images from a package closure. I use it to distribute some NLP software, the container only contains glibc, libstdc++, libtensorflow, and the program binaries.

                        2. 3

                          Because [Podman] doesn’t need a daemon, and uses user namespacing to simulate root in the container, there’s no need to attach to a socket with root privileges, which was a long-standing concern with Docker.

                          Wait, Docker didn’t use user namespacing? I thought that was the whole point of Linux containers.

                          1. 7

                            There are two different things called user namespaces: CLONE_NEWUSER, which creates a namespace that doesn’t share user and group IDs with the parent namespace, and the kernel configuration option CONFIG_USER_NS, which allows unprivileged users to create new namespaces.

                            Docker and the tools from the article both use user namespaces as in CLONE_NEWUSER.

                            Docker by default runs as a privileged user and can create namespaces without CONFIG_USER_NS. I’m not sure whether you can run Docker as an unprivileged user because of its other features, but technically it should be able to create namespaces without root if CONFIG_USER_NS is enabled.

                            The tools described in the article just create a namespace and then exec into the init process of the container. Because they are not daemons and don’t do a lot more than that, they can run unprivileged if CONFIG_USER_NS is enabled.

                            Edit: Another thing worth mentioning, in my opinion: UID and GID maps (which are required if you want more than one UID/GID in the container) can only be written by root, so tools like podman use two setuid binaries from shadow (newuidmap(1) and newgidmap(1)) to write them.
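
                            Both halves are easy to poke at with util-linux’s unshare(1) (a sketch; whether the unprivileged case works depends on CONFIG_USER_NS and distro sysctls):

                            ```shell
                            # New user namespace with the calling uid
                            # mapped to 0: prints 0 if unprivileged user
                            # namespaces are allowed.
                            unshare --user --map-root-user id -u

                            # Without writing a uid map, the uid shows up
                            # as the overflow uid (usually 65534, nobody).
                            unshare --user id -u
                            ```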

                            1. 1

                              It can, but for a long time it was off by default. Not sure if that’s still true.

                            2. 2

                              I thought podman used buildah for building, so I’m surprised there was a performance difference.

                              1. 1

                                Yeah, that’s right - I’m trying to figure out what my problem was now…

                              2. 2

                                The missing piece from these alternatives is the scripting API. docker-py works by talking to the daemon socket. There isn’t a complete replacement for that yet, but I think there’s something in the works which uses Varlink.

                                1. 1

                                  Nice read! What do you use docker for and did the crash happen with your new setup as well?

                                  1. 1

                                    Does anyone know about the kernel requirements for running podman (and its related software)? What’s the oldest ubuntu release that I could use it on?

                                    EDIT: nm it appears that the PPA includes trusty, bionic, xenial.