  2. 14

    I find this… weird. Docker is a packaging mechanism, not an orchestration system. You need an actual orchestration system on top of it for things to work reliably. The author, coming from a JVM world, knows that you can’t just scp product-catalog.jar production3:/apps/pc/ and then expect stability from java -jar /apps/pc/product-catalog.jar. Application servers that supervise and orchestrate such systems have existed for decades.

    Or did I misunderstand the article? Is he arguing that Docker is a bad packaging mechanism? I thought he was arguing that docker run --restart=blahblah -p 123:123 my-application ... is not a reliable way to run applications in production. If that is what he is saying, I agree with him.

    But I thought it’s fairly obvious that docker run isn’t, and hasn’t ever been, the only thing you need to do to run applications in production. It’s nowhere near stable enough to be practical or reliable. Maybe Docker (the company) likes to pretend it is, but the way I see it, you always have to bolt things like k8s/marathon/nomad on top of it.

    1. 21

      Gosh, I couldn’t make it very far into this article without skimming. It goes on and on asking the same ‘why’, then answers it in the opposite direction from the comments it quotes.

      Docker is easy, standard isolation. If it falls out of favor, something will replace it. We’re not going in the opposite direction.

      The article doesn’t explain to me what other ways I have of running 9 instances of an app without making a big mess of listening ports and configuration.

      Or running many different PHP apps without creating a big mess of PHP installs and PHP-FPM configs. (We still deal with hosting setups that share the same install for all apps, then want to upgrade PHP.)

      Or how to make your production setup easy to replicate (roughly) for developers who actually work on the codebase. (Perhaps on macOS or Windows, while you deploy on Linux.)
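
      For the first of those points (the 9 instances), the Docker answer amounts to mapping distinct host ports onto the same container port; a minimal sketch (image name made up):

      docker run -d --name app1 -p 8081:8080 acme/app
      docker run -d --name app2 -p 8082:8080 acme/app
      docker run -d --name app3 -p 8083:8080 acme/app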

      We’re not even doing the orchestration dance yet; these are individual servers that run Docker, with a bunch of shell scripts to provision the machine and manage containers.

      But even if we only use 1% of the functionality in Docker, I don’t know how to do that stuff without it. Never mind that I’d probably have to create a Vagrant box or something to get anyone to use it in dev. (I’ve come to dislike Vagrant, sorry to say.)

      Besides work, I privately manage a little cloud server and my own Raspberry Pi, and sure, they don’t run Docker, but they don’t have these requirements. It’s fine not to use Docker in some instances. And even then, Docker can be useful as a build environment, to document or eliminate any awkward dependencies on the environment. It makes your project that much easier to pick up when you return to it months later.

      Finally, I’m sorry to say that my experiences with Ansible, Chef and Puppet have only ever been bad. It seems to me like the most fragile aspect of these tools is all the checking of what’s what in the current environment before acting on it. I’m super interested in trying NixOS sometime, because from what I gather, the model is somewhat similar to what Docker does: simply layering stuff, like we’ve always done in software.

      1. 1

        For the PHP part it’s not that complex. Install the required versions (Debian and Ubuntu both have the 5.6 through 7.2 “major” releases available side by side, thanks to Ondřej Surý’s repo). Then just set up a pool per app (which you should do anyway) and point to the app’s specific Unix domain socket for PHP-FPM in the vhost’s proxy_fcgi config line.
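
        A rough sketch of the two pieces, with an invented app name (assuming Apache with mod_proxy_fcgi, per the proxy_fcgi line above):

        ; /etc/php/7.2/fpm/pool.d/myapp.conf - one pool per app
        [myapp]
        user = myapp
        group = myapp
        listen = /run/php/myapp.sock
        listen.owner = www-data
        listen.group = www-data
        pm = ondemand
        pm.max_children = 10

        # In the Apache vhost, route PHP requests to that app's socket:
        <FilesMatch "\.php$">
            SetHandler "proxy:unix:/run/php/myapp.sock|fcgi://localhost"
        </FilesMatch>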

        I’ve used this same setup to bring an app from PHP 5.4 (using mod_php) up through the versions as it was tested and fixed, too.

        Is there some config/system setup required? You betcha. Ops/sysadmin work is part of running a site that requires more than shared hosting.

        What are you gonna do with docker, have each developer just randomly writing whatever the fuck seems like a good idea and pushing their monolithic images to prod with no ops supporting it?

        1. 12

          What are you gonna do with docker, have each developer just randomly writing whatever the fuck seems like a good idea and pushing their monolithic images to prod with no ops supporting it?

          Yes. The whole point of “DevOps”/Docker is to deploy software certified by the “Works on My Machine” certification program. This eliminates coordination time with a separate Ops team.

          1. 2

            Is this sarcasm, or are you actually in favour of the definition “DevOps = Developers [trying to] do Ops”?

            1. 7

              Descriptively, that’s what DevOps is. I am prescriptively against such DevOps, but describing what’s currently happening with docker is unrelated to whether I am in favor of it.

              1. 3

                I don’t disagree that it’s a definition used by a lot of places (whether they call it devops or not). But I believe a lot of people who wax poetic about “DevOps” don’t share this same view - they view it as Operations using ‘development’ practices: i.e. writing scripts/declarative state files/etc. to have reproducible infrastructure, rather than a “bible” of manual steps to go through to set up an environment.

                I’m in favour of the approach those people like, but I’m against the term simply because it’s misleading - like “the cloud” or “serverless”.

          2. 2

            I don’t understand your last point; that’s exactly what developers do all day.

            In Docker, the PHP version the app depends on is set in code. It doesn’t even take any configuration changes when the app switches to a new PHP version.
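
            For example, a Dockerfile as small as this pins it (a sketch; php:7.2-fpm is a real official image tag, the rest is arbitrary):

            FROM php:7.2-fpm
            COPY . /var/www/app

            Bumping PHP is a one-line change to the FROM tag, followed by a rebuild.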

            But if there’s one gripe I have with the Docker way of things, baking everything into an image, it’s security. There are no shared libraries in any way; upgrading even a minor version of a dependency requires baking a new image.

            I kinda wish we had a middle road, somewhere between Debian packages and Docker images.

            1. 3

              the PHP version the app depends on is set in code

              And of course we all know Docker is the only way to define dependencies for software packages.

              1. 4

                Did anyone say it was? Docker is just one of the easiest ways to define the state of the whole running environment in a text file, which you can easily review to see what has been done.

              2. 1

                You can share libraries with Docker by making services share the same Docker image. You can actually replicate Debian’s level of sharing by having a single Docker image.

                1. 2

                  Well, I guess this is just sharing in terms of memory usage? But what I meant regarding security is that I’d like it to be possible to have, for example, a single layer in the image with just OpenSSL, which you can then swap out for a newer version (with, say, a security fix).

                  Right now, an OpenSSL upgrade means rebuilding the app. The current advantage of managing your app ‘traditionally’, without Docker, is that a sysadmin can do this upgrade for you. (Same with PHP patch versions, in the earlier example.)
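
                  To illustrate (a sketch with invented names): even when OpenSSL sits in its own early layer, a patched libssl only reaches the image through a rebuild.

                  FROM debian:stretch
                  # OpenSSL is baked into this layer; a security fix can't be swapped in place.
                  RUN apt-get update && apt-get install -y openssl
                  COPY app /usr/local/bin/app
                  # Picking up the patched package means rebuilding:
                  #   docker build --pull --no-cache -t myapp .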

                  1. 4

                    And this is exactly why I don’t buy into the whole “single-use” container shit show.

                    Want to use LXC/LXD for lightweight “VM’s”? Sure, I’m all for it. So long as ops can manage the infra, it’s all good.

                    Want to have developers having the last say on every detail of how an app actually runs in production? Not so much.

                    What you want is a simpler way to deploy your php app to a server and define that it needs a given version of PHP, an Apache/Nginx config, etc.

                    You could literally do all of that just by packaging your app as a .deb, having it declare dependencies on php-{fpm,moduleX,moduleY,moduleZ}, and including a vhost.conf and pool.conf file. A minimal package (i.e. not Debian-repo quality, but it works for private installs) means you’ll need maybe half a dozen extra files.
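
                    As a sketch (all names invented), the heart of such a package is just a control file declaring the dependencies:

                    Package: acme-foo-app
                    Version: 1.0.0
                    Architecture: all
                    Maintainer: Acme Ops <ops@example.com>
                    Depends: php7.2-fpm, php7.2-mysql, php7.2-curl
                    Description: Acme's PHP app, shipping its own vhost.conf and pool.conf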

                    And then your ops/sysadmin team can upgrade openssl, or php, or apache, or redis or whatever other thing you use.

                    1. 2

                      I actually do think this is a really good idea. But what’s currently there requires a lot more polish for it to be accessible to devs and small teams.

                      Debian packaging is quite a pain (though you could probably skip a lot of standards). RPM is somewhat easier. But in both cases, the packages typically bundle default app configuration and systemd unit files, which is a model that sort of assumes things only have 1 instance.

                      You could then go the LXC route, and have an admin manage each instance in a Debian container. That’s great, but we don’t have the resources to set up and manage all of this, and I expect that is the case for quite a lot of small teams out there.

                      Maybe it’s less complicated than I think it is? If so, Docker marketing got something very right, and it’d help if there was a start-to-finish guide that explains things the other way.

                      Also remember that Docker for Mac/Windows makes stuff really accessible for devs that are not on Linux natively. Not having to actually manage your VM is a blessing, because that’s exactly my gripe with Vagrant. At some point things inside the VM get hairy, because of organic growth.

                      1. 3

                        But in both cases, the packages typically bundle default app configuration and systemd unit files, which is a model that sort of assumes things only have 1 instance.

                        In this context, it is one instance. Either you build your packages with different names for different stages (e.g. acme-corp-foo-app-test, acme-corp-foo-app-staging, acme-corp-foo-app-prod) or use separate environments for test/stage/prod - either via VMs, LXC/LXD, whatever.

                        Nothing is a silver bullet, Docker included. It’s just that Docker has a marketing team with a vested interest in glossing over its deficiencies.

                        If you want to talk about how to use the above concept for an actual project, I’m happy to talk outside the thread.

                        1. 2

                          Also remember that Docker for Mac/Windows makes stuff really accessible for devs that are not on Linux natively. Not having to actually manage your VM is a blessing, because that’s exactly my gripe with Vagrant. At some point things inside the VM get hairy, because of organic growth.

                          This is exactly why at work we started to use Docker (and got rid of Vagrant).

                          1. 1

                            At some point things inside the VM get hairy, because of organic growth.

                            Can you define “hairy”?

                            1. 2

                              The VM becomes a second workstation, because you often SSH in to run some commands (test migrations and the like). So people install things in the VM, and change system configuration in the VM. And then people revive months-old VMs, because it’s easier than vagrant up, which can take a good 20 minutes. There’s no reasoning about the state of Vagrant VMs in practice.

                              1. 3

                                So people install things in the VM, and change system configuration in the VM

                                So your problem isn’t vagrant then, but people. Either the same people are doing the same thing with Docker, or not all things are equal?

                                because it’s easier than vagrant up, which can take a good 20 minutes

                                What. 20 MINUTES? What on earth are you doing that causes it to take 20 minutes to bring up a VM and provision it?

                                There’s no reasoning about the state of Vagrant VMs in practice.

                                You know the version of the box that it’s based on, what provisioning steps are configured to run, and whether they’ve run or not.

                                Based on everything you’ve said, this sounds like blaming the guy who built a concrete wall, when your hammer and nails won’t go into it.

                                1. 1

                                  I suppose the main difference is that we don’t build images for Vagrant, but instead provision the machine from a stock Ubuntu image using Ansible. It takes a good 3 minutes just to get the VirtualBox VM up, more if you have to download the Ubuntu image. From there, it’s mostly adding repos, installing deps, creating configuration. Ansible itself is rather sluggish too.

                                  Compare that to a 15 second run to get a dev environment up in Docker, provided you have the base images available.
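
                                  The whole dev stack is one file, roughly like this (a hypothetical sketch):

                                  # docker-compose.yml
                                  version: "3"
                                  services:
                                    app:
                                      build: .
                                      ports:
                                        - "8080:80"
                                    db:
                                      image: mysql:5.7

                                  and docker-compose up reuses cached image layers, so subsequent runs take seconds rather than minutes.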

                                  A people problem is a real problem. It doesn’t sound like you’ve used Docker for Mac/Windows, but the tool doesn’t give you a shell in the VM. And you don’t normally shell into containers.

                                  1. 1

                                    That’s interesting that it takes you 20 minutes to get to something usable. I never had that experience back when I used VMware and VirtualBox. I can’t remember having it, anyway. I decided to see what getting Ubuntu up on my box takes with the new version, for comparison with your experience. I did this experiment on my backup laptop: a 1.5GHz Celeron with plenty of RAM and an older HD. It’s garbage as far as performance goes. Running Ubuntu 16 or 17 (one of them…), VirtualBox, and Ubuntu 18.04 as the guest in a 1GB VM. That is, the LiveCD of Ubuntu 18.04 that it’s booting from.

                                    1. From power on to first Ubuntu screen: 5.7 seconds.

                                    2. To get to the Try or Install screen: 1 min 47 seconds.

                                    3. Usable desktop: 4 min 26 seconds.

                                    So, it’s up in under 5 minutes via the slowest-loading method (LiveCD) on some of the slowest hardware (a Celeron) you can get. That tells me you could probably get even better startup time than me if you install and provision your stuff into a VirtualBox VM that becomes a base image. You use it as read-only, snapshot it, whatever the feature was. I rarely use VirtualBox these days, so I can’t remember. I know a fully-loaded Ubuntu boots up in about a minute on this same box, with VirtualBox adding 5.7s to get to that bootloader. Your setup should take just 1-2 minutes to boot if you’re doing it right.

                                    1. 0

                                      It takes a good 3 minutes just to get the VirtualBox VM up

                                      What? Seriously? Are your physical machines running on spinning rust, or with only 1 or 2 GB of RAM, or something? That is an inordinate amount of time to boot a VM, even in the POS that is VirtualBox.

                                      but the tool doesn’t give you a shell in the VM.

                                      What, so docker attach and docker exec -it <container> /bin/bash are just figments of my imagination?

                                      you don’t normally shell into containers

                                      You don’t normally just change system settings willy nilly in a pre-configured environment if you don’t know what you’re doing, but apparently you work with some people who don’t do what’s “normal”.

                                      1. 2

                                        Physical machines are whatever workstation the developer uses. Typically a MacBook Pro in our case. And until Vagrant has SSH access to the machine, I don’t count it as ‘up’.

                                        You’re confusing shell access to the VM with shell access to containers. The Docker commands you reference are for container access.

                                        People do regularly make changes to vhost configuration, or installed packages in VMs when testing new features, instead of changing the provisioning configuration. Again, because it takes way longer to iterate on these things with VMs. And because people do these things from a shell inside the VM, spending time there, they start customizing as well.

                                        And people do these things in Docker too, and that’s fine. But we’re way more comfortable throwing away containers than VMs, because of the difference in time. In turn, it’s become much easier to iterate on provisioning config changes.

                                        1. 2

                                          If time was the problem, it sounds like the Docker developers should’ve just made VMs faster in existing stacks. The L4Linux VMs in Dresden’s demo loaded at about one per second on old hardware. Recently, LightVM got it down to 2.3 milliseconds on a Xen variant. Doing stuff like that also gives the fault isolation and security assurances that only come with simple implementations, which Docker-based platforms probably won’t have.

                                          Docker seems like it went backwards on those properties vs just improving speed or usability of virtualization platforms.

                                          1. 1

                                            You’re confusing shell access to the VM with shell access to containers. The Docker commands you reference are for container access.

                                            No. Your complaint is that people change configuration inside the provisioned environment. The provisioned environment with Docker isn’t a VM - that’s only there because it requires a Linux kernel to work. The provisioned environment is the container, which you’ve just said people are still fucking around with.

                                            So your complaint still boils down to “virtualbox is slow”, and I still cannot imagine what you are doing to take twenty fucking minutes to provision a machine.

                                            That’s closer to the time to build a base box from nothing than the time to bring up an instance and provision it.

                                            1. 2

                                              Look, this is getting silly. You can keep belittling every experience I’ve had, as if we’ve made these choices based on a couple of tiny bad aspects in the entire system, but that’s just not the case, and that’s not a productive discussion.

                                              I did acknowledge that in practice Docker bakes a lot more into images, which accounts for a lot of the provisioning slowness in the Vagrant case for us. There’s just a lot more that provisioning has to do compared to Docker.

                                              And while we could’ve gone another route, I doubt we would’ve been as happy, considering where we all are now as an industry. Docker gets a lot of support, and has a healthy ecosystem.

                                              I see plenty of issues with Docker, and I can grumble about it all day. The IPv6 support is terrible, the process management is limited, the Docker for Mac/Windows filesystem integrations leave a lot to be desired, and there’s the security issue I mentioned in this very thread. But it still has given us a lot more positives than negatives, in terms of developer productivity and managing our servers.

                                              1. 1

                                                You can keep belittling every experience I’ve had

                                                Every ‘issue’ you raised boils down to ‘vagrant+virtualbox took too long to bring up/reprovision’. At 20 minutes, that’s not normal operation, it’s a sign of a problem. Instead of fixing that, you just threw the whole lot out.

                                                This is like saying “I can’t work out why apache keeps crashing under load on Debian. Fuck it, I’m moving everything to Windows Server”.

                                                But it still has given us a lot more positives than negatives

                                                The linked article seems to debunk this myth.

                                              2. 2

                                                I have the same experience as @stephank with VirtualBox. Every time I want to restart with a clean environment, I start over with a standard Debian base box and run my Ansible playbooks on it. This is slow because my playbooks have to reinstall everything (I try to keep a cache of the downloaded packages in a volume on the host, shared with the guest). Docker makes this a lot easier and quicker thanks to the layer mechanism. What do you suggest for keeping Vagrant while avoiding the slow installation (building a custom image, I guess)?

                                                1. 2

                                                  Please tell me “the same experience” isn’t 20 minutes for a machine to come up from nothing?

                                                  I’d first be looking to see how old the base box you’re using is. I’m guessing part of the process is an apt-get update && apt-get upgrade - some base boxes are woefully out of date, and are often hard-coded to use e.g. a US based mirror, which will hurt your update times if you’re elsewhere in the world.

                                                  If you have a lot of stuff to install, then yes I’d recommend making your own base-box.

                                                  What base-box are you using, out of interest? Can you share your playbooks?

                                                  1. 2

                                                    Creating a new VM with Vagrant just takes a few seconds, provided that the base box image is already available locally.

                                                    Provisioning (using Ansible in my case) is what takes time (installing all the services and dependencies required by my app). To be clear, in my case, it’s just a few minutes instead of 20 minutes, but it’s slow enough to be inconvenient.

                                                    I refresh the base box regularly, I use mirrors close to me, and I’ve already checked that apt-get update/upgrade terminates quickly.

                                                    My base box is debian/jessie64.

                                                    I install the usual stuff (nginx, Python, Go, Node, MySQL, Redis, certbot, some utils, etc.).

                                                    1. 2

                                                      Reading all your comments, you seem deeply invested in convincing people that VMs solve all the problems people think Docker is solving. Instead of debating endlessly in the comments here, I’d be (truly) interested to read about your workflow as an op and as a dev. I finished my studies using Docker and never had to use VMs that much on my machines, so I’m not an expert and would be really interested in a good article/post/… that I could learn from on how VMs would be better than Docker.

                            2. 1

                              I think the point is to use something like Ansible: you put some Ansible config in a git repo, then you pull the repo, build the Docker image, install apps, apply the config, and run, all via Ansible.
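
                              Roughly like this, as a sketch (docker_image and docker_container are Ansible’s Docker modules, though exact parameters vary by Ansible version; names and paths are invented):

                              - hosts: app_servers
                                tasks:
                                  - name: Build the app image from the pulled repo
                                    docker_image:
                                      name: acme/app
                                      build:
                                        path: /opt/app
                                      source: build
                                  - name: Run the container
                                    docker_container:
                                      name: app
                                      image: acme/app
                                      restart_policy: always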

                            3. 2

                              How do you easily manage 3 different versions of PHP alongside 3 different versions of MariaDB? I mean, this is something that Docker solves VERY easily.

                              1. 4

                                Maybe if your team requires 3 versions of a database and language runtime they’ve goofed…

                                1. 8

                                  It’s always amusing to see answers that point at legacy and say “it shouldn’t exist”. I mean, yes, it’s weird and annoying, but it exists now and will still exist later.

                                  1. 6

                                    it exists now and will still exist later.

                                    It doesn’t have to exist at all. Like, literally, the cycles spent wrapping the mudballs in containers could be spent just… you know… cleaning up the mudballs.

                                    There are cases (usually involving icky third-party integrations) where maintaining multiple versions of runtimes is necessary, but outside of those it’s just plain sloppy engineering not to try to clean up and standardize things.

                                    (And no, having the same container interface for a dozen different snowflakes is not standardization.)

                                    1. 2

                                      I see it more like this: the application runs fine, but the team that was working on it doesn’t exist anymore. Instead of spending time upgrading it (because I’m no Java 6 developer), since I still want to benefit from bin packing, re-scheduling, … (and not only for this app, but for ALL the apps in the enterprise), I just spend the time to put it in a container, and voilà. I can still deploy it to several different clouds and orchestrators without asking a team to spend time on a project that already does the job correctly.

                                      To be honest, I understand that containers are not the solution to everything, but I keep wondering why people won’t accept that they have some utility.

                                    2. 2

                                      I think the point is that there is often little cost/benefit analysis done. Is moving one’s entire infrastructure to Docker/Kubernetes less work than getting all one’s code to run against the same version of a database? I’m sure sometimes it is, but my experience is that these questions are rarely asked. There is a status-quo bias toward solutions that allow existing complexity to be maintained, even when the solutions cost more than reducing that complexity.

                                      1. 4

                                        Totally agreed, but I’m also skeptical of the reflex to always blame containers for adding complexity. From my point of view, many things I do with containers are way easier than if I had to do them another way (I also agree that some things would be easier without them, too).

                                  2. 2

                                    Debian solves three different versions of PHP with Ondřej’s packages (or the PPA on Ubuntu).
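
                                    e.g., on Ubuntu (the PPA and package names are the real ones; pick whichever versions you need):

                                    sudo add-apt-repository ppa:ondrej/php
                                    sudo apt-get update
                                    sudo apt-get install php5.6-fpm php7.0-fpm php7.2-fpm
                                    # each version runs as its own FPM service, side by side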

                                    In anything but dev or the tiniest of sites you’ll have your database server on a separate machine anyway - what possible reason is there to have three different versions of a database server on the same host in a production environment?

                                    If you need it for testing, use lx{c,d} or vms.

                                    1. 3

                                      MySQL especially has broken apps in the past, going from 5.5 -> 5.6, or 5.6 -> 5.7. Having a single database server means having to upgrade all the apps that run on top of it in sync. So in practice, we’ve been running a separate database server per version.

                                      Can’t speak for other systems, though.

                                      1. 1

                                        As you said, testing is a good example of such a use case. Then why use VMs when I can bin-pack containers onto one (or many) machines, using fewer resources?

                                        1. 1

                                          That still isn’t a reason to use it in prod, and it isn’t that different from using LXC/LXD style containers.

                                          1. 1

                                            Do you have rational arguments against Docker, which builds on the same container technology as LXC? For now I don’t see any good reason not to use it. It’s like saying you don’t want to use a solution because you could use the technologies underneath it directly.

                                            1. 6

                                              It’s like saying you don’t want to use a solution because you could use the technologies underneath it directly.

                                              That’s a reasonable position, though. There are people who have good reasons to prefer the git CLI to GitHub Desktop, the MySQL console to phpMyAdmin, and so forth. Abstractions aren’t free.

                                              1. 1

                                                Exactly! But I don’t see such hatred for people using GitHub Desktop or phpMyAdmin. It’s not because you don’t want to use something that it doesn’t fit someone else’s use case.

                                                1. 1

                                                  As someone who usually ends up having to ‘clean up’ or ‘fix’ things after someone has used something like a GUI git client or phpMyAdmin, I wouldn’t use the word hatred, but I’m not particularly happy if someone I work with is using them.

                                                  1. 1

                                                    I can do interactive staging on the CLI, but I really prefer a GUI (and if I find a good one, would probably also use a GUI for rebasing before sending a pull request).

                                              2. 2

                                                If I want a lightweight machine, LXC provides that. Docker inherently is designed to run literally a single process. How many people use it that way? No, they install supervisord or whatever - at which point, what’s the fucking point?

                                                You’re creating your own ‘mini distribution’ of bullshit so you can call yourself devops. Sorry, I don’t drink the koolaid.

                                                1. 1

                                                    Your argument is flawed. You justify Docker being useless by generalizing from what a (narrow) subset of users is doing. Like I said, I’m ready to hear rational arguments.

                                                  1. 2

                                                      generalizing from what a (narrow) subset of users is doing

                                                    I found you 34K examples in about 30 seconds: https://github.com/search?l=&q=supervisord+language%3ADockerfile&type=Code

                                                    1. 1

                                                      Hmmm, okay, you got me on this one! Still, I really think there is real utility in such a solution, even if, yes, it can be done in many other ways.

                                  3. 2

                                    While I greatly appreciate this perspective, the overall tone is challenging. I do agree strongly that we have a tendency to optimize early, and we should resist that urge as much as possible. I also agree that we should have reasons to add complexity; those are great truths that it helps to be reminded of continually.

                                    I have run applications on bare metal, run my own virtualization on bare metal, configured my own switches and routers, and now run applications in the cloud with and without Chef/Ansible. The universal experience is that, despite working with some of the most wonderful and talented people I could ask for, we have always run into problems due to infrastructure. There is no perfect solution. I am sure I can find blog posts about why you shouldn’t use x in production because “it didn’t work for me”.

                                    While I do not have personal production experience with Docker, that is likely to change in the next 1 to 2 months. I like the overall idea of abstracting away certain configuration aspects, but I expect there will be tradeoffs - not unlike every time I have changed any piece of infrastructure.

                                    That being said, if it doesn’t work… we can change again. I don’t think I would “regret” it unless there was nothing to learn or take away from the experience.

                                    1. 1

                                      I love Docker. I really do, but here’s the thing that we’ll regret about Docker, and we don’t need a massive article to explain it.

                                      You are letting regular users do this:

                                      echo 'echo "Hello." > /etc/motd' | docker run -i -v /etc:/etc -- bash

                                      Hooray! I made a file as root! I don’t even need sudo or su any more!

                                      Here’s hoping you’re not running the Docker HTTP API locally, so that I can do that to you over HTTP in a coffee shop. Wait… You’re not, right?

                                      While I’m at it, maybe I’ll flip the setuid bit on your shell, or cat my system root over to another server!

                                      This is great!

                                      1. 2

                                        What, does that actually work!?

                                        1. 6

                                          No. Connecting to the Docker daemon requires root, so that command will fail unless you’ve modified things so that regular users can access the daemon.
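
                                          (The usual modification being something like this, which is effectively granting root; the username is invented:)

                                          sudo usermod -aG docker alice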

                                          1. 1

                                            Obviously this assumes your user is in the docker group. I’m suggesting that - if we want to point fingers at Docker - the bigger issue is these kinds of things happening in places like development environments.

                                            I’m much more worried about someone being able to scp a developer’s root device because they’re running the Docker API without auth than I am worried about Docker failing in production, for instance. People who run Docker in production generally know how it works. People running Docker on workstations generally have no idea, and will happily paste commands as they’re told.

                                            1. 0

                                              and will happily paste commands as they’re told.

                                              This is not a docker specific problem. I can give you a long list of non docker commands that will do bad things if pasted blindly.

                                              1. 1

                                                The difference is, specifically, that it surprises most people I’ve worked with on Docker projects that when you launch a container, the process runs as your host system’s root user. Although this is obvious to someone who knows how Docker works, it’s not obvious to everyone, and it is dangerous.

                                                Anyway, my original point was mostly sarcastic, and apparently that didn’t come across well.

                                          2. 2

                                          Yeah. When you launch a container, it runs on the same kernel as the Linux host (no VM), as the root user. If you mount a directory as a volume, you can essentially access it as root. A lot of people don’t realize the permissions they’re giving Docker containers, and I personally believe this is probably the most concerning issue with Docker right now.
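
                                          A two-line demonstration (alpine is the real official image; the file path is just an example):

                                          docker run --rm -v /etc:/host-etc alpine touch /host-etc/pwned
                                          ls -l /etc/pwned   # now exists on the host, owned by root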

                                          We have all of our development on managed remote servers, because running local services in Docker is super dangerous if you don’t understand how Docker works internally, imo.

                                            1. 2

                                            Isn’t that one of the limitations gVisor is trying to remedy by “controlling” all syscalls?

                                              1. 1

                                                Hmm! Maybe! gVisor looks interesting but I haven’t read much on it yet :)

                                          3. 2

                                          Well, hopefully no one would do -v /etc:/etc on random containers that don’t actually need to touch the host’s /etc.

                                            1. 2

                                              My concerns are 100x more about workstations than servers. So many developers are running Docker and don’t even know what it does.

                                            How many people just copy/paste docker run commands to get work done? This is a social exploit against Docker users, but I do believe it’s still a valid exploit, and I don’t think dismissing it helps anyone.

                                              1. 0

                                              Agreed; the stupidity of the user can’t really be blamed on the tool. The user can just as easily do dumb things with standard Linux tools.

                                                1. 1

                                                  To be clear, I’m not blaming docker. I feel like I very clearly stated that this is going to be something that people regret about using docker. I like and use docker a ton.

                                                This issue is rooted in how people use it, not in Docker itself; Docker is perfectly capable of letting you avoid this.

                                                  If you only judge the security of your devices by the idealist situation that you assume of people, you’ll never be secure. People will continue to impress with unexpected approaches to these things - even with clear guidelines explaining to the contrary.

                                                Being an idealist is just not a reasonable way to approach this sort of issue. To be doubly clear (yes, redundant), I am talking about what actually happens - not what happens in the imaginary world of theoreticals and idealism.

                                                I have noticed enough people with this issue on their development machines that I honestly can’t believe this comment attracted such controversy. People with a reasonable amount of access to important information have issues like this, and it’s terrifying.

                                            2. 1

                                              This really misses the most obvious argument for using Docker: because it’s easy.

                                              It’s really as simple as that. It saves me time, and I spend that time doing more useful things.

                                              1. 2

                                                The author contrasts “easy” and “simple”, agreeing that Docker may be easy but arguing that it doesn’t simplify things. (Inspired by Rich Hickey’s talk Simple Made Easy, https://www.infoq.com/presentations/Simple-Made-Easy, which I highly recommend.)