1. 39
  1. 13

    A bit of a shallow argument against Docker, there are still a lot of unanswered questions about managing the application. How are the VMs that the go binary is deployed to managed? How is the networking configuration and service discovery managed? How do you know what the current version deployed is? How do you manage potential cgo dependencies?

    Maybe the application is simple enough that these questions don’t matter, however more complicated go services need to answer these questions. I’m not advocating for Docker specifically here, I actually use NixOS and systemd to handle a lot of these issues. I just don’t think it’s fair to say that scping a go binary is equivalent to a running a Docker image.

    1. 18

      I kind of agree, but in my experience a lot of Docker users are using it for packaging Python applications + dependencies and nothing more. A service manager like systemd and Rust/Go binaries could probably cut some operational overhead.

      1. 35

        a lot of Docker users are using it for packaging Python applications

        I think what happened is that pip+virtualenv/ruby+bundler/node+npm have dropped the ball so spectacularly that nowadays the idea of a single executable file you could deploy is seen as some advanced high-end feature instead of … literally the most basic thing you could ask of a compiler.

        1. 4

          Sadly, IMHO, it’s not merely seen as an advanced feature; it really is one. And there is no overlap between the heavy users of Python/Julia/R/Scala¹ and the option of deploying a single executable file. Due to the nature of those languages, the solution is “bring everything you need with you,” and it is easier to do that in a container than by fiddling with the other solutions.

          ¹ yes, I am aware that you can provide a jar and use the JVM and of Scala Native (that is still under development and without multi-threading the last time I looked).

          1. 2

            Python, Ruby and Node all started when any comparable compiled language was much harder to use. People wanted ease and speed of development. Go is helping with this story, being simpler than C++ and its ilk, but those three interpreted languages are well established and people figured ways around deficiencies (i.e., Docker).

            1. 1

              Python and Ruby both had compilers years (maybe decades?) before Docker became popular, and Node has been based around a JIT compiler (V8) from the very beginning. They just happened to be compilers which ignored that use case for whatever reason.

          2. 5

            I suspect a lot of those folks are inadvertently getting a dev environment that approximates their production. Lots of developers are running macOS or Windows locally while deploying to Linux.

          3. 14

            I just don’t think it’s fair to say that scping a go binary is equivalent to a running a Docker image.

            It kind of is, because Docker doesn’t do service discovery, or set up your network, or do anything else really except … run a binary. Oh, and docker {push,pull}. People use tools built on top of Docker to handle all of these things. Docker is little more than a glorified chroot with cgroups and built-in curl to DockerHub.

            the application is simple enough that these questions don’t matter

            Many multi-million dollar companies run on 2 or 3 servers, or a handful. These are not “simple” in the sense that they’re simple toy things, they’re only “simple” in the sense of uncomplicated. Turns out you don’t really need to have a very complex infrastructure to do useful things.

            1. 5

              I’m not advocating that Docker should be used here, I’m just saying that there are real problems that it solves. Like I mentioned, I use NixOS to solve most of these problems. For my current project I have a single box with a declarative network, Postgres, Caddy, and Elixir release setup and a DNS entry. I’m not even saying you need this, having an Ansible or Packer setup would work too.

              If they had mentioned any of those tools I would have been fine. Just having an scp to box setup is not enough when the box dies and you need to setup another one, but can’t figure out how the firewall rules worked.
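
              For what it’s worth, a sketch of what that kind of declarative single-box setup can look like in a NixOS `configuration.nix` (service names, domain, and port are illustrative, not from the actual project):

              ```nix
              { config, pkgs, ... }:
              {
                # The firewall rules live in version control, not in someone's memory
                networking.firewall.allowedTCPPorts = [ 80 443 ];

                services.postgresql.enable = true;

                services.caddy = {
                  enable = true;
                  virtualHosts."example.com".extraConfig = ''
                    reverse_proxy localhost:4000
                  '';
                };

                # The app itself runs under an ordinary systemd unit
                systemd.services.myapp = {
                  wantedBy = [ "multi-user.target" ];
                  serviceConfig.ExecStart = "/opt/myapp/bin/myapp start";
                };
              }
              ```

              If the box dies, rebuilding an identical one is `nixos-rebuild` against this file rather than archaeology.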

              1. 3

                You’re talking about completely different things than what this article is about. Who says they’re not using some sort of scripted solution to set up a server?

                1. 3

                  OK so based on the comments, they don’t have a database or any state. I guess the point of the article is that golang binaries are easier to run than Python applications assuming only static asset dependencies? Or that simple applications are simple to deploy?

                  I would agree with that and also that it doesn’t warrant docker. I could do something similar with Java using jpackage. However as soon as you have a cgo dependency, like go-sqlite, you might not be able to just scp the binary.
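
                  The cgo caveat shows up right at build time. A rough sketch of the difference (assuming a Go toolchain; module contents are hypothetical):

                  ```shell
                  # Pure-Go build: a static binary you can scp to basically any Linux box
                  CGO_ENABLED=0 GOOS=linux go build -o app .

                  # With a cgo dependency such as mattn/go-sqlite3, the default build
                  # links against the build machine's libc, so the binary may fail to
                  # start on a server with a different/older glibc:
                  CGO_ENABLED=1 go build -o app .
                  file app   # reports "dynamically linked" rather than "statically linked"
                  ```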

              2. 1

                I mean OK it’s a glorified chroot with cgroups, but it’s chroot with cgroups. Like great! a single command instead of N commands! Plus there’s like… tagging and stuff, so you’re able to manage it with stuff that’s familiar (built-in version control, which is better than a lot of early-stage infra).

                Even when you just have 3 servers, having an easy way to carry the config reliably for a fourth, or testing that config locally, or all the other things, is very useful. And yeah, Docker doesn’t capture everything, but its big-ball-of-mud strategy means that it’s hard to miss a lot of the common issues with trying to get a new server running IMO.

              3. 12

                Managing VMs, network config, service discovery, current version, dependencies: in most cases those are all handled outside of Docker. Or that’s what I’m seeing. Are your experiences different?

                1. 1

                  Getting the current version and managing native dependencies are definitely handled by docker. If you’re on a single box and use docker compose, network config and service discovery are handled declaratively and can be replicated by any dev on the team. You don’t have to keep a doc somewhere with a port map or worry about them conflicting with something running on your local machine. If you’re on multiple boxes, then yeah those things aren’t handled though there are many tools that provide those.

                  I guess my point about the VM config is that the surface area of configuring the VM is much lower when everything is in docker. Sure it doesn’t help with actually provisioning the VM or its config, but you can reduce the configuration to setting up docker and a firewall rather than needing to install the correct version of postgres, nginx, and any other deps.

                  Edit: I’ll admit that I should have put developer setup as a point for docker over service discovery, which doesn’t make much sense.
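
                  Concretely, the single-box case can look like this (a sketch; image names, versions, and ports are illustrative):

                  ```yaml
                  # docker-compose.yml
                  services:
                    app:
                      image: myorg/myapp:1.4.2   # "what version is deployed" is answered right here
                      environment:
                        # the service name "db" resolves via Docker's internal DNS,
                        # so there's no port map doc to maintain
                        DATABASE_URL: postgres://app@db/app
                      ports:
                        - "8080:8080"
                      depends_on:
                        - db
                    db:
                      image: postgres:16
                      volumes:
                        - pgdata:/var/lib/postgresql/data
                  volumes:
                    pgdata:
                  ```

                  Any dev on the team gets the same topology from `docker compose up`.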

                2. 4

                  Maybe the application is simple enough that these questions don’t matter

                  If people would think pragmatically about what they really need to do, as in what the task at hand actually is, that would almost always be the case. The application is simple enough.

                  1. 2

                    One specific thing that is much easier in containers vs. VMs is initial configuration. With VMs you use stuff like cloud-init which (in my limited experience) is quite painful and hard to test. With containers it’s just “here’s an env variable” or “here’s a file that gets dropped in the right place.”

                    1. 1

                      How are the VMs that the go binary is deployed to managed?

                      That’s a question one should not forget about when using Docker as well. In the end it doesn’t change much. Either you DIY (rare with Docker now, for various reasons) or you use a finished solution. There are many, and there are overlaps. For example, you can use Nomad with the exec driver (if you want isolation), the raw exec driver (if you don’t need it), the Docker driver for a Docker image, or the JVM driver, which is great if you want to do something similar with fat JARs.

                      But in the end it isn’t Docker that helps you with deployment; in most instances it’s some separate tooling.

                    2. 7

                      We had a service, written in Go (static binary), with a systemd unit file managing startup and shutdown. It was packaged as an RPM. It just worked.

                      We had a guy who insisted we Dockerize everything. It took a very simple thing, that used existing and well-supported operating systems mechanisms, and made it way more complicated than it ever needed to be.

                      Docker definitely has its place, but it’s overkill for a lot of the situations where I’ve seen it used.
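
                        For reference, the whole “well-supported OS mechanisms” approach fits in one small unit file. A minimal sketch (paths and names are hypothetical):

                        ```ini
                        # /etc/systemd/system/myservice.service
                        [Unit]
                        Description=My Go service
                        After=network.target

                        [Service]
                        ExecStart=/usr/local/bin/myservice
                        Restart=on-failure
                        # Two of the isolation features people often reach for Docker to get:
                        DynamicUser=yes
                        ProtectSystem=strict

                        [Install]
                        WantedBy=multi-user.target
                        ```

                        Ship that alongside the binary in the RPM and `systemctl enable --now myservice` is the entire deployment story.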

                      1. 4

                        This article feels click-baity, in the sense that it tries to produce a sense of conflict where there is none. It has this tone of saying “no, we don’t need Docker and we will not use it, although not everybody will agree,” and then it explains that they have just a single Go binary with no database and, well, don’t need Docker. I don’t see what is debatable there: why would they need it? Why even write about it?

                        1. 0

                          100% agree. They have a very specific, minimal setup - if they don’t need containers - excellent!

                          If you do need orchestration though to facilitate blue/green deployments, resource fencing, auto-scaling, network isolation and so on - you would probably be well off with containers and kubernetes.

                          So in essence - simple use-cases can get away with simple solutions. /shrug

                        2. 2

                          It seems to me that some people don’t have the following things to take into account in their development life:

                          • developers all may have different operating systems on their work machines
                          • software needs to be tested while being developed by several people concurrently in an environment that’s identical to the production systems
                          • this all needs to work reliably with minimal hacking

                          When these things combine, you will probably need Docker, or something similar.

                          If you’re one of those people, you’re in luck and/or had very skillful infrastructure planning! Not a bad idea to try to keep things that way.

                          1. 1

                            No, the reason we went with Go was that golang’s packaging is so much better than Node’s. Or Java’s. Or C#’s. We get a single binary.

                            At a previous job, we had many Java servers with jar plugins that required manual scps to the OVH boxes.

                            Building was as easy as:

                            mvn package

                            Testing was as easy as scping the jar to the test box and testing functionality manually. I’ve written many unit tests and integration tests that look like they account for everything you would ever need, but that facade fades as soon as you push it to production and customers instantly find bugs you never thought possible.

                            It sure would be nice if we could spend a couple weeks building the perfect CI/CD pipeline, an elegant deployment process, along with pretty dashboards. But we have software we need to ship in order to get users in order to drive subscriptions.

                            CI/CD seems to be one of the first things I do when making a new project, open source or private. If your deployment process sucks, then it’s going to be extremely challenging to patch bugs in prod.

                            we have a workout video app – we don’t have the same scalability problems as Twitter

                            Current product I’m working on was in a very similar situation. When I started, our servers had very few users, and very few transactions that needed to be processed. Scaling this >100x in the last year has taught us a lot of things, including: you never know when you need to scale. Shit hits the fan, and then you need to upgrade a box or make your server distributed as quickly as possible.

                            1. 1

                              From what I read in the article it doesn’t seem like they would have a tough time scaling. They run a single static binary server with no server-side state. They could set up a deploy to replicate that to a massive amount of instances if needed, and put up a load-balancing proxy in front of it. This is bread-and-butter stuff, not exactly rocket science.

                            2. 1

                              This workflow very slightly bugs me and it took me a while to work out why. The one part I infinitesimally dislike is that scp app user@host: is pushing to the live server rather than updating a repository (that the live server pulls from, or CI/CD pushes to the live server).

                              That’s hardly the end of the world anyway. Aside from that I think I’d opt to use ansible or something for the rest of the config on the machine.

                              Fwiw IME worthwhile things you can do with containers without significantly disrupting their current set up:

                              • make sure a known specific go compiler binary is being used to produce that binary
                              • make sure the tests are being run on roughly the same OS in prod and dev
                              • even if the developer is on windows or macos
                              • QA can reproduce the environment, like you could with a vagrant box ;)
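
                              The first two bullets in particular fall out of a pinned multi-stage build. A sketch (toolchain and base-image versions are illustrative):

                              ```dockerfile
                              # Pin the exact Go toolchain used to produce the binary
                              FROM golang:1.22-bookworm AS build
                              WORKDIR /src
                              COPY . .
                              RUN CGO_ENABLED=0 go build -o /app .

                              # Same base OS for tests in dev, CI, and prod,
                              # regardless of the developer's laptop
                              FROM debian:bookworm-slim
                              COPY --from=build /app /usr/local/bin/app
                              ENTRYPOINT ["/usr/local/bin/app"]
                              ```

                              They could build and test in that image while still scp-ing the resulting binary to prod exactly as they do today.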