1. 43

This is a counter-post to the one at https://lobste.rs/s/xov7nz/what_s_your_container_less_deployment :)

I’m curious to hear people’s thoughts. What are the big reasons for avoiding containers in production systems?

FWIW, my experience has generally been positive, minus the occasional hiccups/bugs here and there. I’ve generally found the gains in productivity make up for the complexity issues that arise, most of which can be solved with some education and tooling.

(Full disclosure: I worked at a so-called “container company” for a while and was pretty involved in this space, but I don’t work there anymore.)

  2. 22

    I think containers appeal to folks who don’t have an alternate way of including all their dependencies together in a single file, but for folks on the JVM we already have that; it’s called an uberjar. It still depends on having the JVM installed on the host, but that’s one debian package which is really easy to automate, and as long as you stick with openjdk 8 (which is backwards-compatible down to the dawn of time) you don’t have to worry about versioning.

    Basically all the selling points of docker are things we already get for free from tools we already know, with way fewer moving parts.

    1. 8

      I’ve often thought that deploying all one’s dependencies together is just inherently wrong, tbh. When people show a little restraint and don’t depend on unstable features all the time, it’s totally possible for lots of packages to coexist while depending on system-wide installations of things.

      1. 13

        Well, the best of both worlds is the nix model; you can depend on exact versions you know are well-tested against but have them provided by the system-wide package manager. If I didn’t already use uberjars for deployment, I would probably be a lot more interested in looking into that, but again, the sales pitch addresses problems I don’t have, so … why bother?

        1. 2

          Yeah, fair enough. We use shaded jars at work too, I’m just dreaming here.

          I ought to try nix, though.

      2. 2

        I do think that there is the packaging aspect and the segmentation aspect. Uberjars take care of the packaging. How do you prevent the JVM for your app from writing over the JVM for another app running on the same server? Same with which network interface it will listen on. The JVM does a good job of limiting the memory it will use, but you must trust the other residents of the server. In other words, you usually must trust yourself from a month ago…

        1. 4

          In my experience it’s unusual in a JVM deployment to have a server that has more than one thing running on it. The JVM is so memory hungry that it usually makes more sense to dedicate a whole machine to it.

          1. 3

            True, at most people’s scale, it makes sense to run one app per host. Amazon will happily sell you two hosts for 10% more, and your stakeholders will happily pay for that rather than the equivalent, much higher, price in human resources to run the single larger host more efficiently.

            It’s one reason why, for most people, running things on k8s is just crazy.

      3. 18

        I’m interested to see what others say on this, but please could commenters distinguish between application containers and system containers in their replies? Application containers are like Docker: no init process, massive paradigm shift in workflow and deployment, bundle everything into a “golden image” in a build step, and so on. System containers are like lxd: pretty much the same workflow as deploying to a VM except that memory and disk space don’t need to get partitioned up, though you might use differing deployment techniques and tooling just as you might with non-container deployments.

        1. 20

          I think a lot of people are unaware that system containers exist thanks to the hype and popularity of docker. Having worked with both, I personally prefer the system container paradigm and specifically the LXD ecosystem. Although that’s probably because I’m already comfortable working with VMs and standard Linux servers and there’s less new stuff to learn.

          1. 9

            I wish I had gone more down the lxd system container route. I feel like I could have taken my existing ansible workflow and just applied it to containers instead of KVM virtual machines (via libvirt). I think I started going down that route for a bit, but then just ended up rewriting a lot of the services I used to run as regular Docker containers. I ended up writing my own provisioning tool that communicated directly with the Docker API for creating, rebuilding and deploying containers (as well as setting up Vultr VMs or DigitalOcean droplets with Docker and making them accessible over VPNs):

            https://github.com/sumdog/bee2

            In general I tend to like containers. They let you separate out your dependencies and make it easy to try out applications if project maintainers supply Dockerfiles or official images. But there are also tons of custom-made images for things that may not get regular updates, and you can have tons of security issues with all those different base layers floating around on your system. I also hate how it’s so Linux specific, as the FreeBSD docker port is super old and unmaintained. A Docker engine implementation in FreeBSD that uses jails (instead of cgroups) and ZFS under the hood would be pretty awesome (FreeBSD can already run Linux ELF binaries and has most of the system calls mapped, except for a few weird newer things like inotify et al., which is what the old/dead Docker port used).

            I have huge issues with most docker orchestration systems. They’re super complex and difficult to set up. For companies they’re fine, but for individuals, you’re wasting a lot of resources on just having master nodes. DC/OS’s web interface is awful and can eat up several MBs of bandwidth in JSON data per second!

            I’ve written a post on container orchestration systems and docker in general:

            https://penguindreams.org/blog/my-love-hate-relationship-with-docker-and-container-orchestration-systems/

            1. 4

              I don’t use containers and don’t know the difference between docker and lxd any more than what you just described.

              It’s not clear to me what application containers provide that per-application-user-ids don’t.

              System containers sound like something that may interest me. I build a lot of VMs.

            2. 15

              If anyone is interested in seeing how opinions change over time, this same question was asked two years ago.

              1. 0

                Awesome!

              2. 13

                I feel like docker is too commercial. I worry about lock-in and how this business might turn.

                I have experimented with it and it seems like a nice feature set. I am also encouraged by appc/rkt/buildah/podman but I have not yet seen a really simple tutorial (though I admit I haven’t looked very hard). It doesn’t seem to me like these have seen wide use – likely because of docker’s popularity.

                1. 2

                  Podman is often just alias docker=podman once you get it installed. Quite the escape hatch if docker implodes.

                2. 11

                  Short version: It is tempting, if the only tool you have is a hammer, to treat everything as if it were a nail.

                  I have been a bit early to the containers game. I used FreeBSD jails before, and I was also a dotCloud customer, so I knew early about what ended up becoming Docker; maybe because of that I am also early to see the downsides.

                  That’s not to say containers are bad in every case. However, there currently is quite a bit of hype and marketing, so not every decision to use containers is rational. That said, I should probably mention that the majority of my income comes from consulting in that general area. So on the financial side I am biased pro Docker (and containers at large).

                  There are various attributes given to Docker that aren’t inherently related to Docker. There’s no way I’m gonna mention all of them, but maybe a bit of history. An aspect that got people initially very interested in Docker is simply that it provided a quick way for a developer to verify a project works correctly in a server-like environment and not just on the current developer machine setup. There were two more widely used approaches (outside of enterprise) to this from what I have seen. One was Docker and the other VirtualBox/Vagrant. Docker won by the setup being easier in pretty much every aspect. For quite a while many companies used Docker only on the developer side.

                  [I use Python and node.js as a placeholder here. Replace it with anything else, really]

                  Another aspect then was that many commonly used Linux distributions do a quite bad job at providing, for example, an easy way to set up a Python project, especially with python2/python3, with some local libraries, some third-party ones, some dependencies from additional repositories, etc. Young companies (or projects just starting out in general) often don’t have experienced sysadmins from the beginning. These were dotCloud’s target audience, similar in many ways to Heroku.

                  I’m talking about developers who’d try adding sudo in front of anything to see if it somehow improves stuff, or simply people who follow misleading tutorials, because they just want to set something up quickly as a prototype. When a node.js dependency has a C++ dependency of a specific version that might even need to be compiled manually, one might just copy and paste some code and check if it is working, to get a prototype running and see if it’s viable. While it sometimes ends up there, that doesn’t really suit a production environment, where reproducibility and recent security patches are important.

                  So through that there was another use case. In the end what it boils down to is making (mostly) scripting languages and any dependencies into the equivalent of a static binary. Docker is pretty much optimized for this use case: creating a generic package that can be run everywhere. This matches what, for example, Go or Java projects do in many cases, but with everything bundled into one single image. That includes any sort of asset files (images, javascript, etc.). All that is managed via a very simple definition language, the Dockerfile.

                  In that regard Docker is a replacement for either a build system or a static binary (or Java one might say).

                  While there’s more to Docker, I think this covers the majority of what the average developer really wants from it.

                  From the system administrator’s perspective such a package/single-binary-style artifact is also nice, because there’s a lot less reason to learn the ins and outs of a language’s ecosystem simply to set up some sort of service. Also there are a lot of tools, cloud providers, services, etc. that know about it.

                  That common format allows for projects like Kubernetes to exist. However, if one even looks a tiny bit further one sees projects, like Nomad, that have the same core functionality but can do the same things with Java’s jar files, executable binaries, etc., all of them (optionally) isolated and using the same basic principles.

                  Another trend that is independent, but also often mentioned as a benefit of Docker, is that cloud services (as well as containers) forced/pushed developers into thinking about and structuring projects differently. State and logic are now usually strongly separated. This is a necessity in many situations, and there are now many developers who don’t really know a world where one makes use of anything the system provides. I don’t really find that good or bad. A web application (and the majority of new projects are something that talks to an HTTP API) doesn’t really need to know anything about an OS. This might even be why PHP got so big. One could essentially go what we call today “serverless” by just renting web space and a managed database. This is a similar experience to what we get with Docker, cloud functions, etc. It frees developers from thinking about what is going on.

                  However, depending on what you are trying to achieve this might not be the optimal route, in a similar way that not everybody ended up just getting webspace with PHP support and a managed database, despite having a similar use case. Sometimes one wants to essentially copy that webspace style environment on your own infrastructure. Such an approach nowadays is easier thanks to Kubernetes, Nomad and many other projects. Especially when software is designed in a modern way.

                  That said, I think the main benefit of Docker itself is that common interface between users, developers and admins. The great achievement in my opinion doesn’t lie so much in the docker daemon, but in essentially creating a standard with the Dockerfile. Widely used interfaces tend to simplify things, allow for innovation and the emergence of new approaches and ways of thinking. Just like HTTP. It’s not nginx or Apache that make it great, but that it’s a common interface, something that a web application front end, a mobile app, a text editor, an instant messenger and a calendar all talk to.

                  However, all of that, especially right now, causes a lot of complexity, and that’s why projects and companies selling simplicity and ease of use in the area are currently booming.

                  Oftentimes I see complexity that has merely moved to other parts of the system mistaken for complexity that has been removed completely. Many tools, projects and ecosystems in this field are still in their infancy, either very young or simply changing rapidly, frequently breaking things in minor versions. This becomes especially visible once a project becomes more complex and grows beyond a demo or the prototyping phase. While this in a way is okay, the marketing side (including in non-commercial projects) frequently promises a more stable system with higher availability. In reality it doesn’t really seem to turn out higher when comparing it with a classical (“legacy”) setup, given similar investments into what is essentially infrastructure. Sometimes though it looks better on paper, because the spending is spread between cloud costs (including related third-party services), DevOps Engineers/SREs and development (as developers also spend time on a certain part of infrastructure, be it the Dockerfile or other integration work).

                  The downsides of these “legacy systems” often disappear if one designs things in the way containers absolutely require in the first place. This includes things like having an easily repeatable way to handle deployments, having configuration and state outside of your logic (for example in a database), having mechanisms in place to rapidly rebuild infrastructure, having everything in version control, etc. How hard that is from your current status depends on many factors. There are tools for all of that, and while one might say one can get that with, let’s say, Docker and Kubernetes, you make a trade against limitations, complexities, sometimes modularity, often stability (systems have bugs, and older systems that don’t constantly change tend to be more stable overall). Whether that trade is worth it depends on the actual case. You can set up a modern, hassle-free environment with both, but you can also completely mess up both. Similar time and money investment, in my personal, subjective experience, leads to a similar quality of outcomes.

                  This doesn’t mean containers never make sense. It only means that one should think the same way about containers as one thinks about other tools. Do I just fall for marketing? Does it solve a problem I actually have? Are there alternatives that might be better, cheaper or have other benefits? Does it solve my problem or just replace it with another? If I only have a certain use case, am I taking the right approach with my overall architecture, buying into problems by migrating everything, when it would be safe, sane and easy to only do so to a certain degree?

                  Based on these and a lot more questions that depend on the actual goal, a lot of context and the current situation, very different paths might make sense.

                  As mentioned, there are other reasons to use containers. Security is one of these. While that in my personal opinion isn’t exactly Docker’s greatest feature, it is certainly a use case for containers in general. FreeBSD jails are called jails for a reason. You can lock them down a lot with relative ease, while with Docker, if the only person managing images is someone without any admin experience who is just trying to make things work somehow, that part might get skipped completely. It’s part of the reason why there are many products focusing on making containers secure nowadays.

                  Of course there are other upsides and downsides, and a lot in the field is changing; problems are being solved and new ones come up, requiring changes in design. Whether using a container makes sense, and what the right approach is, is a question of circumstances. It’s not good practice to adopt them just because Google is using them. By that standard one should really have one’s own data center.

                  1. 11

                    My opinion is not based on hard facts; it is about how containers feel to me, and it is targeted specifically at Docker. So this comment is in essence a personal opinion piece with which you may disagree (and you may be right).

                    I really dislike containers for multiple reasons, among which the chief one is that I think they are a failure of the web back-end ecosystem and of most of the language systems used for that kind of task to handle dependencies and deployment. I think that shipping a full Linux distro as a “reasonable way to guarantee that an app works as expected” should be regarded as a cautionary tale of how bad things are these days, that we can’t (safely, in many cases) ship binaries or guarantee that our source will compile on some server.

                    Be aware that I’m not against automation, or devops, or anything that makes deployment easier. I just wish that we built our web on top of less ugly ruins. If we were in the home appliance business, containers would be the same as going to Argos to buy a dishwasher and receiving a whole new house with a dishwasher in it, and hearing the shop assistant say: “the dishwasher is contained in a tiny alpine house, probably in Switzerland or some other cloudy place. That’s so we can guarantee the appliance is plugged in and has a water supply when needed, you don’t need to set up anything, just go to it with your clothes, here are some discount vouchers for Amazon Interrail service.”

                    I’m quite intrigued by how NixOS has been handling devops and wish to see how Guix will go. I think my future deployment approach might be something like that.

                    1. 8

                      At my company we’re experimenting with them but my experience, at least with Docker, is that it takes a nontrivial amount of work to automate and set up… when we’re already doing a nontrivial amount of work to automate and set up things. So, I don’t see much reason to add an extra layer of complexity. Our smallest unit of management is a physical machine and that’s fine.

                      We’re also building, essentially, non-internet-connected embedded devices with a lot of custom hardware, so the costs of making it work seem higher and the benefits lower. We might end up using them for dev or test environments sometimes though, we will see.

                      1. 8

                        I’m generally opposed to containerization of all forms: packaging most of an OS with every application is incredibly wasteful & allows compatibility problems to balloon longer than they ought to (by letting developers ignore them for longer, producing a greater difficulty in eventual porting). Rather than package extra stuff, it makes more sense to bite the bullet and start off with an awareness of exactly what range of versions & implementations of dependencies you will support – which has the added benefit of forcing you to justify dependencies (almost always resulting in eliminating most of them in favor of a new, often better, implementation of whatever tiny corner of a sprawling library or framework you actually need).

                        Containers are sometimes harder to justify avoiding in a business context, where technical debt isn’t factored into the balance sheet in a serious way, it’s often more important to hit deadlines or streamline scale-related operations like deployments and restarts than to make high quality maintainable code because the business might not survive long enough for maintenance to be necessary, and it’s often cheaper in the short run to buy more computing resources than to pay engineers to get rid of unnecessary resource usage of all kinds. They let operations put out fires in the short term that otherwise would require developers to get up at 3am to debug. Kicking the can down the road may not be good for employees or users or the planet, but for the business it’s often the difference between healthy profit and slow death. (IMO, this is not an argument for containers so much as it is an argument against the way we do business.)

                        There are occasionally other situations where a container makes sense. Volunteer mass-computing projects like the ArchiveTeam’s deploy as images & containers for basically the same reason that businesses do, only moreso: in order to archive all the stuff they want to, they need a very large scale deployment of code that does things of varying complexity, and the scale in combination with the volunteer nature of users means very diverse environments, some of which are guaranteed to be broken in ways that the developers can’t be expected to understand, let alone debug; furthermore, since the workers are set-and-forget, they need to be able to push new configurations to all the connected workers. Unlike a centralized service, they have no control over or information about the environment they’re running on, and they may need to do things that other sandboxes prevent.

                        That said, I see people recommend that twenty-line python scripts be deployed by end users as docker images, and it blows my mind. Like, rather than spend a half an hour making your code compatible with every conceivable platform and version, you’re gonna inflate the size of the code by a factor of a hundred thousand?

                        1. 1

                          If you already have a blessed python base image, you don’t take up any space but the diff between that and the 20 line python app. Which is usually 20 lines. Compare this to a whole xen domU for that 20 line python script and you’re actually quite a bit ahead.

                          Docker is much like a powerful chroot’d process. You have to stick all the libraries you will need in the chroot, but then you can be fairly confident that you will not have issues with that python script reading your main app’s user table. With Docker you also get memory and network isolation.

                          Anywhere I put docker, you could replace with any container technology like OpenVZ, rolling your own cgroups, or rkt.

                          1. 1

                            Sure, but why even bother with chroot? Why not just write properly portable code in the first place? Particularly with modern scripting languages, it’s quite straightforward to write portable code, and the only potential downside is that occasionally you will have to eschew shiny new language features in favor of solid old language features.

                            1. 1

                              Security is the reason you chroot. If someone gets a file read exploit in your app, they can’t read your password file or your ssh keys that can read your github repos or your db files. They can’t install their own ssh keys in a place sshd will look for them. It’s basic defensive operations.
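                              Roughly, that mechanism in plain C (just a sketch; /var/empty and the nobody uid/gid 65534 are placeholder choices, and container runtimes layer namespaces and cgroups on top of the same idea):

                                #include <stdio.h>
                                #include <unistd.h>

                                int main(void) {
                                    /* Confine the process: after this, "/" is an empty directory, so a
                                     * file-read exploit cannot reach /etc/passwd, ~/.ssh or the db files. */
                                    if (chroot("/var/empty") != 0 || chdir("/") != 0) {
                                        perror("chroot");
                                        return 1;
                                    }
                                    /* Drop root afterwards, otherwise the jail is trivial to escape. */
                                    if (setgid(65534) != 0 || setuid(65534) != 0) {
                                        perror("drop privileges");
                                        return 1;
                                    }
                                    /* ... exec or run the actual service here ... */
                                    return 0;
                                }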

                              1. 1

                                For a service, in a business context. Which is fine, but ultimately uninteresting. As I mentioned in my initial context, all kinds of containerization are justified by business contexts.

                                If you want to install and run a tool, then you explicitly want to be able to compose it with the other tools on your system, use it against data on your system, and so on. Containerization in that context is wasteful and necessitates extra effort – shit that ought to communicate over pipes now needs sockets. And, the first time you set up a docker container, you gotta download the whole thing & store all of it in ram (since it’s never gonna be exactly the same binaries as your workstation), so you aren’t actually saving anything through reuse until you’re running at least several overlapping containers (but probably hundreds, compared to running a regular tool locally).

                              2. 1

                                It’s a mechanical way of giving you a guarantee that you have portable code.

                                It gets you from “I think this is portable” to “I know this is portable”. Very useful when reasoning about systems.

                                1. 1

                                  Containerization doesn’t make code portable. It attempts to obviate portability concerns by shipping the whole environment – which only works when interoperability with the existing environment (the core feature of portability) is irrelevant.

                                  Sure, portability is more difficult than shipping a whole environment in a sandbox. It can sometimes be impossible to use non-portable third party dependencies in a portable way. But, when those dependencies aren’t portable (or are less portable than they claim), that is a bug on their part & they should be fixed.

                                  Application isolation isn’t actually a good thing, outside of mass deployment of ‘naturally’ isolatable applications. It means composition is awkward or impossible.

                          2. 7

                            Because instances work just fine for our needs and we haven’t seen enough of a win in using them to justify the large additional overhead.

                            1. 7

                              (I am not devops, I am a software engineer) I just couldn’t get them (well, Docker) to work easily. I understand that maybe I was “using it wrong”, but here’s my viewpoint: I can, have, and do maintain and use full Linux VPSes, with all the tech in full stack web applications. PostgreSQL, MySQL, Apache, nginx, redis, memcached, git, Node.js, Ruby… etc. I can do this comfortably, make changes without too much trouble, and deploy versions without headscratching.

                              FAR from the case when I tried to do the same with Docker. I like the sales pitch of “Just make Dockerfiles and then any dev on your team, and all staging servers and production servers, can have the exact, correct setup needed to run the app! Woo hoo!” But, in practical, real-world terms, that promise wasn’t delivered to me.

                              So, until Docker (or whatever other alternative) is at least as easy as, if not easier than me managing the stacks in VPSes, me using containers isn’t going to happen.

                              1. 10

                                I can, have, and do maintain and use full Linux VPSes, with all the tech in full stack web applications. PostgreSQL, MySQL, Apache, nginx, redis, memcached, git, Node.js, Ruby… etc. I can do this comfortably, make changes without too much trouble, and deploy versions without headscratching.

                                Are you sure you don’t work in DevOps? :p

                              2. 11

                                because I can compile binaries statically

                                1. 6

                                  This is repeated a lot, but almost every time I hear it the people stating it mean something completely different from ‘static linking’, which is what the statement implies (‘compile’, ‘statically’). For instance I see quite a lot of gophers state this as well, yet when you look at the binaries, libc and a half dozen others are linked in. I’m not stating this is good/bad - that’s a different argument. I just don’t find this to be a factually correct statement, and when repeated a lot it is very misleading. If you look at most of the php or ruby interpreters in use you’ll find the situation is even more dramatically different, since the interpreter is written in C, is older, and has quite a few libraries that are written in C and linked in (rmagick, libxml, etc.). Furthermore most of the container shops I’ve interacted with don’t compile from source to begin with. It’s ostensibly one of the main reasons they are using containers in the first place: they seem to find it hard to compile/install the specific versions/libraries.

                                  To eliminate confusion, those who repeat this should probably clarify that they are creating filesystems with known, fixed versions of libraries that are in all likelihood 99% dynamically linked (the vast majority of Linux binaries are) - not statically linking them.

                                  1. 2

                                    So you don’t think people who espouse “static compilation” actually use static linking?

                                    1. 1

                                      No, I don’t - not in the “single binary, completely statically linked” sense, especially with the interpreted languages I mentioned. People aren’t sitting there compiling the interpreter from source and statically linking all the different libraries into it. In fact many of the interpreted languages’ APIs that use dynamic libraries aren’t even built that way - many of them are built explicitly to load dynamically. This is easily proven by looking at the code; it’s not even really debatable in cases like this. Again, this isn’t a right/wrong conversation - it’s more about a confusion that’s been promulgated, and I don’t think the OP or any of the upvoters were intentionally trying to deceive. I think it’s one of the many things that dev evangelists from camp container have misspoken about quite too often, and then posts with titles like “why are you not using containers” appear <– this is a huge reason why.

                                      1. 2

                                        I certainly wouldn’t want people to use the phrase “static compilation” or “static linking” to refer to an interpreted program… In this case though it seems quite possible that they were referring to compiled programs, statically linked.

                                        1. 1

                                          so the forest for the trees here is that if you are indeed statically linking via say go why would you need docker to facilitate that process? I think the go switches are something like go build -tags netgo -ldflags '-extldflags "-static"' - as I said before, ostensibly the reason people were using docker in the first place is “it works on my machine” and not having to guess what version of what libraries were being used. Not that it’s really common to link in random shared libs in go anyway - that’s not a go idiom.

                                          1. 6

                                            so the forest for the trees here is that if you are indeed statically linking via say go why would you need docker to facilitate that process?

                                            I think that’s precisely the point /u/coco was making.

                                        2. 1

                                          I do not really understand what you say… for me “static linking” and “static compilation” is the same, but I’m just a C programmer, and distributing standalone binaries that work everywhere usually amounts to adding “-static” to your makefile
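                                            For what it’s worth, a tiny one-file illustration of that (hello.c and the cc invocations are just placeholders; exact flags vary by compiler and libc):

                                              /* hello.c
                                               *
                                               * Dynamic (the default):  cc hello.c -o hello          -> needs libc.so at run time
                                               * Static:                 cc -static hello.c -o hello  -> no shared-library dependencies
                                               */
                                              #include <stdio.h>

                                              int main(void) {
                                                  puts("same source either way; only the link step changes");
                                                  return 0;
                                              }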

                                          1. 4

                                              Nah, I was confused - caleb was kind enough to help sort things out. I hear a lot of container folks state this as a reason for using containers in the first place, whereas you were stating that it’s a reason why you don’t need containers, and that’s completely valid.

                                              As for this direct comment - generally speaking there is a compile stage and a linking stage, but you’re right that for a simple program one command can accomplish both at the same time.

                                            sorry for the confusion

                                      2. 1

                                        you look at the binaries libc and a half dozen others are linked in.

                                        though those libs tend to be fairly reliable and pre-installed on virtually any system, whereas other libraries that might require a separate installation step can be linked into the binary (or perhaps distributed with the executable zip, Windows style)

                                    2. 5

                                      I don’t use them just because I wasn’t using them before and I haven’t seen any reason to start. What do I get for deploying container images instead of RPMs?

                                      My bar there is probably kinda high—I like being able to debug/fix things in place when they break, which container systems seem to frustrate, and the ones I’ve seen seem to take away things I’ve used to great effect in emergencies (like iptables).

                                      1. 4

                                        I’m part of a two-person internal on-prem apps team, without a supporting ops team. Most of our apps have single-machine deployments and don’t need to scale beyond that. Given this context docker seems like a lot of infrastructure overhead. If I had to scale, were in the cloud, or had a supporting ops team, I’d reconsider. But at the moment ansible for VM builds, GitLab CI for tests, and git checkouts with nginx/passenger are doing a bang-up job with minimal complexity.

                                        One other benefit I’ve heard touted is that your dev environment is more similar to your prod environment. While I appreciate the goals of that sort of consistency, rails apps come with a number of development-mode features which don’t play as nicely in the docker context (e.g. anything inotify based like code reloading or livereload). With rbenv & bundler I’ve found that the environments are similar enough.

                                        1. 3

                                          I still haven’t encountered a problem for which containers could be a good answer.

                                          I’m fine building servers using Ansible. All my services work together just fine on the same OS, and if they didn’t, my answer wouldn’t be “fuck it, let’s ship multiple OSes instead”.

                                          1. 4

                                            We use them, but that’s a policy decision that exceeds my pay-grade. I am deeply skeptical, because I think that the idea, while possibly interesting, has been put into wide usage well before the actual benefits and costs can be articulated. Personally, I always felt that unikernels were a much better use of the various virtualization hardware than running 40 years of misguided Unix sludge on top another distinct 40 years of Unix sludge.

                                            1. 2

                                              In our company we have other priorities. It is only a nice-to-have; we work on it passively, but it is not going to get the main attention any time soon. Good deployment doesn’t get investor attention, features do.

                                              1. 2

                                                I don’t use them for deploying alpha.sneakysnake.io, because they’re not free. There’s a non-zero overhead that I see no reason to pay. It’s not really any easier to use Docker to build and deploy images than it is to use Packer to do the same.

                                                I do use docker to build Linux binaries from my MacBook Pro, though.

                                                1. 2

                                                  I both do and don’t use containers, depending on the task.

                                                  Background: The majority of my work is in pyspark & deployed on Databricks. In pyspark, each host has 2 processes — a JVM where the core of Spark runs (the data plane) and a Python process that amounts to a control plane. The big exception to the “control plane” analogy is that you can write Python UDFs, in which case the Python process becomes a data plane also.

                                                  Why I use Docker:

                                                  • All our tests run in Docker to mimic the Databricks runtime as closely as possible.
                                                  • We’ve seen some very strange issues with UDFs that mostly disappear when we run inside a Linux environment.
                                                  • Native Python extensions behave a lot more consistently when using a similar OS.
                                                  • I don’t want to use Linux as my day-to-day OS

                                                  Why I don’t use Docker:

                                                  • It’s really slow.
                                                  • Poorly integrated: I like launching tests from my IDE, I could probably get it to work via Docker, but it doesn’t seem worth the effort.
                                                  • It’s not a huge deal to fix the one-off issues on my Mac that Docker seems to fix
                                                  • Huge image downloads can be infeasible if I’m on the wrong WiFi network
                                                  1. 2

                                                    I’m not using containers because my job runs everything on Windows, and 15 years of *nix has left me entirely ignorant of that side of the world. I’m learning a lot of Win sysadmin/devops as I go, but have found next to no resources for *nix people on how to do ops there.

                                                    1. 2

                                                      I’ve never really needed it as a (mostly) hobby developer, but setting that aside, I think I was initially confused by the concept, which led me to be disappointed in what it, specifically the docker-like kind, actually does. I thought that containers were like minimal VMs that could be described easily and shared without all the dependencies. That different containers could implement interfaces that made scaling and re-placing more seamless. That an image could be generated and then inspected while it’s running. To be fair, I have no idea where I got these ideas from, seeing that I never used it, and when I tried it once I was very confused.

                                                      1. 2

                                                        “Containers” sort of means two things:

                                                        • OS facilities for lightweight “virtualization” by namespacing all the objects (filesystem, processes, users, network interfaces etc.) — FreeBSD Jails, Solaris Zones, Linux namespaces+cgroups-kind-of-a-mess-but-it’s-flexible-af
                                                        • tools for building and running “lightweight” OS images with filesystem layers and stuff

                                                        The latter usually relies on the former, but it can use real (ha) virtualization too.
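                                                        The first of those two meanings is something you can poke at directly. A minimal C sketch (UTS namespace only, and it needs root or CAP_SYS_ADMIN; real runtimes also unshare mount/PID/network/user namespaces and apply cgroups):

                                                          #define _GNU_SOURCE
                                                          #include <sched.h>
                                                          #include <stdio.h>
                                                          #include <unistd.h>

                                                          int main(void) {
                                                              /* Give this process its own UTS namespace: hostname changes made
                                                               * here are invisible to the rest of the system. */
                                                              if (unshare(CLONE_NEWUTS) != 0) {
                                                                  perror("unshare");
                                                                  return 1;
                                                              }
                                                              if (sethostname("container", 9) != 0) {
                                                                  perror("sethostname");
                                                                  return 1;
                                                              }
                                                              /* Prints "container" while the host's hostname stays untouched. */
                                                              execlp("hostname", "hostname", (char *)NULL);
                                                              perror("execlp");
                                                              return 1;
                                                          }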

                                                        I thought that containers were like minimal VMs that could be described easily and shared without all the dependencies

                                                        Well, the container images can be shared, they’re like glorified tarballs of a filesystem root that has a whole (stripped down) OS with your app and its dependencies.

                                                        different containers could implement interfaces that made scaling and re-placing more seamless

                                                        Anything related to scaling and stuff is a layer above, with all the fancy orchestration systems…

                                                        an image could be generated and then inspected while it’s running

                                                        Well, that sounds true? What do you mean by “inspected”?

                                                        1. 1

                                                          Also to explore these concepts by working through examples I recommend checking out the diyC project.

                                                      2. 2

                                                        We use containerization for some of our infrastructure but not all.

                                                        Where we use it:

                                                        • CI build environment. It is useful to have a consistent, known set of build tools that is independent of the configuration of our CI servers and can also be run locally by developers.
                                                        • Node.js applications. We have a few Node-based microservices. Node is, in our experience, very sensitive to version changes, so deploying the application with a known Node version, and with dependencies already installed, helps keep things stable.

                                                        Where we don’t use it:

                                                        • JVM applications. Containerizing them would, I believe, just add a layer of indirection and complexity with little or no benefit. Most of the things people cite as benefits of containers have been built into the JVM or its ecosystem since before Docker existed: there are well-understood ways to consistently and reproducibly manage dependencies, the JVM already runs the same on developer workstations and production servers, and it has built-in facilities to limit memory and CPU consumption. Tool support for JVM-in-a-container is getting better but still lags behind JVM-without-a-container.
                                                        • Administrative tooling (mostly Python scripts). Being able to have admin tools directly manipulate the host filesystem without having to muck around with bind mounts keeps the code simpler and easier to reason about. For these, we do have to worry about dependency management a bit, so containerization would be helpful, but in practice we almost never need to do more than make sure pip install gets run when we make a dependency change.

                                                        We’re not philosophically opposed to containers by any means, but we view them as a tool that is sometimes the right way to solve a problem, sometimes not.

                                                        1. 2

                                                          I’m in the Java world, so, like @technomancy says, I already have a deployment mechanism in the “uber” or “fat” JAR. Not only that, but with PaaS like Heroku, I don’t even have to build the JARs locally.

                                                          For development, I do use Docker containers for things like a postgres database, mkdocs, etc., because otherwise the configurations are a pain and conflict with other things, and I’m also looking at Testcontainers to make some of that testing with postgres easier.

                                                          1. 2

                                                            Containers and Docker for a long time seemed to add complexity, have a lot of bugs, and solve problems I’d already solved other ways, so I just never switched to containers.

                                                            I’ve gotten a long way using saltstack and rsync and deploying to Ubuntu LTS - it’s nice reliable tech and I sleep soundly at night.

                                                            1. 2

                                                              Mostly: Systems were in place already and there is no benefit in changing a working system, I’m not against it if there are real benefits.

                                                              But also: I have nothing against containers but I’m not a fan of Docker. Mostly it’s people depending on images some rando on the internet posted. We used to have better practices there and everything still seems a little wonky.

                                                              But that experience is mostly from test installs and I have to admit I’ve never worked with a real prod setup with docker or k8s, so I could still be persuaded.

                                                              I am a huge fan of containers for automated tests and build systems, I’ve been using that for years successfully.

                                                              1. 1

                                                                Note: @rtxb mentioned two different kinds of containers, but I think this applies to both kinds.

                                                                Before I explain why I don’t use containers, I want to make sure I’m on the same page with the benefits:

                                                                • Consistent environments no matter where they are deployed
                                                                • Cross platform
                                                                • Security? (this one seems dubious, and I’m not sure I’ve actually seen somebody give this as the reason they use containers)

                                                                If I am wrong, please correct me.

                                                                Now onto my reasons not to use containers. Keep in mind that this written from my perspective as a high-performance C programmer and web developer.

                                                                In my experience, the fewer layers between your program and the hardware it’s running on, the more performant it will be[1].

                                                                Having more layers can not only impact processing performance, but also network latency, file I/O and memory access. In the context of containers, most of these problems can be mitigated by using a hardware-based hypervisor and thin network/disk layers. However, in my experience virtual machines on Linux can be fiddly to set up properly[2] and you might not even get equivalent performance.

                                                                On the subject of security, I don’t see containers as a benefit, because if the outside world has access to your program and someone gets arbitrary code execution, you are still screwed.

                                                                I would appreciate any criticism of this reasoning.

                                                                [1]: I do use Electron at work, but that is because the applications I’ve built with it started life as web applications and Electron seemed like the best option. If I had to do it again, I’d probably write a Gecko-based shell.
                                                                [2]: My experiences have been with KVM on Fedora 24 or 25 for CI-esque purposes.

                                                                1. 2

                                                                  Security is one of the main reasons not to use containers. A trustworthy Linux distribution provides stable packages and timely security updates; OTOH container images do not, and many images ship vulnerable packages.

                                                                  Using system packages and linking to shared libraries goes a long way.

                                                                  1. 0

                                                                    A trustworthy Linux distribution

                                                                    Does that even exist (pdf slides)? Seems like you already sacrifice most trust to run a Linux distribution. Your enemies don’t even have to be nation states to get results.

                                                                    1. 1

                                                                      It exists: despite the FUD, there are plenty of Linux systems running critical workloads that make them very interesting targets for a large number of attackers. Payment systems and firewalls especially. While the security of the kernel is far from perfect and can be improved, most organizations are choosing Linux over alternatives and they are not going bankrupt from daily break-ins.

                                                                      1. 0

                                                                        Years back, media ran stories about so-called “APTs” that were hackers breaching companies with regular vulnerabilities. Apparently, most of the big ones had been hit, with attackers leaking data for months without their knowledge. Other sources doing surveys said the vast majority of breaches go unreported. Finally, addressing “bankrupt”: most breaches were to take trade secrets or had limited damage, with no punishment by government or courts. So, that part is more of a legal issue.

                                                                        All together, these facts mean we have no way of knowing how secure Linux is or isn’t in practice, due to all the hidden and unknown breaches that are in all probability still happening. All we can know for sure, proven empirically by tools like the one I linked, is that its consistently horrible QA makes it a magnet for attackers. Better to be using advanced mitigations with it, or a security-focused OS like OpenBSD, Genode on a separation kernel, or INTEGRITY-178B.

                                                                2. 1

                                                                  Because I am newly the maintainer of the primary app I work on, and I have not yet decided what the architecture for SaaS (as opposed to .NET desktop) clients will be.

                                                                  1. 1

                                                                    I work primarily with persistent stores (e.g. databases). Containers are great for ephemeral systems, but when it comes to persistent stores, they’re just not a good fit.

                                                                    1. 1

                                                                      I wrote a post about this; and fully acknowledge that my issues with containers are probably a result of ignorance and I’d come to love them more with some investment. I’m talking mainly about application containers. tl;dr:

                                                                      • I don’t like having to think/debug in two file systems, networking stacks, etc. and/or the mental overhead of the Docker commands needed to communicate well between them.

                                                                      • I don’t like my software running in an environment that’s too different than my dev environment (e.g. the company I’m at tries to build slim images for faster deployments, but it means if I’m running a shell into that container I don’t even get vim or ipython or curl when I need them).

                                                                      • Similarly, the project with “slim images” gives me an unprivileged user, which is great for deployment I guess, but when I shell in, I can’t look at files in /var/log or install better development tools 😛

                                                                      • The tech stack I use where I avoid containers (Elixir) has a releases mechanism that makes them pretty easy to put/deploy on something pretty reliably without containers.

                                                                      • The project I’m not using containers in is just me, so skew between multiple developers and the production environment isn’t a big worry.

                                                                      • I don’t trust Docker sustaining well, or keeping the best interests of developers in mind in the long run, especially after NPM’s flameout and Elastic’s big worries about sustainability after Amazon made an offering. This isn’t a major pressing concern, though.

                                                                      Again, all addressable in some respects, and containers do provide value in a number of cases, but I don’t find the tradeoff too worth it when I can avoid it.

                                                                      1. 1

                                                                        We have hugely elastic workloads. Though - that doesn’t really strictly need containers, so much as Kubernetes was a relatively popular tech when our elastic workload was spun up. Docker is fine, but it’s really more about Kubernetes.

                                                                        I didn’t have any weight in the decision - it was before my time. Given my own insight, I probably wouldn’t have chosen the same tech stack.

                                                                        That said - not sure what I would pick. The solution we have works well enough. No real need to fix something that is working well enough as an architecture. Our real problems are poorly monitored components and noisy operational alerts.

                                                                        1. 1

                                                                          Bundling dependencies together in an OS image (or in a statically linked binary) is really bad for security and for maintainability. It works only if the organization deploying the software is also the one building it. This is not the case for the large majority of existing software.