1. 5

    I wish someone would write a guide answering two questions: where Docker is absolutely the perfect fit, and where Docker is complete overkill. Right now, as a bystander, it’s hard to tell whether we’re looking at a brilliant solution to everybody’s woes or whether the ecosystem is simply too frothy with enthusiasm for the tool, with very little actual pragmatic upside to it.

    1. 2

      Given how half-baked and insecure Docker is, I think many would argue it’s “never” for both. However, Docker seems to be synonymous with containers/jails these days, sadly.

    1. 4

      I’m probably missing something here, but don’t we already get somewhat comprehensive type inference in a language like Haskell? It seems to do a reasonably decent job of picking the types for you as long as you don’t encounter ambiguity. Maybe Crystal has a different take on that?

      1. 3

        Haskell has no subtyping, which in a sense makes type inference much easier. If you lift the Haskell type inference algorithm and try to apply it to most other languages, which do have subtyping, it won’t work without modifications.

        1. 2

          I accidentally the word “effective”.

        1. 4

          Is there anything along these lines for google docs? That’d be amazing.

          1. 2

            For those of us just joining in, is there a TLDR somewhere on why systemd is bad and I should hate it?

            1. 7

              The article and thread about it that I actually liked was https://lobste.rs/s/vzjalp/why_i_dislike_systemd

              1. 8

                Something about pid 1 and binary logging and udev. And pulseaudio.

                1. 8

                  It’s about ethics in init processes.

                  1. 7

                    Don’t forget “the UNIX philosophy”.

                  1. 1

                    Was just reading Brendan Gregg’s book on cloud perf the other day, it’s a great resource. Awesome to see these guys keep sharing their learnings and best practices with the rest of the industry.

                    1. 1

                      Seems surreal that this would be at all common. Candidates talk about how they were treated during an interview, and it can seriously impact your reputation. You want them to leave with a great impression of the company and its staff, even when they’re not deemed the right person for the position. You know you’ve done alright when the candidate is rejected but still sends friends over to interview with you afterwards. Frustrating how little EQ some folks in the industry seem to demonstrate and how shortsighted they can be.

                      1. 10

                        Very interesting. I also backed the typed clojure project back in the day. We made a similar judgment call a couple of years ago when figuring out where to go from Clojure, which was giving us a bunch of headaches, despite being a very fun language to program in. We were already successfully using Schema at the time, and core.typed didn’t feel sufficient for our needs. A year and a half later we’re all pretty happy with our decision to fully transition to the Haskell master race.

                        1. 23

                          Yo, the use of “master race” here is extremely offensive. Please consider your words more carefully.

                          1. 7

                            Using “master race” like this is deliberately tongue-in-cheek. It may have started with the PC master race.

                            1. 9

                              Yes, that’s the problem. It is not okay to use a term meant to justify mass murder in a way that makes it seem trivial and unimportant.

                              1. 6

                                Interesting to see some history on Wikipedia. I’d only heard it on Reddit, which hosts a big white supremacy community, so I didn’t realize it was supposed to be tongue-in-cheek. I guess that’s the problem with “ironic racism”: if the listener doesn’t already expect otherwise from you, you just look racist.

                                1. 6

                                  a big white supremacy community,

                                  For more context on this, https://www.splcenter.org/hatewatch/2015/03/11/most-violently-racist-internet-content-isnt-stormfront-or-vnn-anymore

                                  Reddit has since banned some of these subreddits, but the users still stick around, and regularly ‘market’ themselves on the defaults, coordinated over IRC.

                              2. 1

                                When you say ‘here’ are you meaning where you are, or on this particular website?

                                I have checked with a few people nearby at my University’s CS department (who know about the “PC Master Race” thing - that I had never heard of), and they do not see it as anything warranting offense, because it is clear it has nothing to really do with white supremacy.

                                1. 2

                                  I read it as “here in your post”.

                                  Also, this thread is where I learned that anyone thinks “PC Master Race” is not a racist thing. Given where it was popularized and the fact that, uh, I don’t know how better to say it, that it’s a callback to awful super-racist evil shit, it’s not at all clear to me that it has nothing to do with white supremacy. Seriously. It’s just fucked up.

                                  1. 1

                                    I don’t like the phrase, and find it pretty weird, but I hadn’t considered its main use overtly racist. Closer to a tone-deaf quasi-political reference, playing on the perception that PC gamers believe their platform to be “superior”, with an absurd political analogy. Maybe along the lines of the Stalin scheme compiler (tagline: “Stalin brutally optimizes”), which I don’t think was written by an actual Stalinist.

                                    1. 1

                                      I agree that it can easily “callback to awful super-racist evil shit”, but isn’t that like saying that all uses of the swastika do the same? I realize the comparison is not perfect: the swastika has had a long history of use and continued use - despite its very evil use in “the Western world” - while the ‘master race’ thing has always been racist, just not always genocidal.

                                      I personally don’t find it offensive, but I also wouldn’t use it either.

                                      1. 2

                                        I happened to notice this Reddit thread from last year get recirculated recently. The author, a highschool student, talks about suddenly being disabused of the notion that everyone interprets this phrase as innocent.

                                        In general, when someone doesn’t see why particular language is offensive, that is a pretty clear indication they aren’t in the group it affects.

                                        1. 1

                                          That is a very interesting thread and I can appreciate what it says.

                                        2. 1

                                          The scenario you reference is of a symbol of good luck and auspicious things that’s been tainted by evil abuse. I don’t think that’s the history of the term “master race” at all, no.

                                          1. 2

                                            I was going to write more, but at this point I think I have not only killed the horse, but shot it a few more times and am obligated to apologise to the horse.

                                            I appreciate the civil discussion here, and further apologise for dragging this discussion further than was really warranted.

                                1. 2

                                  Clojure / Haskell a few years ago were a great cure for being a StackOverflow developer. You had to really grok the basics to get anything done, there were simply no snippets anywhere to be found. Coming from Java / Ruby / bash it’s an absolute culture shock. You take a big hit short-term, but it really forces you to understand what you’re doing, and so you end up being a lot more knowledgeable long term.

                                  I do believe that snippets are extremely useful though, so you want a good mix of the two. There’s a reason why we have cookbooks and patterns etc. Luckily those are coming together now that the two communities above are seeing more attention and more focus on newcomers.

                                  1. 2

                                    X years ago, when there was just the pickaxe book, Ruby was somewhat similar. Same with Java in school when I started in 1997 (early-adopter uni; I dropped the honors class and went for the basic C class).

                                    I think stackoverflow is awesome, but I don’t have any karma even to upvote things there. I wish I could sometimes just because of the best answer not always being the best on there. I guess I’d need to answer or ask stuff, but a) I think I can’t answer stuff because of lack of karma and b) why would I ever ask a question on stack overflow…

                                  1. 12

                                    As far as I can tell the backend / infrastructure world is nowadays experiencing almost as much of a Cambrian explosion as the javascript universe. Nuts. Really hard to tell what’s a trend / hype / fashion and what’s objectively adding value to the system.

                                    1. 6

                                      Yeah. But it’s an exciting time! In five years it’ll shake out. :)

                                    1. 2

                                      Been recently dealing with adding logical replication to our deployment with Slony. Already have native archive + streaming replication up, that was fairly painless. Slony works and seems to have been used extensively in production by many organizations, but it’s quite hard to find answers and examples for. Got at best a dozen people on its IRC channel. It’s mostly one big manual, and that’s it. Would be neat to have logical replication baked into PG itself to avoid having to stare into the abyss of these third party tools that seem to be used by a very tiny niche.

                                      1. 7

                                        What do folks generally do as far as monitoring, alerting and logging in the brave new containers world? Do you have to now monitor both the host machine and the container insides? How do you deal with the added complexity?

                                        1. 5

                                          It’s generally a good idea to err on the side of over-monitoring, as you never know what random question may be useful to answer while handling an incident. Monitor your host system metrics, monitor your mesos metrics (https://github.com/rayrod2030/collectd-mesos is a dead simple metric collection example for this), and monitor your workloads. This is existing best practice, and there are many existing tools.

                                          When I was a SRE at Tumblr I gained a lot of respect for host-local metric and log aggregators that forward to clusters and buffer to disk when downstream failures occur - these are super useful in the context of Mesos as well. This way your tasks can hit a local endpoint, and the local aggregator worries about remote failure handling or downstream reconfiguration in a uniform way.

                                          One thing I’ll warn about in a more dynamic environment is that you should test anything that you rely on to re-resolve DNS. The JVM, for instance, caches indefinitely unless you explicitly tell it not to on initialization. While DNS is a nice universally half-implemented solution, a ton of stuff will fail to re-resolve during timeouts or connection failures at any threshold.
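
                                          JVM specifics aside, the mitigation is the same everywhere: re-resolve on every retry instead of trusting whatever got cached at startup. A rough Ruby sketch of the idea (hypothetical helper, not from any library):

                                              require "resolv"
                                              require "net/http"

                                              # Hypothetical helper (illustration only): do a fresh DNS lookup on every
                                              # attempt so a failover behind the same name is picked up mid-retry.
                                              def get_with_fresh_dns(host, path, attempts: 3)
                                                attempts.times do |i|
                                                  begin
                                                    ip = Resolv.getaddress(host)      # fresh lookup, no process-wide cache
                                                    return Net::HTTP.start(ip, 80) { |http|
                                                      req = Net::HTTP::Get.new(path)
                                                      req["Host"] = host              # keep the logical hostname for vhosts
                                                      http.request(req)
                                                    }
                                                  rescue Resolv::ResolvError, SystemCallError, Net::OpenTimeout
                                                    raise if i == attempts - 1        # out of retries, surface the error
                                                  end
                                                end
                                              end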

                                          1. 3

                                            We use one monitoring system for Mesos itself (soon we’ll be open-sourcing it!), and have applications & containers self-report metrics & alerts to hosted instances of Riemann. Essentially, this even allows us to split ownership responsibility between the applications on the cluster and the cluster infrastructure itself.

                                          1. 3

                                            I’m a big fan of Practical Vim from Prag Prog, has a lot of tips like these and more: https://pragprog.com/book/dnvim/practical-vim

                                            1. 4

                                              The author of this series also did this interesting conference talk on the same subject: https://vimeo.com/97507575

                                              But has anyone here seen more about this “designing with types” approach to programming? (In any language.)

                                              It sounds very enticing, but I have no experience with it, so I can’t tell whether it’s pragmatically much different from just using OO classes as types.

                                              Thank you.

                                              1. 6

                                                For one example in Haskell of designing with and thinking in terms of types: http://bitemyapp.com/posts/2014-11-19-update-map-in-haskell.html

                                                You can’t really model domains properly without a type system that is expressive. This stuff goes a long way to making ordinary code a lot more palatable and maintainable.

                                                I have some very incomplete notes here: https://github.com/bitemyapp/presentations/blob/master/modeling_data_in_haskell/modeling_data.md

                                                Here’s a fun example, never write an evaluator/fold for a datatype yourself ever again: http://hackage.haskell.org/package/catamorphism

                                                The epiphany here is that functions that “consume” or break down structure can be generated from the definition of the structure. You define the domain - it generates the code. Wrap according to convenience.

                                                For “make illegal states unrepresentable”, I have a series of examples here: http://bitemyapp.com/posts/2014-11-18-strong-types-and-testing.html

                                                For a simpler'ish example of deferred definition of data (parameterizing), see: http://bitemyapp.com/posts/2014-04-11-aeson-and-user-created-types.html

                                                Pulling out types so you can specify sub-components: http://bitemyapp.com/posts/2014-04-05-grokking-sums-and-constructors.html

                                                Parsing JSON that can vary in structure: http://bitemyapp.com/posts/2014-04-17-parsing-nondeterministic-data-with-aeson-and-sum-types.html

                                                If you don’t mind some humor, this is a post translating a mini-project example that was originally in Python: http://bitemyapp.com/posts/2014-12-03-why-are-types-useful.html

                                                I also wrote a HowIStart article for setting up a small Cabal project to parse some CSV data in Haskell: https://howistart.org/posts/haskell/1

                                                I’m working on a book to make Haskell approachable for working programmers and non-programmers (we test the book with people that have never programmed before) at: http://haskellbook.com/

                                                Hoping the book will, if nothing else, dispel the idea that we need to use less powerful languages that seem more familiar because learning materials haven’t caught up.

                                                1. 2

                                                  Solid post, great links.

                                                2. 5

                                                  I’ve been moving this direction with Ruby. It’s been partly inspired by learning Haskell and partly by trying to decompose and organize my objects. As the author says:

                                                  Many of the suggestions are also feasible in C# or Java, but the lightweight nature of F# types means that it is much more likely that we will do this kind of refactoring.

                                                  There’s also a resistance to it in what’s considered “idiomatic”. It sounds small and silly, but I think it may be the one-class-per-file rule. When I want a User’s email_address attribute to be an EmailAddress, even before I make it immutable and add validations, I have a file named email_address.rb I have to go look at. That inconveniences everyone a little and there’s not much code there.

                                                  And then, worse, to match the basic guarantees of the F# example in part 2 with immutability and validity, off the top of my head it’s going to look something like this:

                                                      require "adamantium"    # deep-freezes instances for immutability
                                                      require "equalizer"     # defines #==/#eql?/#hash from named attributes
                                                      require "forwardable"

                                                      class InvalidEmailAddress < ArgumentError; end

                                                      class EmailAddress
                                                        include Adamantium
                                                        extend Forwardable    # def_delegators is a class-level macro: extend, not include
                                                        def_delegators :@address, :length, :to_s, :to_str, :<=>
                                                        include Equalizer.new(:address)

                                                        attr_reader :address

                                                        def initialize(a)
                                                          @address = a
                                                          raise InvalidEmailAddress unless valid?
                                                        end

                                                        def valid?
                                                          address =~ /\A\S+@\S+\.\S+\z/
                                                        end
                                                      end
                                                  

                                                  And that’s a simple type with one field. I could metaprogram away almost all of this (I’ve seen a lot of one-field types with this shape now), but it already looks unfamiliar to Ruby developers and depends on two gems. It’s teetering on the edge of idiomatic.
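
                                                  For the curious, the metaprogrammed version could look roughly like this - a sketch, with a hypothetical define_value_type helper rather than an existing gem:

                                                      # Hypothetical helper: generates an immutable, validated one-field value type.
                                                      def define_value_type(field, validator)
                                                        Class.new do
                                                          attr_reader field

                                                          define_method(:initialize) do |value|
                                                            raise ArgumentError, "invalid #{field}" unless validator.call(value)
                                                            instance_variable_set("@#{field}", value)
                                                            freeze                    # plain freeze standing in for Adamantium
                                                          end

                                                          define_method(:==) do |other|
                                                            other.class == self.class &&
                                                              other.public_send(field) == public_send(field)
                                                          end
                                                          alias_method :eql?, :==

                                                          define_method(:hash) { [self.class, public_send(field)].hash }
                                                          define_method(:to_s) { public_send(field).to_s }
                                                        end
                                                      end

                                                      EmailAddress = define_value_type(:address, ->(a) { a =~ /\A\S+@\S+\.\S+\z/ })
                                                      EmailAddress.new("me@example.com")   # => frozen, validated value object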

                                                  There’s a lot of value to be had, though. The pieces of the system are more reliable. There’s now one explicit place to say what it means for an email address to be valid or how to work with it - I really like email address as an example because in most apps I’ll see it on many models, that knowledge is just repeated and smeared across the system. It’s so common that devs have a hard time recognizing it until it grows out of control.

                                                  For a slightly larger example, check out AuthenticationResponse from my TwoFactorAuth gem. The gem authenticates users by sending a challenge AuthenticationRequest to the client and confirming that the AuthenticationResponse can be decomposed and its signature verified against the AuthenticationRequest. Most of the U2F libraries do this by passing around strings for the fixed-length records; TwoFactorAuth decomposes them into types (implemented as Ruby classes) that each know how to validate themselves.
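
                                                  As a sketch of the shape (hypothetical code, not the gem’s actual classes), decomposing one of those fixed-layout records looks like:

                                                      # Hypothetical illustration: a fixed-layout binary record becomes a class
                                                      # that knows its own layout and how to validate itself.
                                                      class SignedResponse
                                                        attr_reader :user_presence, :counter, :signature

                                                        def initialize(raw)
                                                          # 1-byte user-presence flag, 4-byte big-endian counter, rest is signature
                                                          @user_presence, @counter, @signature = raw.unpack("CNa*")
                                                          raise ArgumentError, "malformed response" unless valid?
                                                        end

                                                        def valid?
                                                          !counter.nil? && (user_presence & 0x01) == 1 && !signature.empty?
                                                        end
                                                      end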

                                                  I bottom out in the AuthenticationResponse by not making types for the fields (bitfield, user_presence, counter, signature), but those would’ve been too much of the above boilerplate for a very small benefit in the one or two places they’re used. This library has the benefit of being nearly feature-complete and “done” to the spec, a luxury that app code will never have. In app code, the decision that it’s not worth it now usually means difficult-to-recognize pain as the app grows and changes and becomes “primitive obsessed”.

                                                  So to address your question of whether it’s pragmatically different: absolutely. It’s more up-front code and up-front training for a harder-to-express long-term benefit. In functional languages like F# and Haskell this is the default and enables a lot of compiler-assisted refactoring. I’m not just saying “well, it’s harder with OO classes”; it’s unidiomatic, and I see those human considerations looming larger than technical considerations.

                                                  1. 1

                                                    Wow. Awesome. Thanks Peter. Great thoughts.

                                                    1. 2

                                                      I have been experimenting and obsessing about this kind of hybridization of FP and OO for a year, so you gave me a great setup line. Next, ask me who’s on first. :)

                                                  2. 4

                                                    I’d say pragmatically it is far, far different from using OO classes as types. Theoretically, it’s possible to do it with OO classes, but there are a number of reasons why they are both too heavyweight semantically and syntactically to make it likely.

                                                    The core typed FP languages today have this concept in spades. Learn OCaml, Haskell, F#, the functional parts of Scala and you will see it literally everywhere.

                                                    1. 2

                                                      I don’t have any experience with F#, but I know that this is a common pattern in C++/Java: make everything private and restrict access through Get/Set functions; the Set functions can do validation and fix or reject data too.
                                                      I only read the first 3 or 4 posts in the series and then got bored, so it is highly possible that something trickier and more interesting happens later.

                                                    1. 1

                                                      There is also the stackage stuff for confirmed cabal builds, which seems to be addressing some cabal issues, though it doesn’t touch uninstall at all afaik. All said, I’m just a spectator in the Haskell world: http://www.yesodweb.com/blog/2015/04/announcing-stackage-update

                                                      1. 4

                                                        The stackage set of tools is absolutely brain-dead simple to use and relieves most of the problems mentioned here.

                                                      1. 2

                                                        Very similar experience here as well. Just recently wrote a post about that for the Commercial Haskell group: http://www.kurilin.net/post/117369543198/haskell-at-front-row

                                                        1. 11

                                                          I think it would be better if companies contributed cash to the open source projects they use. Case in point: OpenSSL.

                                                          1. 3

                                                            Would be cool if companies could be charged a certain amount monthly, with various dials for choosing how much of the total they want to donate to each individual OSS project they really care about. I’m thinking Humble Indie Bundle, but recurring monthly. E.g. it would be cool if someone could work on your favorite framework full-time thanks to this monthly bill.

                                                            You could lobby for certain features with more donations… oh wait…

                                                            1. 5

                                                              Some of us in the Ruby world have come up with something related, though not the same: https://rubytogether.org/

                                                              1. 3

                                                                ruby.berlin does offer such services for Germany.

                                                              2. 1

                                                                Cash can be an issue. It adds bookkeeping to the project and many people don’t want to set up a personal business for that.

                                                              1. 8

                                                                I used to be careful about returning specific HTTP codes, 404 for resource not found, 401 not authorized, etc. but I found out that there’s no point in doing so.

                                                                As far as the client is concerned, the logic is usually “if 200 - great, else show error message”. Almost no client is going to carefully check what the HTTP code is and do something special for it.

                                                                It’s much better to return clear, verbose error messages (or well documented error codes) so that whoever is working with the API knows exactly what the problem is, when there is a problem. So for me it’s just either 200 (OK) or 400 (error).

                                                                1. 8

                                                                  And those that do want to do something specific generally need an application-specific error code with more detail than HTTP could provide.

                                                                  I do think it’s worth distinguishing between client failure and server failure though, they require different handling (client should retry server errors but not client errors). So I’d say it’s 2xx (OK), 4xx (You messed up your request), or 5xx (I messed up my processing).
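
                                                                  The policy is tiny in code. A rough Ruby sketch (names and endpoint invented for illustration; redirects omitted):

                                                                      require "net/http"

                                                                      # Sketch: retry server-side (5xx) failures with backoff, fail fast
                                                                      # on client-side (4xx) errors, return on success.
                                                                      def fetch_with_retries(uri, attempts: 3)
                                                                        attempts.times do |i|
                                                                          response = Net::HTTP.get_response(uri)
                                                                          case response.code.to_i
                                                                          when 200..299 then return response          # fine
                                                                          when 400..499 then raise "our bug, don't retry: #{response.body}"
                                                                          when 500..599 then sleep(2**i)              # their bug, back off
                                                                          end
                                                                        end
                                                                        raise "server kept failing"
                                                                      end

                                                                      fetch_with_retries(URI("https://api.example.com/things"))  # hypothetical endpoint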

                                                                  1. 5

                                                                    And those that do want to do something specific generally need an application-specific error code with more detail than HTTP could provide.

                                                                    RFC 7231 lets us extend status codes. I try to do this when possible for application-specific responses. So 451 may mean you sent foo, bar but forgot baz and that’s not allowed. And 550 might signal that downstream services aren’t reachable so the service you are calling is dead.

                                                                    I like doing this because it seems to make error handling a bit easier: I don’t have to worry about different serializations for errors depending on which content you negotiated, and I can leave handling the error up to the client - maybe you can address 451, but you can’t address 452-455, so you handle 451 and let 452-455 = 4xx.
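
                                                                    Client-side that degrades nicely. A sketch (the handlers are hypothetical):

                                                                        # Sketch: act on the one extended code we understand; fall back to
                                                                        # the generic status class for everything else.
                                                                        def handle(response)
                                                                          case response.code.to_i
                                                                          when 451      then resend_with_baz(response)  # the specific case we can fix
                                                                          when 400..499 then fail_fast(response)        # generic client error
                                                                          when 500..599 then retry_later(response)      # generic server error
                                                                          end
                                                                        end

                                                                    Order matters: the specific code has to be listed before the range it falls inside.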

                                                                    I’ve also had better luck with putting these things behind proxies and load balancers. I’d hate to go with the Facebook-style “200 everything!” in those cases.

                                                                    I do think it’s worth distinguishing between client failure and server failure though

                                                                    Agreed! If you do nothing else, do this!

                                                                    I once called out a programmer whose code amounted to “if 200; ok; else if 404; bad”. He insisted that was sufficient because the web service he called could never return anything but a 404 or a 200!

                                                                    pkill unicorn on dev gave him valuable insight into the ops-side of his code.

                                                                  2. 2

                                                                      This was the approach I picked up from someone when designing our APIs back in the day, and it stuck around. Generally I have an accompanying error code for my 400s to specify what exactly went wrong: either the input couldn’t be parsed, or something about the input was invalid, or some business-rule-based constraints about the input were not met, hence the 400. A simple string error code for these conditions seems to suffice, given that most non-public APIs don’t need to be fancy about error reporting, just good enough to keep developers sane at development time.
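
                                                                      The bodies themselves can stay dead simple. A sketch (Rack-style response triple; field names arbitrary):

                                                                          require "json"

                                                                          # Sketch: a 400 whose body carries a stable string error code for
                                                                          # machines plus a message for the developer reading it later.
                                                                          def bad_request(error_code, message)
                                                                            [400,
                                                                             { "Content-Type" => "application/json" },
                                                                             [JSON.generate(error_code: error_code, message: message)]]
                                                                          end

                                                                          bad_request("unparseable_input", "request body is not valid JSON")
                                                                          bad_request("constraint_violation", "end_date must be after start_date")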

                                                                  1. 7

                                                                      If we’re talking specifically about startups, and specifically about some kind of a generic web product (vs. say you building and selling a proprietary database), then the technology of choice has a rather humble impact on the outcome of the business.

                                                                    Maybe the pros and cons are very subtle and hard to measure, but you hardly ever hear about startups that failed because their web framework sucked, or their language couldn’t deliver what their customers wanted, or they couldn’t scale. Gosh, if only those engineers had stuck to Python instead of going with Haskell, said no one ever. Mostly, they die because the business was bad, the customer wasn’t there, the product wasn’t really making a difference or wasn’t marketed right etc. It wasn’t because they used OCaml instead of Java. Yeah, maybe they lost a couple of iterations that would have made all the difference, but who the heck really knows, that’s pure conjecture.

                                                                    At that scale, given a decent team, a lot of poor original choices can be fixed up in a reasonable amount of time. We’ve changed tooling a couple of times over the years based on the newly discovered needs of the product, it was fine. We moved much faster with every evolutionary step, and all learned something good in the process.

                                                                    1. 4

                                                                      Gosh, if only those engineers had stuck to Python instead of going with Haskell, said no one ever

                                                                      “Gosh, if only those engineers had stuck with Lisp instead of going to Python” was a common refrain in lisp communities, about reddit, for quite a lot longer than was justified by the continued existence of the company.

                                                                      I agree with your overall point, just pointing out that this sort of judgment is made quite frequently. It’s just made by technologists, not hire/fire managers. Unsurprising, really, as “successful” is defined differently depending on your background.

                                                                      1. 4

                                                                          Wasn’t Twitter kind of the poster child for “Rails doesn’t scale”? I guess they resolved that, but it also seemed to involve a Scala injection. Not sure what the big lesson is, though.

                                                                        1. 9

                                                                          If Twitter had not used Rails early on and instead spent the time inventing whatever they are running on now, they would have never shipped and never grown to the scale that caused them to have to ditch Rails.

                                                                          1. 4

                                                                            Indeed. When I read a “it was bad; then it got better” story, I never know if the lesson is “don’t make this mistake” or “don’t worry, it’ll work out”.

                                                                            1. 1

                                                                              I get where you’re going with this, but this is not a counterfactual; it is post hoc, ergo propter hoc.

                                                                              Many, many, many companies develop complex products quickly on top of non-exotic technologies and do just fine.

                                                                          2. 2

                                                                            I suspect that every discipline in an organization vastly overestimates their overall importance and contribution to the success of the business. It’s simple human egocentricity, that’s how we’re wired to work.

                                                                            And yes, I certainly hope that any developer working for a startup is measuring their success by the success of the business. If that’s not your primary goal at that role, then I’m not sure what exactly you’re up to there.

                                                                            1. 3

                                                                              Well, you could have other interests. :) I sell myself as a technologist, so I promise only to make decisions that, in my view, are technologically sound. What will or won’t make the company money is not something I claim to be my area of expertise.

                                                                              Besides not being my area, fundamentally I also don’t really care about business either. I see business as purely a means to an end: we have businesses because they arguably organize economic activity better than central planning does, or at least have done so in the past. But the goal is still to improve technology and/or its application in society. I don’t have any real attachment to the businesses per se; most businesses die, and that’s fine, because they’re expendable and easily replaceable by other businesses. What matters are their durable achievements. (That’s all especially applicable to startups: the only reason startups are interesting is that it is widely believed that startups are a way of catalyzing technological innovation.)

                                                                        1. 6

                                                                          I still don’t understand what all the hype is about. My only experience with Docker so far is that the team who look after our CI infrastructure decided to start “Docker-izing” everything recently. Rather than running build tasks directly on the build server, they’re run inside their own Docker containers. All of the little apps around the infrastructure (IRC bots, deployment tools, etc) run inside Docker containers.

                                                                          Perhaps I’m being too cynical, but I fail to see any gain in this. The apps are deployed to their own AWS instances anyway, so they don’t benefit from container isolation. We don’t do anything interesting with Docker on the build server like run jobs in parallel. What I do see is that we now have serious issues every few days. Docker is riddled with bugs (a recent one[0] keeps filling up the disk on the build server), and CoreOS, which we now use instead of Ubuntu, seems to be hugely unstable, with software updates breaking our apps.

                                                                          I want to understand the hype - I really do - but I guess so far Docker hasn’t “clicked” for me.

                                                                          [0] https://github.com/docker/docker/issues/8693

                                                                          1. 15

                                                                            Here’s a concrete gain with docker containers: it turns your app’s compilation phase (whether that’s an ‘actual’ compilation phase, or a ‘git reset to this sha’ phase with an interpreted language like ruby) into a thing which builds not only the code, but also the environment needed to run that code. What that means practically is that instead of having to configure a bespoke VM to run a particular version of your app - worrying about getting users correctly set up for each one, getting the right version of things installed, and so on - that all happens far earlier, and with one big simplifying assumption: “this container will only run one thing, ever”. That is a big deal.

                                                                            If I’m spinning up a cloud deployment, I’m looking to minimize cost; that means I’m looking to optimize the amount of horsepower I’m renting against the maintenance cost of keeping multiple apps running on that consolidated machine. For small deployments, the number of machines will be low enough, and the cost savings of consolidation small enough, that I probably won’t bother to consolidate. As I grow, I might try to put a couple apps on the same machine – if those apps previously assumed anything about that environment and I try to change it? Bugs for days. With Docker, I can let an app assume whatever it wants. Developers have a perfect replica of what I’m going to deploy that they can do whatever they want with. Not only that, changes to that environment are version controlled, meaning I can employ things like git bisect to figure out where a particular infrastructural problem pops up. Can you do that with an ansible or a chef? Sure, but have fun watching your VM go up and down for a few weeks; docker is way faster to build, and that makes things much nicer.

                                                                            So the big win from a pure ‘deploy the things’ Ops perspective is consistency. Consistency for developers (they get to test their code on precisely the same environment it will run in on production later). Consistency for me (an Ops guy), because I don’t have to set up a Ruby VM or a Java VM or a whatever, I just set up a Docker VM (probably a few) and toss a container on it. Ultimately, it means consistency in the deploy process, which means an app that’s more stable and responsive to change. It also vastly simplifies the deploy process – CI builds a container, I download it and replace the existing container with the new one. Done – that’s all, no thinking, “Oh, for the ruby one I have to do a git remote update ; git reset, but for Java I need to copy this war over, except this is the jboss stack so I need to blah blah blah”, it’s just a docker container, and that stuff happens for me automatically. Sure, I could have scripting to do this at deploy time, but it’s a hell of a lot faster to download a container and run it than have it chew through a long process that could’ve been done and cached well before now (indeed, that’s a way to think about docker: it caches a deploy up to the actual ‘run on production’ part).

                                                                            Also, versions of the app are not just bundled with their environment, they’re bundled and versioned. There’s a SHA associated with each iteration of an app. If I need to roll back, it’s no longer a ‘how do I undo all these things?’, it’s ‘put the old docker container up, restore the database(s) to the last backup’. Those databases, notably, can be docker containers, which you can save in-flight as incremental backups, versioning not only your code and its environment, but your database and all its data – that’s gold. I can afford drive space, and when backup restoration is as easy as ‘docker run db:v1.2.3-hourly-1500’, that’ll make any Ops person happy as a pig in shit.

                                                                            Now, where Docker really gets interesting is in encapsulating one-off processes. Developers (at least where I work) are finicky. They want hard things to be simple, they want really hard things to be trivial, and it always has to work or they whine and whine and whine. So take a build system – some of the devs use Intellij and its build system, some use Eclipse and its build system, some use the ant scripts we use in production, some pray to FSM to return them the compiled code. This results in some serious inconsistency. Compounding this is that some of the team is on Mac, some on Linux, some on Windows. How can we develop a totally consistent build process to run on three different platforms?

                                                                            Docker.

                                                                            I built a docker image that encapsulated the CI ant scripts. It has all the bells and whistles needed by the devs, and it exposes itself as a single-shot docker container. Now, instead of building via eclipse or intellij, they build via docker (they configured their IDEs to use this command as its build tool). They don’t have to remember the ant invocation, just docker run -v/output:/path/to/output build_the_app and away it goes. I get to have a build script that runs everywhere in a totally consistent way, on CI and on dev machines. The Devs get to avoid this whole class of “It’s in eclipse but not intellij” bugs, and I have a uniform interface to hide behind as I convert the build scripts from ant to gradle. It’s friggin' brilliant.

                                                                            This ability to get some relatively free cross-platformability, to encapsulate the environment things run in and moreover to version that environment, it’s wildly useful to a guy trying to tame a wild infrastructure. In my industry (healthcare), Auditability is king, and being able to tell an auditor that – this isn’t just the same code, it’s the same code, environment, and configuration, bit-for-bit, as we built in CI – that’s like music to their ears. Every single piece of the infrastructure has a totally unique identifier, I can prove beyond a shadow of a doubt that any piece of the infrastructure is exactly what I intended to put there. If docker did nothing else but encapsulate the environment and code and assign that identifier, it’d still be revolutionary. The other uses are just icing on the cake.

                                                                            EDIT: Forgot in my original – wrt CoreOS, I think it’s too unstable for use as well, we run Ubuntu 14.04 as our docker servers, we’re planning a move to RHEL and using Kubernetes to allow easier scaling. I haven’t seen the file handle bug hit me yet, indeed, my docker experience has been largely bug free (and my use of it is not small), but for certain it is a pretty nascent project; I still think the benefits outweigh the risks.

                                                                            1. 7

                                                                              Kubernetes

                                                                              Yay! =D

                                                                              Wonderful comment, I agree. Except I hope docker itself dies in a horrible horrible fire. The implementation is a nightmare and the docker guys don’t know how to run a project. I hope they get a real competitor soon. Proprietary, compatible forks exist, but I don’t know of any that intend to go open source.

                                                                              1. 4

                                                                                A lot of good projects start out poorly, I’m interested in seeing competitors, but would rather see the docker folks step up to the challenge and improve their product, rather than simply going away.

                                                                                1. 2

                                                                                  What about Rocket by CoreOS?

                                                                                  1. 1

                                                                                    I have high hopes. Unfortunately I don’t know much about it, and thus can’t comment on its quality. Rocket was announced shortly before I moved my systems to FreeBSD and jails.

                                                                                  2. 1

                                                                                    … the docker guys don’t know how to run a project.

                                                                                    Could you expand on this?

                                                                                    I maintain a few open source projects so I know how hard it can be, though none with the volume of Docker.

                                                                                    What are they doing wrong? What should they do to improve?

                                                                                    1. 4

                                                                                      They are building as many new features as they can without stabilizing their existing feature base, and bolting stuff on in order to get new features quickly rather than carefully considering the best designs. One artifact of this philosophy is the locking in docker: the devs seem to consider for about 3 seconds whether something is thread safe, and if not they just add a mutex. As a consequence, docker does basically nothing concurrently. Try adding 20 containers at the same time; it won’t happen at any reasonable speed. The problem is begging to be solved by the actor model; instead they chose sync.Mutex.

                                                                                      Edit: I’m exaggerating a bit, but parallel creation of containers is a big feature that doesn’t exist only because of bad design.

                                                                                      As for what they are doing wrong, they are blazing forward trying to impress the community with flashy features. Yet most of the serious users of docker just want the core functionality to work well. That’s my interpretation of what’s happening anyway.

                                                                                      1. 3

                                                                                        I assume you’ve run your share of open source projects, so you know that big sweeping changes are far easier to integrate early than later in a project’s life. Given that v1.0 is barely 6 months old and there are 867 contributors pumping features into the project, locking down the project and refusing any progress from the dozens (hundreds?) of vendors who are trying to make progress would be catastrophic to the youth and future of this project.

                                                                                        If you artificially stunt the growth of a project just to catch a breath and refactor some locks which may only be bothering people who aren’t willing to fix it, you create a very real risk of losing all momentum and starting to rot slowly while a fork continues happily on its way.

                                                                                        Perhaps our experiences of running open source projects differ vastly, but from my vantage point I see the team doing the best job they can—certainly a better job than I could.

                                                                                        If you’re up for opening a PR (or already have?) to increase the parallelism of the container creation, I’d be happy to collaborate on it with you.

                                                                                        1. 1

                                                                                          Those are very good points. Though I think there is a balance between adding features and ensuring quality. I don’t think docker is at risk of losing momentum right now. The only proprietary forks I know of are designed to fix the performance issues docker has.

                                                                                          Working on docker parallelism would be interesting, but I got out of containers for a reason. If I were to choose a containerization project at this time, I would prefer to build something docker-like for FreeBSD, but I’m not likely to do that either.

                                                                                    2. 8

                                                                                      I think you could simplify much of that to “it’s kind of like a fancy chroot plus a tarball of an entire system” or maybe “kind of like bsd jails or solaris zones”.

                                                                                      The experiences I have had with docker so far involved a team at $dayjob using it, and it simply made things more complex instead of less, reinforced sloppy practices (docker containers become a magical custom environment thingy and the app borderline refuses to work outside of it), and made any underlying issues with the shipped apps far more difficult to debug. Currently not a fan. Maybe it would be better if they used it better.

                                                                                      I guess it makes more sense if your app stack sprays files all over the place and/or is unfortunate enough to be bundled with most distros/oses (like python) and thus you are guaranteed that any installed version is going to be old as hell.

                                                                                      1. 3

                                                                                        Thank you so much for writing this up.

                                                                                        1. 2

                                                                                          If I’m understanding correctly, the big advantage is that I can deploy as many applications as I want on a machine and I can have their environments be completely isolated, meaning I don’t actually ever have to worry about other applications on that box requiring a specific version of something I might want a different version of etc? Seems like that would make a lot of sense once you have a large development team that loves to pile up a lot of applications in to the same machines, and they don’t want to care about what the other teams are doing.

                                                                                          Interestingly enough we drop a dozen different applications on our boxes, but our tooling is sufficiently consistent that we don’t too often run into issues of conflicting dependencies. Helps to ship binaries directly to production, no need for VMs or interpreters, but that addresses only a small fraction of the potential issues I can foresee.

                                                                                          1. 4

                                                                                            That’s a big advantage. The environment-isolation is nice because it allows dependencies and their configurations to move independently. It also acts as a nice organizing principle.

                                                                                            In my case, we support 2 main stacks (Ruby, JVM/Tomcat) and a few other smaller stacks. We also use 3 different databases (Oracle 11g, Oracle 12c, and Redis). This is split across about a dozen apps. Some of the Java apps run on 1.8, some on 1.7, one on 1.6. The Ruby apps are newer and all run on 2.1.2. Upgrading through versions of software is a somewhat laborious process because we’re pretty heavily regulated (we work with clinical trials, which means working with drugs, which means we’re good buddies with the FDA guy (by which I mean he hates us and wants to fine us into oblivion, which is the natural state of the FDA auditor)). Docker acts as a really convenient way to unify that mass of different stacks into a single deployable unit.

                                                                                            That’s not the only advantage, though – the ability to encapsulate the execution environment and make it part of your revision history is valuable even if you only have one stack. Docker is lightweight compared to Vagrant+chef / Vagrant+puppet / Vagrant+ansible - it’s just a step above shell scripts - so it’s easy to bring into your codebase (one file in your repo), and it makes it so you can have a consistent environment which is disposable, versioned, and versioned-after-compiled as well. The latter means that the product is also associated with a ‘version’ which is unique to the Dockerfile that created it; that’s useful for ensuring the right thing is going to production (though admittedly that’s not super hard in general, it does assert a bit more about the environment than just an embedded git sha). That low barrier to entry can pay off in development/CI too – if you’ve ever fought with jenkins to get it to run your tests because you can’t figure out what user it wants to pretend to be today (can’t tell that I’ve had that problem, huh?), docker is a good solution. Jenkins can just run a docker container that has a ci-runner script, you mount the application in the container as a volume (by passing -v /code:$JENKINS_VAR_THAT_POINTS_AT_THE_REPO), and it’ll go to town. You can also use that same container locally, which eliminates the impedance problem of ‘passes on my machine, not on CI’. Ditto with builds (as I mentioned above).

                                                                                            Another neat idea is to use docker to create a DIY build farm. Use some clustering service on top of a few beefy AWS servers, something really friendly like Shipyard. Give developers the shipyard tool and a container that builds your app, puts it in S3, then prints the URL. Now instead of building your app on your little dinky laptop, you can give it to Hanz and Franz up in cloud land and let them do the hard work. Your clustering service (Shipyard in this case) will automatically load-balance the builds across workers; add some autoscaling and a big company with a nontrivial compile time can get a lot of value with comparatively little effort. Not only that, that same system can work as a general job-execution framework. Make a docker container that does the job, run it against the cloud, done. That isn’t just me speculating on the latter part: I have docker containers that do arbitrary jobs like uploading things to offsite storage and stuff. I don’t have the cluster set up to run them (we have clusters for actually running apps, I haven’t built one for jobs), but it could be a pretty neat way to share resources.

                                                                                            1. 1

                                                                                              Interestingly enough we drop a dozen different applications on our boxes, but our tooling is sufficiently consistent that we don’t too often run into issues of conflicting dependencies.

                                                                                              That’s also been my experience. I hear a lot of complaints from people using Python and Ruby, though, who prefer virtualenv type setups, and now something like Docker. Perhaps it’s something to do with the way packages work in those ecosystems? I’ve used them only for small scripts so I don’t have much insight into the issues with packaging up big Python or Ruby apps.

                                                                                              In the C ecosystem, and others that work similarly, soname versioning has been good enough for my needs. It’s not really a problem to have multiple versions of a library installed simultaneously, and often Debian/Ubuntu have even taken care of the packaging for you—if you need libfoo3 for some apps and libfoo4 for others, no problem, just depend on both. And there’s always statically linking binaries as an escape hatch.