1. 47

  2. 13

    As an ops-turned-programmer guy, this is golden.

    Docker is even worse, as it fundamentally lies about the ability to run one command and get the same thing. Run docker pull ubuntu on different days and you’ll get different hashes for the resulting image – it’s maddening. That might be okay for the convenience factor if Docker itself could be managed properly, but… I’ll just leave this excerpt from the official docs here:

    Upgrade Docker

    To install the latest version of Docker, use the standard -N flag with wget:

    $ wget -N https://get.docker.com/ | sh

    It’s turtles all the way down, and then at the bottom a large middle finger.
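
    (To make the changing-hashes point above concrete, a check along these lines will show it – the exact IDs obviously vary from day to day:)

    $ docker pull ubuntu:latest
    $ docker inspect -f '{{.Id}}' ubuntu:latest
    # note the ID, run the same two commands a week later, and it may well differ,
    # because "latest" is a moving tag rather than a fixed snapshot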

    1. 3

      This is just ridiculous when you consider that their “setup script” is effectively a glorified wrapper to create an apt/yum/etc. source entry.

      1. 1

        Node does the same thing. What’s ridiculous about it?

        1. 6

          They’re recommending piping an arbitrary shell script from the internet straight into a shell.

          There is a huge discussion about all the reasons this is terrible over on the HN thread about this same article, but basically it’s both dangerous and pointless.

          They’re already providing an apt repo - you could add the repo with two (edit: sorry, I guess it’s four, but the last two are ridiculously short) commands on the shell, and be infinitely more aware of what you’re doing.
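
          (For reference, a sketch of what those four commands might look like – the key URL, repo path and release name here are illustrative, so check the vendor’s own docs:)

          $ wget -qO- https://deb.nodesource.com/gpgkey/nodesource.gpg.key | sudo apt-key add -
          $ echo 'deb https://deb.nodesource.com/node_0.12 trusty main' | sudo tee /etc/apt/sources.list.d/nodesource.list
          $ sudo apt-get update
          $ sudo apt-get install nodejs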

          1. 2

            It’s not dangerous and it’s not pointless.

            It’s not dangerous because it’s not “the internet” at large, but whoever controls deb.nodesource.com and the nodesource https certs. This is the same authentication mechanism you’ll be using to get your binaries. Why wouldn’t you trust them to provide a bash script to run an installer if you trust them to provide the binaries which manage your infrastructure or serve your website?

            The point is to make a “simple and easy” installer that takes 3 seconds for anyone to set up instead of a minute or longer (for someone not familiar with adding sources to their package manager). Just like a page that takes >1s to load will lead people to abandon it, installation instructions which involve adding things to your package manager lead to fewer installs.

            1. 4

              I really suggest you go read the HN thread. There are plenty of ways it’s dangerous, even down to the simple fact that if someone can’t follow 4 lines of instructions (they literally have to create a 1-line text file and run three commands) they probably shouldn’t be trying to run a server in the first place.

              1. 1

                My google-fu is failing me… could you link it, please?

                I think it makes sense to do the manual add-repository on scripts which set up your production machines to be super clear about what’s going on to someone who might be looking at it later, but if you’re just playing around, the copy-paste pipe-to-bash shortcut is a nice one to have, and helps more people get it up and running quickly. I certainly don’t see it as a valid reason to judge a project…

                1. 2

                  https://news.ycombinator.com/item?id=9419188

                  Forewarning - it’s quite a long thread.

                  I’m not dismissing the project - I’m simply saying that the work to add a repo manually is hardly any more than it is to actually run their script, and it’s pretty rudimentary stuff like creating/editing a 1-line text file and running a couple of commands.

                  Percona have a good example of this in the documentation for their Apt repo: http://www.percona.com/doc/percona-server/5.6/installation/apt_repo.html

                  1. 1

                    Much huff and puff and group-think on that thread and very little reasoning things through.

                    “Percona have a good example of this in the documentation for their Apt repo”

                    What does that last command do? Does it execute arbitrary code?

                    1. 1

                      As I said, the reasons against using these curl | sh scripts are numerous, and expressed in that thread.

                      The last command installs signed packages from their repository. So no, it doesn’t run “arbitrary” commands in the same way a shell script can.

                      1. 1

                        Could you name one of these numerous reasons? The original “dangerous and pointless” doesn’t seem to hold water.

                        What happens when these packages are installed? Can they include executables that can run as part of the installation? And how did you get the public keys used to verify the signatures? You copy-pasted them from the site you trust because of HTTPS.

                        Sounds to me as if the security model is either identical or nearly identical - you’re trusting Starfield (the CA) to verify the ownership of a domain and someone who can modify the site at that domain to run arbitrary code on your system. Am I missing anything?
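
                        (A rough sketch of why the trust root looks the same either way – the URLs here are illustrative:)

                        # "manual" route: the signing key itself arrives over the same HTTPS channel
                        $ wget -qO- https://deb.nodesource.com/gpgkey/nodesource.gpg.key | sudo apt-key add -
                        # pipe-to-shell route: same domain, same certificate, same CA
                        $ curl -sL https://deb.nodesource.com/setup | sudo bash -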

    2. 7

      This whole Matryoshka doll world of virtual machines and containers is to me a declaration of failure of Operating Systems… or, perhaps more likely, as the guy says, of operating system sysadmins.

      Everything all these Matryoshkas are attempting to achieve is, if you look at the old brochures, what Operating Systems were supposed to give you.

      Hmm. Actually I think the OP is right. Matryoshka systems are the symptom.

      1. 2

        I wouldn’t say Operating Systems. FreeBSD and Debian have always been easy to deal with. Heck I would probably guess Windows Systems are also easy to deal with.

        I suppose one reason for this container complexity is that people are dealing with scale issues before any scaling is required!

        Containers want to solve the scaling issue by

        • Declarative Programming
        • Auto-Scaling

        whilst having zero downtime. That’s hard to solve.

        1. 1

          Hmm. If that was truly the desire, I’d head for something like https://nixos.org/nix/ or http://www.gnu.org/software/guix/
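
          (For anyone curious, day-to-day Nix usage is roughly along these lines – the exact attribute name depends on your channel:)

          $ nix-env -iA nixpkgs.nodejs
          # or try a package in a throwaway environment without installing anything globally
          $ nix-shell -p nodejs --run 'node --version'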

          But personally I think the whole thing reeks of schedule pressure and technical debt writ large.

      2. 4

        I can’t disagree, but containers are only the messenger; they make this obliviousness to trust issues visible. The author does say that, though!

        1. 6

          Trust is not the main issue here. The real issue is the culture of blobs and of pinning library versions. Part of the problem is the lack of backward compatibility in software and libraries.

          So you end up with software depending on very old libraries (versions of said library which are insecure). On the other hand, in a distribution ecosystem, libraries are shared between programs and kept up to date.

          The blob culture is docker, vm disk images, nix, wget | sudo bash,…

          1. 8

            I think there’s a cultural problem, yes, but it’s largely a result of the design of Unix no longer being fit for purpose, given the way that technology has evolved from the ‘70s. Much of the accumulated wisdom of sysadmins is dealing with decisions taken when Bell Labs was full of swinging neckbeards and VAX 11/750s, and has little to offer to the world where there simply aren’t users in the Unix sense any longer.

            People are flopping around like gaffed fish not because they want to but because the received knowledge no longer fits the facts and they are being asked to make decisions without any confidence in what they know. We as an industry simply do not know how to build complex systems; the world of Unix software has finally tipped over the edge of the curve from “useful if obscure” to “pointless cargo-cult nonsense” in the minds of many.

            1. 3

              I’m somewhat befuddled how the problem reappeared in such a virulent form. Upgrades vs. compatibility are an issue, yes, but that’s what interface-version numbering addresses. If a new library version is compatible in interface, you upgrade in place; if it has incompatible changes, you bump the soname and install alongside, without replacing the old one. At some point you have to decide when to retire support for old versions, but until then they can live side-by-side. For example there is no problem on Debian with having libfoo2 and libfoo3 both installed. In the web world, REST interface design even seems to sort-of follow this approach.

              I don’t see how baking in some foojs1.2.3 into my Docker image helps this. Either 1.2.3 is an interface that is still supported, still gets bugfixes, etc., in which case you could give it a distinguishing interface-version name and just have it live in the parent image. Or it is not supported, in which case it is not safe for me to have it baked into an image at all. Is it some deficiency of version handling in tools like npm/CPAN/etc.?
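
              (The libfoo2/libfoo3 coexistence above is hypothetical only in name; on a Debian-ish box the mechanics look something like this, with made-up package and binary names:)

              $ dpkg -l 'libfoo*'
              # both runtime packages can be installed side by side, each shipping its own soname:
              #   /usr/lib/x86_64-linux-gnu/libfoo.so.2
              #   /usr/lib/x86_64-linux-gnu/libfoo.so.3
              # and each binary keeps linking against the soname it was built for:
              $ ldd /usr/bin/some-program | grep libfoo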

              1. 2

                I mean, version pinning is a defensive move. Unfortunately, library creators differ very dramatically in their attitude towards backward compatibility. Yes, it would be ideal if nobody ever used libraries that can’t be trusted to be stable and well-tested… but even in that scenario, nobody wants to spend a month making sure that dependency upgrades don’t introduce regressions, three months into a tight six-month timeline. Version pinning makes a lot of sense at that timescale, as long as there’s a plan to not let that turn into versions that are years old…

            2. 7

              As I said elsewhere, I believe this is a result of the “Kumbaya” approach to project/team management, where everyone is more concerned with “everyone’s opinion is valid” and less concerned with technical skill in the required field.

              How many tech startups or even bootstrapped agency firms have zero dedicated (or even shared/part time) qualified ops/infrastructure staff?

              1. 16

                I dislike this characterization of it. It’s a phenomenon, but your phrasing almost suggests that you see a connection to political ideology, which I think is unwarranted.

                I conceptualize this as the phenomenon of upper management having no idea how to evaluate technical questions, and therefore being at the mercy of whoever they hire as the top technical person.

                People seeking top-tech-person jobs from startups view themselves as rising stars and their primary career goal is to be able to make the technical decisions they believe in, rather than attempting to learn from other perspectives before making important commitments.

                This would work, if they were right about everything. Nobody is right about everything.

                So we wind up with mandates from on high that do not correspond to reality, and everybody who has to work with it has to decide whether to go along with it quietly, seek employment elsewhere, or die on this particular hill.

                Most people go along with it quietly.

                If I had to put a name on it, I’d call it rock-star programmer culture.

                1. 8

                  No, not at all - I’m not sure quite what political ideology you think I’m trying to tie it to, but I’m not.

                  I’m simply talking about a situation where you have a mass of developers, with no practical training or experience in sysadmin/ops/infrastructure/etc roles, and because there is no actual expertise, “everyone’s opinion is valid” - hence the name “Kumbaya” - no one gets offended, everyone is all happy happy - until the shit hits the fan because it turns out giving everyone in the company FTP access via a single account to a shared box was actually a bad idea.

                  Yes, an aspect of it comes from developers who somehow think the term “DevOps” means “Developers can do Ops” and that because they can write a simple http listener in nodejs, they can now run a mission-critical system on it, but the real problem is that no one acknowledges that none of these opinions about how to solve a particular problem are backed by actual experience or knowledge.

                  1. 7

                    Thanks - heartening to hear that you don’t see it as political. :) Historically, “no one gets offended” is a criticism some conservatives use for what they perceive some liberals to believe in, and the association of that critique with folk song traditions from the 1960s is very strong; that’s why it seemed to be coming from a politics angle to me. I realize that the meme has been around a long time now and has generalized beyond that context.

                    Do I get my diplomacy achievement now? :)

                    Yes, I’ve definitely been in situations where nobody had some area of needed expertise, and rather than recognize this, they muddled along as if there was no possible way to get that expertise. This can be through every idea getting a “yes”, or through everyone yelling loudly about whose pet idea will be used, or through reinventing the wheel as an in-house infrastructure project.

                    I don’t find it to only happen with regard to operations, although operations is one of the things most likely to be dramatically different from the expertise that is present, so it very often is devs trying to do ops, yes. But it can also be trying to build fake transactions on top of an eventually-consistent key-value store but not understanding what read-write consistency is, or writing a new ORM under the belief that it somehow isn’t an ORM, or… all of my personal examples relate to databases, but oh well. :)

                    I do agree very much that it’s a confusion of opinions with … experience and knowledge. In the specific problem domain. I don’t think we are really in disagreement, but I place a lot more emphasis than I think you do, on the specific situation where somebody is an expert on a particular problem, and they’re treated as experts on all problems.

                    In the somewhat healthier cases, where the person who’s respected by nontechnical management does know they don’t know everything, they may defer to other people - which can look like what you’re describing, with the decision being made on the basis of personality rather than engineering, because knowing who to defer to is hard, too!

                    1. 3

                      Ah right sorry, no I wasn’t trying to tie it to “PC” etc.

                      I have also seen the situation where someone is an expert (let’s say a senior, experienced developer), and it is then taken for granted that they are an expert at ops. To me it’s very much like the way that non-tech people (i.e. relatives, friends) will assume someone in the industry does everything they see/hear about in news/pop culture. A friend is a sysadmin or network engineer, sees physicists on Big Bang Theory making a mobile app, and says “hey, that’s like what you do, right?”. I think it’s more befuddling because these are people who should know better.

                      1. 2

                        Heh, good analogy to how non-technical people assume all computer people are fungible!

                        (No matter how many times I remind them that it’s literally been fifteen years since I last interacted with Windows.)

              2. 2

                AWS/Xen and Puppet/Ansible are great. I understand the point of them. I still don’t understand why I’d want a container instead. The container method is just full of problems in my mind.

                1. 5

                  The reason for containers is quite good, actually. As we run more services, it becomes preferable to increase the number of services per machine in order to save money. But services might conflict with each other, so a container isolates them. Docker is actually playing catchup to what FreeBSD figured out 15 years ago and Solaris implemented (better) shortly after. Illumos is basically an OS for running nothing but containers. Puppet/Ansible do not provide isolation and it becomes a game of “who else is running on the same machine as me” when something breaks.
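
                  (A rough sketch of what that isolation buys in practice, with arbitrary images and limits:)

                  # two services packed onto one box, each fenced off with its own filesystem and resource caps
                  $ docker run -d --name svc-a --memory 256m --cpu-shares 512 nginx
                  $ docker run -d --name svc-b --memory 256m --cpu-shares 512 redis
                  # each container only sees its own processes, files and (by default) ports
                  $ docker top svc-a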

                  1. 1

                    If your service is “messy” and affects other things running on the same box, multiple smaller Xen VMs can be a good solution. (Edit: this somewhat assumes that it’s a third-party thing you can’t “fix”. If it’s first party, i.e. you are the developer, why is it “messy”?)

                    Given that a very common docker setup is literally one program/daemon per container (i.e. a “LAMP” stack would become a web server container, a db container, and possibly a cache container) there is definitely a proportion of users who are using it in place of a more “regular” configuration management tool, and I would imagine these are largely developers, rather than Ops, as it removes the “requirement” for them to know how to tune/configure apache, varnish, memcache, mysql, postgres, redis, whatever - they just grab an existing image and go.
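
                    (i.e. something in the spirit of the following, with purely illustrative image names:)

                    $ docker run -d --name db    -e MYSQL_ROOT_PASSWORD=secret mysql
                    $ docker run -d --name cache memcached
                    $ docker run -d --name web   --link db:db --link cache:cache -p 80:80 some-apache-php-image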