1. 14
  1.  

  2. 12

    The points about teamwork ring true; the teams have to work together.

    But I come from a history of sysadmin/ops work, way back, and saying the job was about putting out fires is damned near insulting.

    I - we, really - always did the best we could to have config management and repeatable deployments, before the cloud even existed.

    Looking at the juniors of today, I see them putting out fires too.

    1. 6

      That’s just it - repeatable setup of any software stack is pretty much the holy grail for competent technical staff managing computers - whether they’re mobile phones, user laptops/desktops, virtual servers, or room-sized clusters.
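
      To make “repeatable setup” concrete, here’s a toy sketch of the idea in Python - the path and the config contents are made up, and writing under /etc would need root - the whole point being that a second run changes nothing:

      ```python
      #!/usr/bin/env python3
      """Toy illustration of repeatable setup: running it twice changes nothing.
      The target path and contents are hypothetical."""
      from pathlib import Path

      WANTED = "ntp_server = ntp.example.com\n"   # hypothetical desired config
      TARGET = Path("/etc/example/app.conf")      # hypothetical target file (needs root)

      def ensure_config() -> bool:
          """Write the desired config only if it differs; return True if anything changed."""
          current = TARGET.read_text() if TARGET.exists() else None
          if current == WANTED:
              return False                         # already converged, do nothing
          TARGET.parent.mkdir(parents=True, exist_ok=True)
          TARGET.write_text(WANTED)
          return True

      if __name__ == "__main__":
          print("changed" if ensure_config() else "already in desired state")
      ```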

      Some hipster idiot with a nodejs project and a travis file can tell himself he’s devops-ing all he wants; it’s when some other idiot hires him to run servers based on that that I get worried.

      Rant/Not-Quite-Grey-Yet-Beard time:

      We wrote god damned batch scripts and deployed shit to remote workstations over 128K ISDN lines ~12 years ago, but even then there were teams in sister departments (same role, different part of the org, same task to achieve) who refused our offer to use our scripted setup, because they just hired more goons for a weekend to walk around a campus doing the same manual steps over and over. My point is basically what you said: automating setup is nothing new, it’s just more visible now.

      1. 5

        Nor do people remember cfengine or FAI.

        I installed a Dell PowerEdge 2850 in London, from Helsinki, using tftp and FAI in, like, 2006 or so. Not as badass as ISDN, but maybe I’m younger. It was a cool thing though, just like installing at the office, but not. And using DRAC for that remotely. Good times!

        RCS in /etc/, where now you’d use etckeeper or - unfortunately - Docker.

        The DevOps stuff comes from a pedigree that is mostly forgotten :(

        1. 2

          I was pretty new at that point (graduated 3 years earlier), and it was ISDN because the area was very rural and not much else was available.

    2. 5

      Bad things happen when you let the stateless service expert have a go at fixing the distributed database. State is pets, not cattle. I’d bet 80%+ of companies don’t need to run their own databases though.

      Economies of scale make cloud providers cheaper and cheaper, pushing the inflection point at which it makes sense to run bare metal farther and farther out. There are reasons other than cost to stay off the cloud for the time being: legal, reliability, the huge friction of moving out, etc… But those are going away. The real reason Google heavily pushes k8s is to make it easier for orgs to move into GCE when they decide to make the switch. Let’s see if AWS & Azure add some sort of k8s support to sip from the funnel :)

      Eventually much of the product and service world will be running on lambda / cloud functions / azure functions. No more capacity planning. Ops as we know it will be devs running a framework that handles the config management, deploys, etc…
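
      For anyone who hasn’t touched these yet, this is roughly the entire deployable unit on something like AWS Lambda with the Python runtime (the event shape below assumes an API Gateway proxy integration, and the greeting is just a placeholder) - capacity never appears anywhere in the code:

      ```python
      import json

      # A complete "service": one handler function, no servers, instances,
      # or capacity numbers anywhere. The platform scales invocations for you.
      def lambda_handler(event, context):
          name = (event.get("queryStringParameters") or {}).get("name", "world")
          return {
              "statusCode": 200,
              "headers": {"Content-Type": "application/json"},
              "body": json.dumps({"message": f"hello, {name}"}),
          }
      ```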

      There will still be infra jobs at the cloud providers, but they sometimes look down their noses at outside talent, and I’m not sure I would be all that satisfied working at one, competing with tons of competent engineers for every last morsel of intellectually stimulating work.

      I’m focusing on multi-DC/cloud stateful systems for the next few years, then maybe it will be time to get a PhD in a country with a social safety net.

      1. 3

        I tend to agree, except for:

        > Eventually much of the product and service world will be running on lambda / cloud functions / azure functions

        There will always be an argument against hard-locking yourself to a particular vendor, which is what the serverless models of today do. Additionally, the incentive to migrate a fully-automated, microservice’d k8s stack to a lambda stack is questionable - kinda like how Facebook still runs on a sort of monolith because the friction of moving to microservices isn’t worth it yet - and I fail to see a ton of inherent value in lambda over a well-designed microservice’d architecture.

        1. 1

          I actually just accepted a job offer at Serverless (the company), with the personal goal of helping people do cloud steering the way that CDN steering is done now, making the providers fight against each other for business.

          A lambda function can run a monolith or a stateless microservice, without capacity planning or (capacity-related) on-call. You can even do FFI. I’ve worked on the kubernetes and mesos ecosystems, and in my biased opinion the promise of serverless is much closer to what devs want to use. They all have their uses, but my hypothesis is that most engineers really don’t want to think about containers or capacity. For stateful systems you need to do this work, but I think there’s no good reason we’re forcing most engineers to think about these things.
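
          To illustrate the monolith point, here’s a sketch of one function fronting several internal routes - the routes and the API Gateway-style event shape are made up for illustration:

          ```python
          import json

          # A tiny "monolith" living inside a single function: all routing is
          # in-process, dispatched on the request path.
          def list_users(event):
              return {"users": ["alice", "bob"]}

          def get_status(event):
              return {"status": "ok"}

          ROUTES = {"/users": list_users, "/status": get_status}

          def lambda_handler(event, context):
              handler = ROUTES.get(event.get("path", "/"))
              if handler is None:
                  return {"statusCode": 404, "body": json.dumps({"error": "not found"})}
              return {"statusCode": 200, "body": json.dumps(handler(event))}
          ```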

          We’re going to see things pretty similar to lambda start popping up on top of k8s and mesos. I doubt capacity will be that much of a problem for people to self-manage, since most orgs are like 7-25% utilized at peak.

          Right now, mesos & k8s have more tooling around visibility etc… but this is a temporary, non-durable advantage. Serverless now has last-mover advantage.

          1. 1

            I hope you can understand my skepticism; I work in healthcare, where things like serverless architecture are met with heavy grains of salt, so I’m a little disconnected from the product/service world. I’m an ops person, and serverless architecture is intimidating, which leaves me biased against it. I hope that I’ll still have a job in the container/automation space because it’s been really fun; maybe backending some of the serverless stuff is where I’ll end up. /streamofconsciousness

            1. 1

              There’s a looooooot of hype, and it is pretty immature. Lots of questions remain around visibility, debugging, keeping track of functions, service discovery, security, etc… There’s a lot coming down the pipe, and I think this will eventually be a nicer place to build products in general, but it’s the early days for sure. Also, framing what amounts to autoscaling - without having to think about a VM/container or load balancing - as a “function” or “serverless” is often a challenge and a source of abstraction abuse. I think of it as a deploy target rather than a way to structure your application.