1. 18

  2. 9

    Why are engineers so bad at paying other engineers for their work?

    Well, in this particular case, it’s because the result of the work sucks. Trying to use SumoLogic to look at our own logs has been a very unpleasant experience for us. And after spending some time with the product, I realized that it’s intentional: the product seemingly exists only for the purpose of selling consulting services alongside it. That is, it doesn’t just solve the problem of aggregating a large volume of logs from several hosts; it also couples that with an unworkable browser-based UI, a custom query language, and no way to post-process the data. A simple thing like tail -f is considered a “premium” feature, which they call LiveTail™, and which is so limited that it’s incompatible with most of their own filters. And then you get blog posts from SumoLogic condescendingly telling you that “you can only do so much with command line tools” and that if you want to be a grown-up, you should start by attending a workshop that would teach you how to use their system.

    [/rant]

    Now, I understand that the general gist of the post is not about this, and I probably agree with the point it actually makes. But it did mention SumoLogic, so it got me started :-)

    1. 8

      It’s hard to take an article seriously if it considers security to be a commodity you can buy.

      1. 2

        Security is a commodity, like any other. You can totally pay people to handle the hard bits for you, and they’ll do a good job.

        Even basic security, by way of hosted platforms like Heroku and Amazon or managed email providers, is reasonably priced.

        1. 1

          Well, it depends on how you look at it. From a global perspective, having non-experts reinvent the wheel is more expensive and dangerous than using a prefabricated solution such as Cognito, Auth0, or Okta. Considering that proper security is a full-time job, commoditizing it can be the sensible thing to do.

        2. 6

          The real problem is that ‘hosted’ comes along for the ride with most business models.

          1. 2

            This. I would gladly pay a one-time fee to license software. If my organization can stand up a server for our product, we should be able to put up another one for logging/analytics/whatever else gets shoehorned into being a third-party service. It shouldn’t be rocket science to sell a piece of software with an install script, a Heroku Procfile, or whatever config looks like for AWS.

            1. 1

              Hmm, I dunno. I mean, sure, having a way to buy the software is great for those who self-host. But I would not always prefer it for a lot of non-core stuff.

              I get this argument for something like New Relic, which is really quite expensive (around $100 per machine). But operations work is pretty annoying, and not having to manage it is pretty great.

              We used to run a self-hosted Sentry instance. And it was basically fine! It worked as intended. But because it got real use, we needed to maintain it. So we moved over to their hosting and pay the $30/month to not deal with this, and we get updates and all the goodness.

          2. 5

            A provocative title, but pretty good content, especially on how engineers tend to underestimate their time costs, and how we’re often bad at justifying the cost vs. return of tools to management. A lot of the manager-vs-engineer headbutting I’ve seen results from talking past each other about whether engineering’s time is better spent here or there.

            1. 4

              Not nearly as much as management underestimates their time cost.

              Want to subscribe to one of these services?

              That takes money.

              Any idea just how much engineering you can do for the time cost of engaging management to spend money (for the lifetime of their product)?

              Any idea of how much fun it is for an engineer to do that?

              And when you need to run up another service for something with a tight deadline?

              And you lay out the schedule and you realise that by far the largest chunk of real time will be getting management to OK the spend?

              And if you jump through all these hoops… you look at the version control logs and realise that CEOs come and go like mayflies but the code lives on and on… and you know for certain the service you bought will shut down somewhere in the life of the code… and everything you engineered to rely on it will die.

              The first question I ask when I engineer a dependency is: “Can I pull the source into my repo? Can I build it, debug it, and patch it? Is there a viable, active upstream I can send patches to, that will respond and mainline them? Is there an active community around it? If upstream dies or goes a different way, will my code die or rumble on?”

              1. 2

                On the flip side, there is “Not Invented Here!” and “But my use case is 1% different than the standard use case, so I need to write my own version!” It’s all very organization dependent, of course, as to which ends up being the reason. And sometimes you do have to roll your own because that is the fastest route to completion.

            2. 5

              Yup, I really liked this article. Way too many engineers treat buy vs. build as a non-decision and succumb to NIH.

              Another thing that I think tends to make people avoid buying is that too many commercial solutions don’t account for the fact that many companies don’t, and for regulatory reasons CAN’T, trust services that exist on somebody else’s platform/cloud.

              1. 5

                I totally subscribe to the points that the author makes, regarding making poor trade-off decisions for build vs buy.

                Logging and metrics are a great example of this. However, in my small organisation, the cost of the metrics servers for our 20 application servers (just the servers) was greater than the leasing costs of the application servers themselves! Madness from microservices!

                Logging also has the unfortunate habit of leaking passwords or credentials when developers forget that logging output is actually, well, logged somewhere, or when debug/dev logging sneaks into production. Privacy, and in some cases a tamper-evident audit trail, is critically important.
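
                One common mitigation is a redaction filter in the logging pipeline. This is a minimal sketch in Python’s standard logging module; the secret patterns and logger names here are illustrative, not a vetted production list:

                ```python
                import logging
                import re

                # Patterns for values that should never reach a log file.
                # Illustrative only; a real deployment needs a reviewed list.
                SECRET_PATTERNS = [
                    re.compile(r"(password=)\S+", re.IGNORECASE),
                    re.compile(r"(authorization:\s*)\S+", re.IGNORECASE),
                ]

                class RedactingFilter(logging.Filter):
                    """Scrub secret-looking substrings from log records."""
                    def filter(self, record):
                        msg = record.getMessage()
                        for pattern in SECRET_PATTERNS:
                            msg = pattern.sub(r"\1[REDACTED]", msg)
                        record.msg = msg
                        record.args = ()
                        return True  # keep the (now scrubbed) record

                logger = logging.getLogger("app")
                handler = logging.StreamHandler()
                handler.addFilter(RedactingFilter())
                logger.addHandler(handler)
                ```

                It won’t catch everything (nothing will), but it makes the common “debug line with a credential in it” failure mode much less likely to survive into production logs.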

                Once you’ve got metrics being stored, you immediately find that the real effort, and value, is in creating and layering business-specific graphs, reports, and alerts on top of that raw data. No hosted solution can ever help you with this, as it’s always bespoke.
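
                To make “bespoke” concrete: a business rule is usually a tiny function over raw samples. A toy sketch, where the 2% error-rate threshold and the shape of the inputs are invented for illustration:

                ```python
                # Raw metric samples per window: request counts and error counts.
                # The threshold is a made-up business rule, not a standard.

                def error_rate(requests, errors):
                    """Fraction of requests that errored across the window."""
                    total = sum(requests)
                    return sum(errors) / total if total else 0.0

                def should_alert(requests, errors, threshold=0.02):
                    """Fire when the aggregate error rate exceeds the threshold."""
                    return error_rate(requests, errors) > threshold
                ```

                The storage layer (hosted or not) only hands you `requests` and `errors`; deciding that 2% matters, and to whom, is the part no vendor can sell you.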

                In both cases, setting up our own logging & monitoring tools was trivial effort, on our own kit. Once you’ve done that, the expertise is portable to almost any other business or customer, and you can knuckle down to the real effort of delivering business value through data insights.

                1. 3

                  collectd, graphite, riemann, graylog, influxdb, prometheus, etc. all deliver a hell of a punch on their own if you want to go the open-source route. Once you’ve developed a metrics/monitoring/logging stack once, you’re good to re-use it every time. The long-term payoff is huge.

                2. 2

                  We used an open source third-party program for one of our components. Because it was open source, we were able to modify it (add some additional logging for our needs). It wasn’t without issues, though (resource leaks over an extended period of time), and it did way too much, way more than we use (which I read as: a larger attack surface).

                  We looked at using the latest version of the software (we were at x.y.z and were looking at x.y.z+1), but so much had changed that we would be starting over with our modifications. And if we’re going to do that, why not just scrap the program entirely (there’s no indication the new version would fix the issues we had) and roll our own? It does just what we need, and because the code is simpler, we will have an easier time debugging it.