1. 24

  2. 5

    I almost entirely agree with the article but am going to quibble about a specific point because I think the nuance on it matters:

    any real business:

    • Knows what their acceptable defect rate is

    • Is already operating at it

    I used to think this, but it isn’t really true.

    The truth is that “acceptable defect rate” is not a single number; it’s a balancing act between competing costs: the acceptable defect rate is the one at which lowering it further costs more than the difference between the new and the old rate saves you.

    This means that you can change the acceptable defect rate by lowering the cost of finding defects: an improvement in rate that isn’t worth it at 100 person-hours might be worth it at 50.
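
    A toy sketch of that balancing act (all dollar figures and rates below are invented for illustration): lowering the defect rate is worth it only while the saving from shipping fewer defects exceeds the cost of the engineering work, so halving the person-hours required moves the break-even point.

```python
# Toy model of the "acceptable defect rate" trade-off.
# All numbers are invented for illustration.
HOURLY_RATE = 80           # dollars per person-hour
COST_PER_DEFECT = 50       # dollars each shipped defect costs the business
UNITS_PER_PERIOD = 10_000  # units shipped per period

def worth_it(old_rate, new_rate, person_hours):
    """Is lowering the defect rate from old_rate to new_rate worth the work?"""
    saving = (old_rate - new_rate) * UNITS_PER_PERIOD * COST_PER_DEFECT
    cost = person_hours * HOURLY_RATE
    return saving > cost

# Going from a 2% to a 1% defect rate saves (0.01 * 10,000 * $50) = $5,000.
print(worth_it(0.02, 0.01, person_hours=100))  # work costs $8,000 -> False
print(worth_it(0.02, 0.01, person_hours=50))   # work costs $4,000 -> True
```

    The same rate improvement flips from "not worth it" to "worth it" purely because the cost of achieving it dropped.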

    1. 1

      I used to think this and obviously it’s theoretically true, but in practice (IME) there is very little elasticity. Clients don’t seem to compare prices in terms of defect rate; rather there’s a point at which they’ll drop a product/supplier. (And often only a few clients are truly important, again IME). So the defect rate/revenue curve ends up looking more like a step function.

      1. 1

        Well, no.

        Most companies engaged in software development have more than one client (and if they don’t, they’re de facto part of that client, and the same argument applies to the client). Different clients have different thresholds, and the sum of a large number of differing step functions looks like a curve.

        But moreover, even if this were true it would still not matter, because any defect rate above that threshold increases the level of client interaction you have: clients report bugs, you have to figure them out, and so on, and this sort of interaction is expensive.
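
        A quick numerical sketch of the step-functions-sum-to-a-curve claim (client thresholds and revenues are randomly invented): each client contributes a step function that drops to zero at their tolerance threshold, and summing a couple hundred of them yields an aggregate revenue curve that declines in many small steps rather than one cliff.

```python
# Sum of per-client step functions: each client pays full revenue while the
# defect rate is at or below their tolerance, and nothing once it exceeds it.
# Thresholds and revenues are randomly invented for illustration.
import random

random.seed(0)
clients = [(random.uniform(0.001, 0.05),   # tolerance threshold
            random.uniform(10, 100))       # revenue from this client
           for _ in range(200)]

def total_revenue(defect_rate):
    return sum(rev for threshold, rev in clients if defect_rate <= threshold)

rates = [i / 1000 for i in range(51)]      # defect rates from 0.0% to 5.0%
revs = [total_revenue(r) for r in rates]

# Aggregate revenue never increases as the defect rate rises...
assert all(a >= b for a, b in zip(revs, revs[1:]))
# ...and each individual cliff is only one client's revenue, so the sum of
# 200 step functions already looks like a fairly smooth declining curve.
drops = [a - b for a, b in zip(revs, revs[1:])]
print(f"largest single drop: {max(drops) / revs[0]:.1%} of total revenue")
```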

    2. 2
      • Idris if you really care
      • Haskell/F#/OCaml/Scala if you care a bit

      I’d love an elaboration on this thought. Anyone care to take a stab?

      1. 1

        The author is a fan of dependently typed languages.

        1. 4

          Sure, so I’ve heard a lot about Coq, but I hadn’t heard of many other dependently typed languages in large-scale use (such as Idris), much less tried any of them. I guess I should actually go and try some of the options.

          Edit: Okay, Idris looks awesome: http://docs.idris-lang.org/en/latest/faq/faq.html

          1. 2

            You know more than me then :) I wasn’t sure you’d heard of dependent types.

            Lots of folks around here can help you on that journey though. Might make for a good ask question on its own.

      2. 1

        “The free market” doesn’t work, because it doesn’t really exist, when it comes to people in organizations or technical choices. You don’t have an efficient market; you have organizational dynamics, human stupidity, and the power that accrues to people who are most skilled at exploiting said stupidity.

        Companies are punished in the very long term for dysfunction and underperformance, but it takes so damn long that the executive-type people who wreck companies (cost-externalizers) get promoted away from their damage before it happens. There are people who care about this, but they’re not the decision-makers, because most corporate decision-makers have the contacts to be just fine even if their current companies-of-residence go to hell.

        I don’t think it’s fair to trash C itself, though. LLVM is written in C++, and GHC, the main Haskell compiler, uses it as a backend. The fact is that it’s quite possible to write high-quality software in C. Otherwise, we’d be just as fucked in correctness-focused languages, because of problems in the lower layers on which those compilers and runtimes depend. Great code certainly can be written in C. It just takes time and competence in those writing the code. Without a culture dedicated to code quality (as opposed to the more typical business-driven deadline culture) it will not exist. But it can.

        The problem is that most companies are run by short-term-thinking, anti-intellectual, nontechnical executives who will never budget the time and risk necessary to write high-quality code in any language. Typical short-termist executive imbeciles won’t budget the time to do things right in a language like C or like Ruby, and if you bring up Haskell you run into syphilitic objections about the supposed difficulty of hiring Haskell developers (as if people can’t learn it; if you’ve hired people who are incapable of learning Haskell, you have even more problems).

        An efficient market wouldn’t tolerate such unqualified people in decision-making roles, but making an efficient market for people is impossible. Correctly aggregating price signals on a few hundred exchange-traded commodities is computationally easy for a market to do; picking leaders among people is pretty much impossible for it, because the feedback (macroscopic effects on organizational performance) occurs over years, and the game ends up being won by people who are good at playing organizational-political games but not good for anything else.

        TL;DR: many people do care about code quality and correctness. Unfortunately, those are not the people who tend to win at organizational politics and end up calling the shots.

        1. 1

          Some ML implementations have no C in their history. As I said, it is possible to write good code in C, but I do think that C makes it particularly hard even compared to its contemporaries and to languages with a similar systems focus.

        2. 1


          I don’t think Ruby is the fairest argument. MRI was written in C; Linux was written in C. MRI has a reputation as a terrible pig in performance, just ghastly in many ways. Linux indeed has bugs, but it has a reputation as fairly performant.

          I agree with the general direction of the article, that with discipline and abstraction, infrastructure as code would be easy. But it reminds me of a Kafka quote: “The crows like to insist that a single crow could destroy the heavens. This is incontestably true, but it says nothing against the heavens, because the heavens merely mean this: the impossibility of crows.”

          I would further add that part of what made MRI such a mess is neither C nor the skill of Matz but the incredibly human-friendly yet parser-unfriendly syntax of the Ruby language. I think we’ve covered the challenges of creating a human-friendly program syntax here before.