1. 26
  1.  

  2. 11

    I don’t dislike this essay itself, but I dislike the conclusion that MBAs and startup executives come away with when they read it. They learn that JPL levels of code quality are extremely expensive, so they give up on code quality entirely. You can get most of the way there, much more cheaply, just by hiring better programmers and using modern languages.

    If you use a functional language like Haskell or Erlang, and you hire competent programmers, you can improve your software quality massively. You can cut your bug rate by at least an order of magnitude. That matters. If nothing else, it affects morale. Many bugs aren’t directly economically harmful, but talented engineers don’t want to work in a place where shitty software is tolerated. That slew of hundreds of “we’ve given up on trying to understand that” warnings might not mean anything to the business (because the code still works), but it can drive out talent.

    A typical commercial program probably doesn’t need JPL-level code quality. However, the current “Agile” standard is pathetic. It’s not just bugs, but also ill-thought-out features, code complexity, and architectural sloppiness. Almost all corporate code is shameful, unmaintainable, and destined to cost more to maintain than it would cost to replace. This is because the managerial levels of our industry are full of slimeballs who’ve been trying for decades to make massive teams of talentless, disengaged, commodity Scrum programmers in open-plan offices into a replacement for those ornery and supposedly expensive experts (who are actually the bargain). It’s fucking disgusting and the whole movement needs to die a fiery death.

    Does every line of code need to be NASA quality? Probably not. On the other hand, there are ways to reduce defect-related cost by at least an order of magnitude while keeping labor costs relatively constant. Get rid of open-plan offices. Use a language with static typing when possible. Avoid having deadlines set over programmers unless absolutely necessary (e.g. a rocket is launching next month). Hire competent programmers who might be older or female instead of loyal “cultural fits”.

    1. 11

      The difficulty I have in communicating this to the MBA comes from the fact that I don’t know that we’ll cut our bug rate by an order of magnitude; I just think we will. We (dev team at $work) have considered the tradeoffs of using Elm for our future frontend work, have effectively the whole dev team on board, and the effort is stalled there - there’s no obvious “hey management, here is the rationale for making this decision” tentpole, and the empirical evidence is ambiguous at best.

      1. 3

        You may find this anecdote worth your time in making that decision.

        But if this is a technical decision that your team has already made, I’d say begin using Elm. Do not ask management to make technical decisions.

        1. 4

          The problem with any human-performance argument is that the objective stuff is garbage and the useful information is subjective. For example, I can’t put meaningful numbers behind what we both know, which is that a statically typed language reduces bug-related costs (technical debt) and improves job satisfaction (a) if people know how to use it, and (b) on the (broad but generally correct) assumption that you’ll get better people with Haskell and Elm. Unfortunately, the only way one gets to demonstrate this (because it is a subjective argument) is to get a business person to trust one to take months or years to prove it the hard way, by building better companies.

          I came to a realization at some point in my career that most business executives aren’t actually idiots or psychopaths. The problem is that they need lieutenants and tend to trust idiots and psychopaths to fill such roles. When that doesn’t happen, it’s amazing and you can spend time actually doing work instead of justifying work (i.e. playing politics). Unfortunately, the more common pattern is dismal.

        2. 4

          There’s a problem where both the cost of preventing bugs and the cost of the bugs themselves are roughly logarithmic. The cost to a company of fixing a bug a week is not 4x the cost of fixing a bug a month; it’s probably about the same, drowned out by other factors. So even if a solution that takes you from four bugs a month to one is cheap, it’s not free, and therefore potentially a net loss.

          I think an order of magnitude is the bare minimum. If you can’t promise 1/10 the bugs, don’t even talk to me.

          I’ll stipulate for this thread that Erlang is 10x better. It’s also a sea change, and IMHO quite hard to impose after the fact.

          lmm’s comment also touched on this. The “minimum viable product” may still be making 90% of its users happy. That is simultaneously a disappointingly low bar and hard to meaningfully improve upon.

        3. 5

          I’ve been thinking more about this since our last conversation. My experience is still that any technology that makes it cheaper to lower the defect rate is going to result in faster/cheaper development and more-or-less constant defect rate.

          One explanation I’m playing with: a software organization might spend 50% of its time on activities intended primarily to prevent defects (testing, tooling, QA, …), while its defect rate is probably already at a point where 95% of potential customers are happy with it. So there are diminishing returns there. If you have a technology that makes defect finding twice as efficient, you can keep spending 50% of your time, and now 97.5% of potential customers are happy with your defect rate. Or you can switch to spending 33% of your time finding defects and 66% writing features, and increase your potential market by about 33% (because you increased your feature output by that much) at a defect rate no worse than before.

          This suggests an equilibrium point where you’re spending the same proportion of your time finding defects as the proportion of your potential customers who think your defect rate is too high. If taken at face value (and it’s very much a spherical-cow notion), that suggests that companies may presently be writing software that’s not buggy enough (or rather, has a lower defect rate than is optimal for them).
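
          A toy version of that model, just to make the arithmetic concrete - the 50%/95% figures and the assumption that customer unhappiness falls in inverse proportion to effective defect-finding effort are inputs to the sketch, not measurements:

          ```python
          # Spherical-cow sketch: a 2x-more-efficient defect-finding tool can be
          # spent either on a lower defect rate or on more feature work.
          # All constants are assumptions from the comment above, not data.

          def happy_fraction(defect_effort, efficiency=1.0,
                             baseline_effort=0.5, baseline_unhappy=0.05):
              """Assume the unhappy fraction shrinks in inverse proportion to
              effective defect-finding effort (a modelling assumption)."""
              effective = defect_effort * efficiency
              return 1.0 - baseline_unhappy * (baseline_effort / effective)

          # Status quo: 50% of time on defects, 95% of potential customers happy.
          print(happy_fraction(0.50))                 # 0.95

          # Option A: keep spending 50% with the 2x tool -> 97.5% happy.
          print(happy_fraction(0.50, efficiency=2.0)) # 0.975

          # Option B: drop to 33% defect time (66% on features, i.e. a third more
          # feature work); the defect rate is still no worse than it was.
          print(happy_fraction(0.33, efficiency=2.0)) # ~0.962
          ```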

          Compare the fact that having more efficient electrical devices makes people use more electricity, not less - as using the devices becomes cheaper, more use cases become viable.

          1. 7

            This is an interesting article, but I’d like to see something that extrapolates from these two observations:

            • We write incorrect software (software with bugs in it) because the cost of not doing so is prohibitively high.
            • Companies run on massive pieces of software because the usual cost savings are so huge they can’t afford not to.

            What this seems to imply is that we will see larger and larger companies bitten by buggy software. The exact details would vary from industry to industry. The most vulnerable companies seem to be those that aren’t software companies but still rely on huge amounts of software - we’ve seen stories of Toyota’s horrible engine software, of the problems at VW, etc.

            The macroeconomics here would seem to involve an increasingly chaotic marketplace, reinforcing the overall trend of company X vanishing when its software process explodes while other companies bound ahead by taking on technical debt until the moment of implosion - see Evernote, etc. In Evernote’s case, we can all volunteer ways they could have “had their cake and eaten it too”, but that’s beside the point: broadly, the tradeoff between creating maintainable, readable software and creating immediately profitable, featureful software is going to remain in practice, and so companies will grow by doing the latter. Twitter and Evernote both grew with this model; Twitter’s “stack” is famous for being a horrible piece of garbage (I remember meeting the head of Twitter operations, who said their largest expense had become electricity, from starting something like a Ruby instance per tweet). Unlike Evernote, Twitter’s business was promising enough, and its software’s operations simple enough, that it wound up with enough money and expertise to fix its mess and move on. So the economic question of when to settle for bad software is far from settled. In particular, as companies keep expanding, there will be more producers of terrible software in the future.

            I would mention that any methodology that aims to fix this, or claims to be able to fix this, would have to be easier to use, more productive, and more reliable than what it replaces. Otherwise, the economics of cutting corners is still going to be there.

            The situation of simple electrical wiring is instructive. If we had a situation where the laws of electricity were well known to electricians and architects but there were no electrical codes or inspections or similar codified standards, fires and explosions from electricity would be common - the economic incentive of a builder to cut corners would be opposed only by a home or office buyer’s ability to look at the thing and see that it worked. Of course, software isn’t like wiring in the sense that there is no explainable-to-a-moron standard of software development that prevents egregious bugs. But software is similar to electrical wiring in the sense that it can fail catastrophically with the failure point distant from the starting point. Of course, software doesn’t fail as often or as dangerously as poor wiring, but its less predictable failures still make for interesting times.

            One way this quandary might resolve itself is if software quality becomes a criterion investors use. Of course, since this aspect of a company’s operations is generally well hidden, that would be difficult. If there were some mandate to force code open - at least open enough to allow inspection if said code ran things that could have an impact beyond a company’s immediate customers - that might make code quality a criterion usable by investors. Of course, we know all the barriers against this.

            [insert some conclusion here]

            1. 2

              There are some irrelevant details there (such as gender makeup and dress code - those probably reflect federal law and culture rather than anything to do with writing correct software; I’m sure the people building the federal medical insurance exchange had the same superficial culture).

              I think the relevant details for this case study are:

              1. The head of the team signs a document that might be legally binding in case the software fails
              2. The failure of the software has huge costs and a huge personal stake (death of people you know, national pride)

              If you take away an item, such as 2 (deaths of people), you get things like the Mars Climate Orbiter. If you take away a detail of item 2, such as deaths of people you know, you get things like fighter/bomber avionics that reboot (I forget if this is the F-22 or the stealth fighter) or engines that suddenly reboot after an integer overflow in a clock (the 777 or the 787, I think).
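
              That clock-overflow failure mode is easy to sketch - the numbers below are illustrative only, not taken from any incident report; I’m just assuming a signed 32-bit counter of hundredths of a second:

              ```python
              # Illustrative only: a signed 32-bit uptime counter in hundredths of a
              # second silently wraps to a large negative value after ~248 days.
              INT32_MAX = 2**31 - 1
              TICKS_PER_SECOND = 100  # counter increments every 10 ms

              print((INT32_MAX / TICKS_PER_SECOND) / 86400)  # ~248.55 days to overflow

              def wrap_int32(ticks):
                  """Emulate two's-complement wraparound of a 32-bit signed counter."""
                  return (ticks + 2**31) % 2**32 - 2**31

              # One tick past the maximum and the counter jumps back by roughly 497
              # days' worth of ticks; anything expecting a monotonic clock misbehaves.
              print(wrap_int32(INT32_MAX + 1))  # -2147483648
              ```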

              I suspect that enforcing item 1 would lead to less software, produced at greater cost, that works better.

              1. 2

                After working in a variety of shops on a variety of products, this is about what it comes down to. If you want to improve this, use technologies that trim bugs earlier, or work in an industry where users care about bugs. Pick one.

                It’s my opinion that strong typing is the effective minimum to start the conversation (I am now inspired to hack on some stuff).

                1. 1

                  I like the “Real-World Testing” link that he added at the bottom. One thing you get from interaction with users is more chances to find out you’ve designed something wrong, i.e., you’ve done a correct (enough) implementation of something people don’t want or like to use. Like with more clear-cut bugs, post-release is the worst time to discover this sort of problem, so customer input up front and internal eyeballs on the design can be valuable.

                  Another thing, orthogonal to the price or cost of bugs, is getting the org to care, i.e., making sure the true costs of bugs are weighed in decision making. One hack I’ve seen is giving people a taste of life outside their specific role, like how Google has devs spend a tiny bit of time as site reliability engineers, which helps get software bugs that cause operational issues addressed. At work, all devs and sysadmins and the CEO spend some time doing support (apparently Basecamp does this too), and we get design feedback, ideas, and details that help us hunt down bugs. It’s not like this is something I can A/B test, but I feel like being that much closer to the client makes it a little harder to get too detached from users' priorities, or, put another way, encourages empathy.

                  Clearly some things differ wildly across shops and types of software. (Embedded firmware authors don’t tend to get the luxury of fixing bugs in the wild so much. Much software has a better-defined spec than a consumer-facing app.) One theme I do sense across different types of development is that there’s a breadth of approaches and concerns that the org manages to juggle.

                  This isn’t to detract from OP’s point: making quality cheaper is a game-changer for software. Like anybody, I spend a ridiculous amount of time rooting out straightforward bugs (cannot concatenate ‘str’ and ‘int’) and I <3 good tools. I’m working with Python for $dayjob, so maybe I should give Hypothesis a spin!
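
                  For anyone else who hasn’t tried it, a Hypothesis test is only a few lines - the run-length encoder below is a made-up example, not anything from $dayjob:

                  ```python
                  # Property-based test with Hypothesis: a round-trip property over
                  # random strings. encode/decode are hypothetical stand-ins.
                  from hypothesis import given, strategies as st

                  def encode(s):
                      """Naive run-length encoding: 'aaab' -> [('a', 3), ('b', 1)]."""
                      out = []
                      for ch in s:
                          if out and out[-1][0] == ch:
                              out[-1] = (ch, out[-1][1] + 1)
                          else:
                              out.append((ch, 1))
                      return out

                  def decode(pairs):
                      return "".join(ch * n for ch, n in pairs)

                  @given(st.text())
                  def test_round_trip(s):
                      # Hypothesis generates hundreds of strings, including the empty
                      # and unicode edge cases a hand-written example would miss.
                      assert decode(encode(s)) == s
                  ```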

                  1. 1

                    “aggressively pursue static code analysis” seems to be one way to make it cheaper to find bugs. Another would be to use languages that enable totality checking and limit mutable state. (Although it’s hard to come up with empirical evidence for this.)
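
                    Even staying in Python, you can get a weak version of both suggestions - the sketch below assumes a checker like mypy runs in CI; the Order class and describe function are hypothetical:

                    ```python
                    # Type hints give a static analyzer (e.g. mypy) something to check,
                    # and a frozen dataclass limits mutable state. Neither is totality
                    # checking, but both catch classes of bugs before runtime.
                    from dataclasses import dataclass

                    @dataclass(frozen=True)
                    class Order:
                        order_id: int
                        customer: str

                    def describe(order: Order) -> str:
                        # return "Order " + order.order_id  # mypy: str + int -> error
                        return "Order " + str(order.order_id) + " for " + order.customer

                    order = Order(order_id=42, customer="alice")
                    print(describe(order))
                    # order.order_id = 43  # FrozenInstanceError at runtime, and mypy
                    #                      # flags the write to a frozen field statically
                    ```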