1. 16
  1. 5

    While this article looks at safety by analysing outcomes in a medical context, I think a lot of the thinking in there could be ported over to running the kind of software systems that many of us here are responsible for.

    The core idea resonated really strongly with me. We stand to learn a lot from systems that are quietly successful, rather than focusing mostly on how we fixed the ones that loudly failed.

    It also spoke to an idea I agree with strongly: approaches that assume everything can be solved simply by adding another process or rule for people to follow doom us to the same sub-par outcomes we see today. Or as phrased more eloquently in the article:

    you cannot inspect safety or quality into a process: the people who do the process create safety

    1. 2

      It also suggests that policies don’t create our successes, which is probably not what most people want to hear.

    2. 2

      The best thing I have read on safety in years, especially in realms where severe consequences are of very low probability.

      On the silly side, I can’t help but mention “Carter’s Corollary to Murphy’s Law”.

      Things only ever go right so that they may go more spectacularly wrong later.

      I believe the universe is run on this basis, and there is an underlying field called the “Murphy’s Law Potential”, much like an electric field.