1. 67

  2. 18

    Another example of inadvertent algorithmic cruelty.

    1. 9

      Note that software is not the problem here: it’s the humans. Before the advent of computers, humans used to cause the same kinds of grief by e.g. mindlessly sending a letter in response to noting a vacancy. Back then it depended on the fallible humans who happened to be responsible for the task, and it was hard to guarantee it wouldn’t happen again; now a software change can ensure it won’t.

      1. 2

        That last part is the crux, but it’s also really, really hard to get right. A software change could ensure this won’t happen again, but will the software actually get fixed? If it does, will the new version actually get deployed everywhere the previous version had been running? What are the criteria the software uses to decide it shouldn’t send an email, and are there other situations that were missed even in the new version?

        Humans aren’t always great at empathy, either, but the sheer prevalence of software now (e.g. compared to 30–40 years ago) means that there are orders of magnitude more situations in which this kind of thing could happen. To put it cynically, software now gives us the ability to be callous at scale. It seems like it would take good intentions and real empathy and careful execution, at every stage of the development process, to prevent software from inflicting the kind of pain the OP describes.

      2. 6

        In this situation, I’d rather deal with an automated message like this than with the alternative, to be honest.

        1. 2

          I’m not sure this is cruel. To paraphrase an orange site comment: Imagine what would have happened automatically if this message hadn’t been sent. This is arguably the best outcome.

          1. 2

            The thing probably should’ve sent a different email. Less “GO LIONS!” cheerleading would’ve been nice.

            (the nasty trick is making sure you don’t accidentally send the “Sorry for your loss” email when nobody’s actually dead; a rough sketch of that check is below)
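
            A minimal sketch of that check, with every name made up for illustration (a Recipient record with a “possibly deceased” flag and a separate human-verified flag): routine mail is suppressed as soon as the flag is set, and the condolence template only goes out after verification, so the default when in doubt is to send nothing.

            ```python
            from dataclasses import dataclass
            from enum import Enum, auto


            class Template(Enum):
                ROUTINE = auto()      # newsletters, "GO LIONS!"-style mail
                CONDOLENCE = auto()   # "sorry for your loss"
                NONE = auto()         # send nothing at all


            @dataclass
            class Recipient:
                email: str
                marked_deceased: bool = False    # set by any upstream signal, possibly wrong
                deceased_verified: bool = False  # set only after a human confirms


            def choose_template(r: Recipient) -> Template:
                """Decide what, if anything, to send this recipient."""
                if not r.marked_deceased:
                    return Template.ROUTINE
                if r.deceased_verified:
                    return Template.CONDOLENCE
                # Flagged but unverified: the safe default is silence, which avoids
                # both failure modes above (cheerful mail to the bereaved, and a
                # condolence note when nobody has actually died).
                return Template.NONE


            print(choose_template(Recipient("a@example.com")))                        # ROUTINE
            print(choose_template(Recipient("b@example.com", marked_deceased=True)))  # NONE
            print(choose_template(Recipient("c@example.com", marked_deceased=True,
                                            deceased_verified=True)))                 # CONDOLENCE
            ```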