1. 27
  1. 5

    This seems like a good compromise to me. The tools that provide safety eventually fail, but you need social pressure to avoid devs saying ‘f*** it. We’ll do it live.’ every day.

    1. 3

      A bit on the opposite end, but this has always been one of my favorite CLI flags on a piece of software (conceptually; I don’t think I ever actually needed to use it): MySQL’s --i-am-a-dummy https://www.percona.com/blog/2017/03/06/mysql-i-am-a-dummy/
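      For context: the flag is an alias for --safe-updates, which (among other limits) makes the client refuse UPDATE/DELETE statements that lack a WHERE clause. A toy Python sketch of just that WHERE-clause check, much simplified from what MySQL actually enforces (the real guard requires a key-based WHERE or a LIMIT, and also caps SELECT result sizes):

      ```python
      import re

      def check_safe_updates(sql: str) -> bool:
          """Toy version of MySQL's sql_safe_updates guard:
          reject UPDATE/DELETE statements that have no WHERE clause."""
          stmt = sql.strip().rstrip(";")
          verb = stmt.split(None, 1)[0].upper() if stmt else ""
          if verb in ("UPDATE", "DELETE") and not re.search(r"\bWHERE\b", stmt, re.IGNORECASE):
              return False  # would nuke every row -- refuse
          return True
      ```

      The point of the flag is exactly this kind of cheap, dumb check: it costs nothing when you’re doing the right thing and saves you from the one fat-fingered statement that matters.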

      1. 1

        The last thing you want is to normalize the use of a safety override. Best practices in software aren’t usually “written in blood” like they are in the “real” engineering disciplines, but they still need to be taken seriously. The number of outages, privacy leaks, data-loss events, and other terrible things could be greatly reduced if we could just learn from our own collective history.

        I’m extremely wary of this kind of thinking. If you give people a tool with sharp edges and let them use it in production environments where the blast radius is large, someone, someday, will use it in anger without actually understanding the implications and bring down the house.

        There are some EPIC internal post mortems where I work due to just such occurrences.

        So, while I can appreciate that there needs to be an ‘in case of emergency, break glass’ tool around for when it really IS an emergency and the regular tools won’t do, I think, to the author’s point, that both the mechanisms protecting against ill-conceived use AND the auditing around ‘you did an incredibly dangerous thing’ need to be very crisp and thorough, so the organization can react appropriately when it happens.
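        As a sketch of what I mean (names and the audit store are hypothetical; a real one would write to an append-only log and page someone): the break-glass path forces a stated reason and records who/what/when before the dangerous action runs, so even a failed attempt leaves a trail.

        ```python
        import json
        import os
        import time

        AUDIT_LOG = []  # stand-in for a real append-only audit store

        def break_glass(action, reason: str):
            """Run a dangerous action only with an explicit reason,
            recording who/what/when for later review."""
            if not reason.strip():
                raise ValueError("break-glass use requires a stated reason")
            entry = {
                "user": os.environ.get("USER", "unknown"),
                "action": getattr(action, "__name__", repr(action)),
                "reason": reason,
                "time": time.time(),
            }
            # Audit BEFORE acting, so failed attempts are logged too.
            AUDIT_LOG.append(json.dumps(entry))
            return action()
        ```

        The ordering matters: if the audit write happens after the action, the one record you most want (the action that took everything down) is the one you’re least likely to have.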

        It’s been my experience that, even when such safeguards are put in place, the potential for misuse and unintentional abuse is still there, and over time I’ve seen most such tools either removed or made sufficiently difficult to use that they’re no longer a threat to production systems.