    I enjoyed reading this article. I like the way it wanders between thought experiments, real (if contrived) programming experiments, test-breaking code, and VW’s test-breaking ETDD. But I’m not sure how I feel about the conclusion.

    I blame the testing regime here for trusting the engine manufacturers too much.

    It seems to me that lots and lots of things have to go through some kind of adversarial compliance test before being sold or used. Emissions testing for cars is one example, but what about safety tests? Does STDD lead to cars that pop out their airbags 0.2s before impact when they detect they’re on a planned collision run? Do my electronics' EM emissions climb past their conformance-test levels once I bring them home?

    In most of these contexts there is an external negative incentive that, I hope, mostly gets manufacturers to comply with the intent of the tests: getting caught cheating has costs, whether fines, loss of rights, or public perception. If these external factors can align the incentives of testers and manufacturers, then maybe we’re all going to be ok?

    For software developers, on the other hand, there is usually very little at stake, even in adversarial contexts. If my Promises/A+ implementation hacks the test to pass an edge case nobody uses, I’ll feel a little bit bad, one person might get pretty frustrated one time and open a GitHub issue, maybe even fix it for me, and that’s about it.
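    For what it’s worth, the kind of hack I have in mind looks something like this. It’s a deliberately silly sketch; the library, the flag the test harness supposedly sets, and the dropped edge case are all invented for illustration, not taken from any real Promises/A+ implementation:

    ```typescript
    // Invented example: only follow the spec's rejection path when the code
    // suspects the conformance suite is watching (via a made-up global flag).
    const underConformanceTest =
      typeof (globalThis as any).__APLUS_TEST_ADAPTER__ !== "undefined";

    function sortaThen<T, U>(
      p: Promise<T>,
      onFulfilled: (value: T) => U,
      onRejected?: (reason: unknown) => U
    ): Promise<U> {
      if (underConformanceTest) {
        // Under test: do the real thing, including the rejection edge case.
        return p.then(onFulfilled, onRejected);
      }
      // In the wild: quietly drop the rejection handler; "nobody uses it anyway".
      return p.then(onFulfilled);
    }
    ```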


    The article links to this chart that illustrates VW’s detection of a test environment vs. a real driving situation. It’s awesome. It has no legend, so here’s the explanation from the article:

    The horizontal axis is the amount of time since the Volkswagen engine was turned on. The vertical axis is the distance driven. The coloured lines mark pre-programmed settings inside the engine control unit; if the usage profile crosses any of these coloured lines, it triggers a change in behaviour.
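    Here is roughly how I picture the logic the chart is plotting, as a hedged sketch: only the two axes and the “crossing a line triggers a change” rule come from the article; the profile shape, names, and numbers below are all made up.

    ```typescript
    // Sketch of the described logic: track time since engine start and distance
    // driven, compare the trajectory against pre-programmed profiles (the
    // coloured lines), and change behaviour once any of them is crossed.
    interface UsageProfile {
      name: string;
      // Maximum distance (km) this profile allows at a given time (s) since start.
      maxDistanceAt(secondsSinceStart: number): number;
    }

    function crossedAnyProfile(
      secondsSinceStart: number,
      distanceKm: number,
      profiles: UsageProfile[]
    ): boolean {
      return profiles.some((p) => distanceKm > p.maxDistanceAt(secondsSinceStart));
    }

    // Purely illustrative envelope; the real thresholds are whatever VW programmed.
    const testCycleEnvelope: UsageProfile = {
      name: "made-up test-cycle envelope",
      maxDistanceAt: (t) => 0.004 * t, // invented slope, roughly 14 km/h
    };

    // 12 km driven within 30 minutes crosses the invented envelope, so this
    // would be treated as real driving rather than a test run.
    const realDriving = crossedAnyProfile(1800, 12, [testCycleEnvelope]);
    ```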