1. 6
    1. 3

      A few years ago I went to a talk by Uncle Bob at a conference, where he was pushing a developer pledge that included committing to using TDD in all projects. As a Haskell developer, my immediate response was: but what about types? Since then I’ve worked at a few places where testing was done quite well, and have somewhat changed my view on testing, but I still think TDD is too much and, as the article says, locks down the code too soon and makes refactoring painful.

      I’ve realised that my goal when writing tests for Haskell code is to first produce a failing test when a bug is found, and then make that bug impossible to write in the first place, so the test itself can no longer even be expressed. This isn’t always possible, and in many cases it’s the outer edges of the code I want tests for: parsing external input is usually of the form String -> Either Error SomeWellFormedType, and I want tests to ensure that all the strings I expect to parse do, and the ones I expect not to don’t. Within the app, though, working as hard as possible to make impossible states unrepresentable is often not too hard.
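      The boundary pattern described above can be sketched as, say, a port parser. This is a hypothetical illustration, not code from the comment: the names Port, portNumber, and parsePort are invented. The point is that the constructor is the only way to build the type, so an out-of-range value is unrepresentable past the parsing edge.

```haskell
module Boundary (Port, portNumber, parsePort) where

import Text.Read (readMaybe)

-- The constructor is not exported, so code inside the app cannot
-- manufacture an out-of-range Port: that bug is unrepresentable there.
newtype Port = Port { portNumber :: Int }
  deriving (Eq, Show)

-- The only way in: the edge parser that the tests focus on.
parsePort :: String -> Either String Port
parsePort s =
  case readMaybe s of
    Nothing -> Left ("not a number: " ++ s)
    Just n
      | n < 1 || n > 65535 -> Left ("out of range: " ++ show n)
      | otherwise          -> Right (Port n)
```

      The tests then live entirely at this edge: strings that should parse, and strings that shouldn't.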

    2. 1

      If the code is supposed to work then I don’t want to touch the code at all. Instead, I add the test but make sure the test is supposed to fail, perhaps by saying the factors of 91 are 5 and 13. Seeing the failure is a check that I didn’t make a stupid mistake in writing the test. Then I fix the test and see that it passes.
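      That sequence might look like the following sketch, where factors is a hypothetical unit (prime factors in ascending order), invented here for illustration:

```haskell
-- Hypothetical unit under test: prime factors in ascending order.
factors :: Int -> [Int]
factors = go 2
  where
    go _ 1 = []
    go d n
      | d * d > n      = [n]
      | n `mod` d == 0 = d : go d (n `div` d)
      | otherwise      = go (d + 1) n

-- Step 1: a deliberately wrong expectation, written only to see it fail.
--   factors 91 == [5, 13]   -- fails, because 5 does not divide 91
-- Step 2: the corrected expectation, which should now pass.
checkFactors :: Bool
checkFactors = factors 91 == [7, 13]
```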

      The unit and the test are integrated and thus rely upon each other’s correctness. In the form of a truth table:

       Unit Correct | Test Correct | Test Passes
      --------------+--------------+-------------
            ✅      |      ✅      |   ✅  (1)
            ✅      |      ❌      |   ?   (2)
            ❌      |      ✅      |   ❌  (3)
            ❌      |      ❌      |   ?   (4)
      

      No one advocates writing a test and never seeing it fail; otherwise the correctness of both the unit and the test remains indeterminate.

      TDD advocates write a failing case (case 3) and then make it pass (case 1).

      The author’s logic and process assume that “the code is supposed to work.” But writing an incorrect test and then fixing it, without changing the unit, doesn’t show anything at all. And I have lost count of the times I’ve seen a test pass without actually checking the underlying unit.
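      One way a test can pass without checking the unit is when the fixture is already in the form the unit is supposed to produce. A small hypothetical sketch (normalise and the fixture are invented for illustration):

```haskell
import Data.Char (toLower)

-- The unit supposedly under test.
normalise :: String -> String
normalise = map toLower

-- A vacuous test: the fixture is already lower-case, so this assertion
-- holds even if normalise were replaced by id. It never exercises the
-- behaviour it claims to check.
vacuousTest :: Bool
vacuousTest =
  let input = "hello"
  in normalise input == input
```

      Deliberately breaking the unit is exactly what exposes a test like this: the assertion keeps passing, which tells you the test was never testing anything.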

      When I add a test that should pass, I add it and see it pass (case 1), and then break the unit (case 3) to gain confidence in the test’s correctness. Fixing the unit is but a git checkout -- unit away…