1. 18
  2. 8

    I did a talk about this called ‘The Deep Synergy Between Testability And Good Design.’ The case I made was that difficulty in unit testing often indicates design problems. I listed about 10 cases where that appeared to be the case. The crux of my argument was: writing tests is writing a program to understand your code. If it’s hard to do that, it’s probably hard to understand the code also.

    I like this paper because it shows some real empirical correlation. The places where it fails are very interesting, particularly the cases of long methods and complexity. I suspect that testing enables complexity because tests allow us to write code that is ‘correct’ but still not easy to understand at a glance, whereas something like parameter count for methods tends to go lower because it’s extra work to write tests for methods with more parameters.

    This space where the ergonomics of practice ‘nudge’ design is very interesting.
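    A toy sketch of that nudge (hypothetical function and numbers, nothing from the paper): every test of a many-parameter method has to supply all of its arguments, so the cost of writing tests grows with the parameter list, and grouping parameters makes each test cheaper to write.

    ```python
    from dataclasses import dataclass

    # A five-parameter method: every test has to spell out all five arguments.
    def shipping_cost(weight_kg, distance_km, express, fragile, insured):
        cost = weight_kg * 2 + distance_km
        if express:
            cost *= 2
        if fragile:
            cost += 3
        if insured:
            cost += 1
        return cost

    # Even the simplest test case needs five arguments:
    assert shipping_cost(2, 100, False, False, False) == 104

    # Grouping the parameters into one object with defaults shrinks the
    # per-test effort: the direction the ergonomics of testing push the design.
    @dataclass
    class Shipment:
        weight_kg: int
        distance_km: int
        express: bool = False
        fragile: bool = False
        insured: bool = False

    def shipping_cost_v2(s):
        cost = s.weight_kg * 2 + s.distance_km
        if s.express:
            cost *= 2
        if s.fragile:
            cost += 3
        if s.insured:
            cost += 1
        return cost

    assert shipping_cost_v2(Shipment(weight_kg=2, distance_km=100)) == 104
    ```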

    1. 5

      I think I tried out the tests-first approach to unit testing probably 5 times before I finally understood it. Tests-first helped me write better software not only because the unit tests would have my back when refactoring, but because tests-first unit testing made me write software with a better structure.

      1. 1

        Would you be willing to share your experiences? What mistakes did you make during the first 4 attempts? What made it finally click?

        1. 1

          I think I had several misconceptions about unit tests. I was under the impression that unit tests must test at the implementation-detail level, whereas now I mostly work at a functional level. So in my first attempts I would think of a solution in my head, then write a test for that solution (one that contained assumptions about the implementation of the function), and then write the implementation, so it wasn’t really tests-first. I would also write several tests at once in that style and then implement, instead of writing one test and making a minimal implementation that would just fulfil it. I also tried to use mocks/fakes for tests a lot, and that made matters worse; nowadays I use them very rarely. I still see to it that my test suite runs fast enough, though. I am not involved much in writing client code for services, so the fact that I can work this way may be an aspect of the kind of programming I do.

          Does that make sense?

          It finally clicked when I got an introduction to TDD in a Kent Beck-style red-green-refactor workflow. Now I am a TDD zealot :) Forever grateful to the colleague who showed me the good way.

          Also https://www.youtube.com/watch?v=Xu5EhKVZdV8 taught me a lot and put me on a better track to unit testing, I think.
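          For anyone unfamiliar, the red-green-refactor loop can be sketched in miniature like this (a hypothetical example, not from the workshop): write one failing test, make the smallest change that passes it, refactor, then repeat, asserting only on observable behaviour.

          ```python
          # Red: each assert below was written first, while it still failed.
          # Green: `pad_id` is the minimal implementation that makes them pass.
          # Refactor: clean up with the tests kept green, then add the next test.
          # Note the tests only check inputs and outputs, never implementation details.

          def pad_id(n, width=5):
              """Zero-pad an integer id to a fixed width."""
              return str(n).zfill(width)

          # Tests accumulated one at a time:
          assert pad_id(7) == "00007"
          assert pad_id(12345) == "12345"
          assert pad_id(7, width=3) == "007"
          ```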

      2. 2

        It’s good to see more research into practices/testing! I do think it’s a bit strange to say that this is how unit testing “affects” codebases though - this is only looking at correlation, not causation.

        1. 1

          I wonder if it would be possible to use version control history to see if there’s a difference between test-first and test-after codebases.

        2. 1

          I love the format of this! It’s nice to see people trying to make arguments with data and self-critique.

          1. 1

            How do you measure cohesion?

            1. 1

              Interesting findings. I might have missed something, but did it actually establish causation? If fewer parameters and more tests tend to occur together, it might be that methods with fewer parameters are easier to test and hence tested more frequently. In general, if A correlates with B, then 1. A causes B, 2. B causes A, or 3. there’s a C that causes both A and B.

              I’m curious what experimental design could be used to establish a causal link between testing and various code outcomes. Any papers you would recommend on the topic?