1.

    Hmm.

    I often speak about the spectrum of testing, from the finest-grained unit tests to full-system end-to-end tests.

    As soon as you move away from either end, the amount of setup and dependency-cutting you have to do increases.

    As soon as you move away from the finest-grained unit test, the fragility of the test increases (i.e. it tends to break due to changes in things other than the particular behaviour under test).

    Therefore I advocate finest-grained unit testing.

    What is that? In the C world, the smallest chunk I can throw at the linker is a .o file. That sort of defines it for me.

    In the Ruby world, I can usually step down to test smaller chunks.

    The aim is to thoroughly verify the required behaviour in a manner that is robust (the test only breaks due to changes in the code that implements the behaviour under test) and provides excellent defect localization: tell me which test case failed, and I will have an excellent idea of which line of code is broken.
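
    A rough sketch of what that looks like in C (the stack module and its interface here are invented for illustration): compile the module under test to its own .o, link a tiny driver against only that object file, and name each case so a failure points at one behaviour.

    ```c
    /* test_stack.c -- hypothetical finest-grained test driver.
     * Build and run, linking nothing but the one .o under test:
     *   cc -c stack.c
     *   cc test_stack.c stack.o -o test_stack && ./test_stack
     */
    #include <stdio.h>
    #include "stack.h"  /* assumed interface: struct stack, stack_init/push/pop/empty */

    static int failures = 0;

    /* Name each case so a failure localizes the defect to one behaviour. */
    #define CHECK(name, cond) \
        do { if (!(cond)) { printf("FAIL: %s\n", name); failures++; } } while (0)

    int main(void)
    {
        struct stack s;
        stack_init(&s);

        CHECK("new stack is empty", stack_empty(&s));

        stack_push(&s, 42);
        CHECK("push makes the stack non-empty", !stack_empty(&s));
        CHECK("pop returns the last pushed value", stack_pop(&s) == 42);
        CHECK("stack is empty again after pop", stack_empty(&s));

        return failures ? 1 : 0;
    }
    ```

    Since the driver links against stack.o and nothing else, a failing case can only implicate that one file.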

    1.

      The aim of “thoroughly verifying behavior” is a good one… sometimes. Consider the example I gave of 3D rendering, though: the only thing you need to verify is that the output looks good to a human. Or consider a prototype: you don’t need to verify the behavior; you’re just learning by doing and will then throw the code away. Automated tests, or at least fine-grained tests, are probably a waste of time. The real code will get them; they’re not needed for a prototype/spike.

      If you say “well, of course I wouldn’t do fine-grained tests on a prototype”, well, that’s my point: you have to start with goals. As experts we tend not to consciously see the steps in our thought process, so we go and tell junior programmers “finest-grained testing” and then get confused when they go and do it in places where it’s obviously (to us) inapplicable. And what we start with when we decide how to test is goals: you could test a C function in isolation, but it would take too long, so you don’t. Part of your goal is “ship code on time”. So it’s important to make that explicit.

      1.

        I only regret testing when I don’t do it. I almost never see a prototype replaced by “real” code. I’d prefer to build tests for a “prototype” for 2 reasons: 1) it’ll grow into the real thing sooner rather than later, and 2) I use tests to ferret out my goals anyway.

        Writing microtests (and refactoring) helps me write only the production code I need. I tend to ship faster with tests. There are exceptions, but they tend to have more to do with whether the code integrates with some external API than with how low-level the language is.

      2.

        Mike “GeePaw” Hill has been using the term Microtests (https://www.youtube.com/watch?v=H3LOyuqhaJA) for a while now. It’s a descriptive name, and I like it: lots of very, very tiny tests which give you confidence in your code while still allowing you to change or refactor it.
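
        For a concrete (if invented) flavour of that in C: the microtests below pin down the observable contract of a tiny clamp() function rather than its internals, so the implementation can be rewritten freely without breaking them.

        ```c
        /* Microtests for clamp(): each assert checks one behaviour of the
         * public contract, so the function body can be refactored at will. */
        #include <assert.h>

        static int clamp(int v, int lo, int hi)
        {
            /* One possible implementation; a rewrite (e.g. using ?:) would
             * pass the same microtests. */
            if (v < lo) return lo;
            if (v > hi) return hi;
            return v;
        }

        int main(void)
        {
            assert(clamp(5, 0, 10) == 5);    /* in range: unchanged */
            assert(clamp(-3, 0, 10) == 0);   /* below range: pinned to lo */
            assert(clamp(99, 0, 10) == 10);  /* above range: pinned to hi */
            assert(clamp(7, 7, 7) == 7);     /* degenerate range */
            return 0;
        }
        ```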

      3.

        I completely agree with this author that it’s really important to think about the appropriate ways to test the system you’re actually building, and unit tests are just one part of that. I think they’re a very useful part in almost all systems, but they aren’t everything and there are certainly situations where they aren’t worth it.

        I’m not really sure where the author is getting their apparent assumption that the terminology itself is bad. We still need a way to talk about different types of test, right? I do, however, totally agree that discussions which frame unit tests as the one true way should stop.

        1.

          (Original author here.)

          My basic argument is that current terminology (a) verges on the meaningless (“unit test” means very different things to different people) and (b) starts from the wrong place, i.e. means not ends.

          I may have overstated the case on (a), or at least not acknowledged the need for shared terminology of techniques that does have consistent meaning. But… I can’t help but wonder if even the technique terminology we use would be much better if we started from goals, figured out what kind of testing those goals implied, and then came up with technique terminology.

          For example, there’s a tendency to say “unit tests don’t include anything that interacts with an RDBMS”, and so for ORM-based systems people either use mocks (really not particularly useful) or feel guilty. But… starting from a goal-oriented point of view, we might come up with terminology that encompasses what is really the same technique, covering both traditional in-memory unit tests and those that add on a local RDBMS.
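
          As a concrete (and invented) illustration of that continuity, here’s a sketch using SQLite’s C API: the test body runs unchanged whether the database lives in memory or in a local file.

          ```c
          /* test_users.c -- one test body, two backing stores.
           * Build (assumes the SQLite dev package): cc test_users.c -lsqlite3
           */
          #include <assert.h>
          #include <sqlite3.h>
          #include <stdio.h>

          /* The behaviour under test, written against a database handle. */
          static int count_users(sqlite3 *db)
          {
              sqlite3_stmt *stmt;
              int count = -1;
              if (sqlite3_prepare_v2(db, "SELECT COUNT(*) FROM users",
                                     -1, &stmt, NULL) == SQLITE_OK
                  && sqlite3_step(stmt) == SQLITE_ROW)
                  count = sqlite3_column_int(stmt, 0);
              sqlite3_finalize(stmt);
              return count;
          }

          /* Identical test logic for either backing store. */
          static void test_count_users(const char *dbpath)
          {
              sqlite3 *db;
              remove(dbpath);  /* start clean; harmless no-op for ":memory:" */
              assert(sqlite3_open(dbpath, &db) == SQLITE_OK);
              sqlite3_exec(db, "CREATE TABLE users (name TEXT)", NULL, NULL, NULL);
              sqlite3_exec(db, "INSERT INTO users VALUES ('alice'), ('bob')",
                           NULL, NULL, NULL);
              assert(count_users(db) == 2);
              sqlite3_close(db);
          }

          int main(void)
          {
              test_count_users(":memory:");            /* "pure" in-memory test */
              test_count_users("/tmp/test_users.db");  /* test with a local RDBMS */
              printf("ok\n");
              return 0;
          }
          ```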

          1.

            I certainly find nothing to disagree with in that response, and thank you. I think the next logical step in the conversation you propose is to write some detailed examples of the goal-oriented strategy. :) I haven’t seen a lot of material on how to do that kind of thinking, and I bet it would be a useful resource for a lot of people.