1. 44

  2. 9

    This really seems more like the tragedy of “100% slavish adherence to policy”.

    Having 100% coverage is pretty much essential if I am going to trust your library or framework as a dependency.

    For business applications, be as sloppy as you want^Wcan get away with.

    1. 17

      I don’t think I’ve actually looked at test coverage, or even the number of tests, in a dependency I’ve used. Is it common for people to post those things? I might be too cynical, but I wouldn’t trust any of those things to convince myself that a library isn’t broken. It’s almost certainly broken.

      1. 2

        For new projects that I care about I usually check coverage on my dependencies, especially if it’s something like math or algorithms. Those things are comparatively easy to test and are super important to get right.

        Think of it this way:

        If you have 3 dependencies, each of which has 50% code coverage, and your application has 100% code coverage–the final artifact is only 62.5% covered.

        For serious engineering, we have to be responsible for our dependencies–or at least aware of their quality.
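
        To make the arithmetic explicit, here’s a minimal Python sketch treating effective coverage as a LoC-weighted average (the equal component sizes behind the 62.5% figure are an assumption, as are all the numbers below):

        ```python
        # Effective line coverage of the final artifact, computed as the
        # LoC-weighted average of each component's coverage. All sizes here
        # are illustrative assumptions, not real measurements.
        def effective_coverage(components):
            """components: list of (lines_of_code, coverage_fraction) pairs."""
            total_loc = sum(loc for loc, _ in components)
            covered_loc = sum(loc * cov for loc, cov in components)
            return covered_loc / total_loc

        # App and three dependencies of equal size, as in the example above:
        print(effective_coverage([(1000, 1.0), (1000, 0.5), (1000, 0.5), (1000, 0.5)]))  # 0.625

        # The 97-line app with three one-line dependencies from the reply below:
        print(effective_coverage([(97, 1.0), (1, 0.0), (1, 0.0), (1, 0.0)]))  # 0.97
        ```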

        1. 2

          wouldn’t it depend on the size of the app and its dependencies? if it were a 97 line app and those dependencies were left-pad, right-pad, and center, wouldn’t you be at 97% coverage?

          1. 2

            I usually assume that my dependencies are similar in LoC to my main application, or larger–if they’re simple little things, I might as well just write them myself or use the standard library (you are using a language with a decent standard library, yes? ;) ).

            To wit: there is probably a lot more code in the Rails or Angular frameworks than there is in the apps using them; ditto jQuery.

            1. 2

              Fair. I just checked the project I’m working on (in Go) and our dependencies are vastly larger than our own code.

          2. 1

            > If you have 3 dependencies, each of which has 50% code coverage, and your application has 100% code coverage–the final artifact is only 62.5% covered.

            This is certainly nit-picking, but an argument could be made that your artifact does not necessarily use the full dependency.

        2. 11

          > Having 100% coverage is pretty much essential if I am going to trust your library or framework as a dependency.

          See “The Impossibility of Complete Testing”: the reason is that what really matters is coverage of the input domain, not coverage of lines of code.

          See also boundary testing for an approach that’s good enough in practice.
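
          A tiny hypothetical in Python to show the gap between the two notions of coverage (the `can_vote` function is invented for illustration):

          ```python
          # Hypothetical function with an off-by-one bug on a domain boundary.
          def can_vote(age):
              return age > 18  # bug: should be age >= 18

          # These two tests execute every line, so line coverage reports 100%:
          assert can_vote(30) is True
          assert can_vote(10) is False

          # Boundary testing probes the edges of the input domain instead,
          # and catches the bug sitting exactly on the boundary:
          assert can_vote(18) is True  # fails under the bug
          ```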

          1. 3

            A common refrain on this theme is “100% coverage tells you nothing”, since the tests could be bad.

            I find the inverse more compelling: “90% coverage tells you that 10% of the code doesn’t have any tests.”

            1. 2

              Of course, is that 10% untested because it’s simple enough that tests are pointless, or because it’s complex enough that nobody knows how to test it?

              I don’t think coverage is a useful metric on its own. You need to know more about a project than just which lines of code get exercised by its test suite to properly judge it.

              1. 1

                I think the simple stuff should be (incidentally) covered by your integration tests, and it’s useful to see when it isn’t (often it’s a sign of dead code that can be deleted, or of a “shadow feature” that’s not officially supported but users care about).

                Anyone have examples of simple code not used by a feature? Devtool scripts come to mind.

                > I don’t think coverage is a useful metric on its own

                It’s not; 100% coverage is the absence of information about untested code.

          2. 2

            Totally naive question here: isn’t what you’re really looking for 100% code coverage from the point of view of the public API surface? If the public API of a library is “100% covered”, I’m not interested in knowing whether there are also tests covering every private method where a library developer has needed to populate a hash table!

            1. 2

              That’s not sufficient, because there could be all kinds of weird cases where things are broken internally but, given a handful of tests, it still appears externally that there is complete conformance to the API.

              Code coverage is not the same as API compliance, and they are not substitutes for one another.

              1. 4

                Perhaps part of the problem is treating coverage as a one-dimensional metric. Problems are more likely to arise from the interaction of related features, and it’s easy to get “100%” coverage that tests each in isolation but does not test the complete combinatoric space at all.
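
                A hypothetical sketch in Python (the pricing function is invented for illustration):

                ```python
                # Two options that are each correct alone but interact incorrectly.
                def total(price, coupon=0.0, tax_rate=0.0):
                    # bug: the coupon should come off *before* tax is applied
                    return price * (1 + tax_rate) - coupon

                # Testing each feature in isolation executes every line: 100% coverage.
                assert total(100, coupon=10) == 90.0
                assert total(100, tax_rate=0.25) == 125.0

                # The combination is still wrong: tax is charged on the pre-coupon price.
                assert total(100, coupon=10, tax_rate=0.25) == 112.5  # fails: returns 115.0
                ```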

              2. 1

                Some testing libraries don’t let you test private methods at all, which kind of makes sense.

                I remember running into this with PHPUnit while working on a Laravel contract last summer.

            2. 3

              I have found that I can often get the biggest bang for my buck by starting with integration testing: just create a directory of input files and expected-output files and test over that. If you find bugs, add more test files to exercise them, as well as unit tests to cover whatever (presumably subtle) corner case caused the bug.
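
              A minimal Python sketch of that loop (the directory layout, file naming, and the `process` callable are assumptions for illustration):

              ```python
              import glob
              import os

              def check_golden_files(process, case_dir="tests/cases"):
                  """Run `process` over every *.input file and compare to its *.expected twin."""
                  for input_path in sorted(glob.glob(os.path.join(case_dir, "*.input"))):
                      expected_path = input_path[: -len(".input")] + ".expected"
                      with open(input_path) as f:
                          actual = process(f.read())
                      with open(expected_path) as f:
                          expected = f.read()
                      assert actual == expected, f"mismatch for {input_path}"

              # When a bug turns up, a new .input/.expected pair pins the fix,
              # alongside a unit test for the corner case that caused it.
              ```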

              1. 4

                Cargo cult programming.

                1. 7

                  It’s like the dual of cargo cult programming :) Cargo cult programming is when you know why you are doing something, but not how it actually works. Here they know exactly how the code they’re writing works, but are unclear on why they’re doing it.

                  1. 3

                    https://en.wikipedia.org/wiki/Cargo_cult_programming :

                    “Cargo cult programming can also refer to the results of applying a design pattern or coding style blindly without understanding the reasons behind that design principle.”

                2. 2

                  Another angle on coverage to remember is the benefits it can bring outside code correctness. In high-assurance safety or security, vendors are required to provide clear statements of requirements, features, source code, tests, and traceability between all of these. A requirement with no code is an implementation gap. A requirement with no analysis or test is a QA gap. A set of code not connected to a requirement or feature is possibly dead code or a backdoor opportunity.

                  Outside high-assurance, I recommend Cleanroom’s policy on code review: let the complexity and importance of the code decide how much verification it will get.