Here’s why tautological tests can be OK: they aren’t necessarily tautological in the time dimension. As optimizations are added, refactoring happens, and bugs are introduced, that tautological unit test might catch important regressions.
I once wrote a test to ensure generated IDs did not collide, because a collision would cause catastrophic data loss. I received pushback because the test was essentially a tautology. It wasn’t long before the test failed: someone had broken the algorithm during an overzealous refactor.
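A minimal sketch of such a test, using a hypothetical `generate_id` based on UUIDs (the anecdote’s actual generator isn’t shown):

```python
import uuid

def generate_id():
    # Hypothetical stand-in for the generator from the anecdote.
    return uuid.uuid4().hex

def test_generated_ids_do_not_collide():
    # Looks tautological today, but guards against a future refactor
    # that makes the generator deterministic or truncates the IDs.
    ids = [generate_id() for _ in range(100_000)]
    assert len(set(ids)) == len(ids)

test_generated_ids_do_not_collide()
```

The test asserts nothing about what an ID looks like, only that the no-collision invariant holds, which is exactly the property a refactor is most likely to break silently.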
Never calculate an expected value to check against within your test.
Maybe I’m being pedantic, but I think this statement is too heavy-handed. One case where I used it to good effect was a university project where I had to implement an algorithm for a specific problem that lent itself to dynamic programming and some other neat tricks. Although I could have generated the test cases and calculated the expected values by hand, each one would have taken 5 to 10 minutes because of all the steps and thinking involved. Instead, I wrote a brute-force solver that couldn’t scale beyond 35 items, randomly generated thousands of inputs, and ran them through the naive brute-forcer to get reference outputs. Then I ran the heavily optimized algorithm against the same inputs. In this case I’m not really comparing a function against a known truth; I’m checking that two functions behave identically even though they’re implemented differently, and trusting the brute-force solution as a proxy for the truth. Obviously this shouldn’t be the only test, but I derived a lot of value from generating solutions like this.
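A sketch of this oracle-testing pattern, using 0/1 knapsack as a stand-in problem (the actual project’s problem isn’t specified): a naive brute-force solver is treated as the truth and checked against an optimized DP implementation over random inputs.

```python
import random
from itertools import combinations

def knapsack_brute_force(weights, values, capacity):
    # Naive oracle: try every subset. Exponential, but fine for small n.
    n = len(weights)
    best = 0
    for r in range(n + 1):
        for subset in combinations(range(n), r):
            if sum(weights[i] for i in subset) <= capacity:
                best = max(best, sum(values[i] for i in subset))
    return best

def knapsack_dp(weights, values, capacity):
    # Optimized implementation under test: classic 1-D DP.
    dp = [0] * (capacity + 1)
    for w, v in zip(weights, values):
        for c in range(capacity, w - 1, -1):
            dp[c] = max(dp[c], dp[c - w] + v)
    return dp[capacity]

def test_dp_matches_oracle(trials=200):
    rng = random.Random(0)
    for _ in range(trials):
        n = rng.randint(1, 10)
        weights = [rng.randint(1, 20) for _ in range(n)]
        values = [rng.randint(1, 50) for _ in range(n)]
        capacity = rng.randint(1, 60)
        assert knapsack_dp(weights, values, capacity) == \
               knapsack_brute_force(weights, values, capacity)

test_dp_matches_oracle()
```

The seeded RNG keeps failures reproducible; in a real suite you would also log the failing input so it can be promoted to a fixed regression case.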
I agree. That statement also seems to exclude things like property-based testing, which probably isn’t actually the intent.
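A hand-rolled sketch of the property-based idea, using a hypothetical `sorted_dedup` function; in practice a library like Hypothesis would generate the inputs and shrink failing cases:

```python
import random

def sorted_dedup(xs):
    # Toy function under test (hypothetical): sort and remove duplicates.
    return sorted(set(xs))

def test_properties(trials=500):
    rng = random.Random(1)
    for _ in range(trials):
        xs = [rng.randint(-50, 50) for _ in range(rng.randint(0, 30))]
        out = sorted_dedup(xs)
        # Properties checked without computing any expected value by hand:
        assert all(a < b for a, b in zip(out, out[1:]))  # strictly increasing
        assert set(out) == set(xs)                       # same elements survive

test_properties()
```

No expected output is ever calculated inside the test; the assertions describe invariants that must hold for every input, which is precisely what a blanket "never calculate an expected value" rule would seem to forbid.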
As an anecdote: in some code I wrote recently, I had the opposite experience several times. I did some calculations by hand, put those in the tests, and the tests failed. When I dug deeper, I finally realized that my hand calculations were wrong and the code was correct.