Any adventure with TDD is a misadventure.
EDIT: In response to Daniel.
Some research:
http://biblio.gdinwiddie.com/biblio/StudiesOfTestDrivenDevelopment
https://pdfs.semanticscholar.org/ebfd/1d5422a12e8d2bbaae8392300dd4ed2d552e.pdf
Obviously I’m not a fan of TDD; however, I’m also not a fan of waterfall. All those studies were against the waterfall model. My opinion is that there is a third way.
A more lengthy opinion by me: https://mempko.wordpress.com/2010/08/15/theory-of-relative-dependency-and-tdd/
Not just my opinion: http://rbcs-us.com/documents/Why-Most-Unit-Testing-is-Waste.pdf
Should’ve been your original comment. High-signal, low-noise sites prefer comments with specific claims backed by evidence. It’s how my account survived countering fads and popular BS on Hacker News, among other places; I actually got upvotes more than graying out. Then your quip can be the intro to that info, or the conclusion at the end.
Btw, you might find it interesting that the waterfall model never even existed, as far as the tech people go. It was actually a paper on iterative software development, with amazing insight for its time, that invented waterfall as an example of what not to do. One man’s misappropriation of that graphic for executive and management types created all the bullshit that followed. It was “dang” on HN who pointed me to other work in the comments that showed the guy responsible. He apologized, but the damage was done and the battle against it continues. Unreal, eh?
https://www.cs.umd.edu/class/spring2003/cmsc838p/Process/waterfall.pdf
https://news.ycombinator.com/item?id=10927241
EDIT: Decided to submit that as a story since people might find it interesting.
Yes! I think the lesson with waterfall is that people never read your work beyond the first page. Waterfall became a reality when NATO and the US military made it required practice. So sad.
You sure? What’s your data on this? I try to collect such stuff to piece together the history where I can.
Try to contribute some substance to the discussion; this isn’t reddit.
I updated my comment above; I hope it’s to your satisfaction. Also, I was the first and only person to engage. If anything, hopefully it will lead to an ACTUAL discussion ;-) Note that it usually takes a bit of emotion to get people engaged.
I love that the article brought up property-based testing. More people need to know about it, which is why I upvoted the article.
That’s a great contribution to the site, thanks :) Appreciate you digging up some good research.
Your first research link appears to provide overwhelming support for the hypothesis that TDD is a helpful practice in software development (which is confusing, since it appears to contradict your point; the intellectual honesty is quite refreshing).
Personally, I’m a much bigger fan of integration-TDD (which I usually use) than unit-TDD (which I only occasionally use). I’d agree with your blog post’s claim that unit-TDD can result in lots of single-use modules.
On the flip side of this, the experience of writing bad tests and then maintaining the result was formative; much of what I understand today about testing came from the work I did as a junior.
This is a sort of well-known folk trick for property testing: you need to design your parameter distributions to hit “small” values often. In this case, as soon as zero is a reasonable point in the parameter space, you should explicitly account for that by giving it positive probability weight:
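For instance, a minimal sketch in Python with the hypothesis library (safe_div is a hypothetical function under test, not anything from the article; the st.* strategies are hypothesis’s API):

    from hypothesis import given, strategies as st

    def safe_div(x, y):
        # Hypothetical function under test: returns None on division by zero.
        return None if y == 0 else x / y

    # Mix an explicit zero into the distribution so the degenerate case
    # is hit often rather than only by luck.
    @given(x=st.integers(), y=st.just(0) | st.integers())
    def test_safe_div_handles_zero(x, y):
        if y == 0:
            assert safe_div(x, y) is None
        else:
            assert safe_div(x, y) == x / y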
Uh, can you link to a simple example of that applied to real code so the less mathematically inclined can see how it works?
For example, the QuickCheck implementation for Erlang will start generating “small” values first and move up to larger ones. The definition of “size” depends on the values generated, but for integers and floats it is proportional to their absolute value. The initial tests are generated with a size of zero, which ensures the usually-special case of inputting zeroes in different places in the test is covered. Same for empty lists, empty strings, etc.
It’s definitely a bit ad hoc, but effective in practice.
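A rough hand-rolled illustration of that size-driven generation in Python (this only mimics the behaviour described above; it is not the Erlang QuickCheck API, and gen_int is made up for the example):

    import random

    def gen_int(size):
        # Magnitude is bounded by the current size, so size 0 always
        # yields 0 and small sizes favour small values.
        return random.randint(-size, size)

    def check_property(prop, sizes=range(0, 50, 5), runs_per_size=20):
        # Start at size zero and grow, so zero-like special cases are
        # always among the first inputs tried.
        for size in sizes:
            for _ in range(runs_per_size):
                value = gen_int(size)
                assert prop(value), f"property failed for {value} at size {size}"

    # Example: absolute value is never negative.
    check_property(lambda n: abs(n) >= 0)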
That makes more sense. Thanks. I used to do something along those lines with numerical tests: I tried a positive number, a zero, a negative, one at the minimum/maximum, and one past that. Even if it should fail, I still wanted to see how it failed. I don’t know if there is a name for that or if it has its own sub-category of research. I just hacked it together to avoid state explosion, hoping something would come out of it.
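For what it’s worth, that checklist (a positive, a zero, a negative, the extremes, and one past each) is easy to pin down as a table-driven test; a sketch in Python with pytest, where clamp and the MIN/MAX bounds are hypothetical:

    import pytest

    MIN, MAX = -128, 127  # hypothetical valid input range

    def clamp(x):
        # Hypothetical function under test: force x into [MIN, MAX].
        return max(MIN, min(MAX, x))

    # A positive, a zero, a negative, the min/max, and one past each.
    @pytest.mark.parametrize("x", [5, 0, -5, MIN, MAX, MIN - 1, MAX + 1])
    def test_clamp_stays_in_range(x):
        assert MIN <= clamp(x) <= MAX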
I had been involved in a project on property-based testing in Erlang (you can get some references from this report). There we explored several approaches to describing tests at a higher level than the common test-case approach. The ones I was most interested in were QuickCheck and model checking. Both of them try to express a property of the software (e.g., for all integers A and B, plus(A, B) must equal plus(B, A)).
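In hypothesis-style Python, that property reads almost verbatim (plus here is just a stand-in for whatever implementation is under test):

    from hypothesis import given, strategies as st

    def plus(a, b):
        # Stand-in for the implementation under test.
        return a + b

    # For all integers A and B, plus(A, B) must equal plus(B, A).
    @given(a=st.integers(), b=st.integers())
    def test_plus_is_commutative(a, b):
        assert plus(a, b) == plus(b, a)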
Model checking (implemented for Erlang in that project as McErlang) is more powerful, but quite hard to implement, and it needs a lot of trickery to cope with the problem-space explosion. QuickCheck takes the pragmatic approach and just randomly samples the search space; the difficulty then turns into sampling efficiently. In practice, QuickCheck discovers a lot of discrepancies between the specs and the implementation with little tweaking.
For selected references:
One of my favourite results is reported in another paper, for which I couldn’t find a non-paywalled copy; they found a bug that had been lurking for quite a while in a core Erlang library (and that was crashing production systems every now and then) with a rather simple test spec: Testing a database for race conditions with QuickCheck
A talk explaining how they found a bug in Riak (one that was actually present in the original Dynamo paper)