Any sort of technical dogma is bad in the extreme, but this seems like throwing the baby out with the bathwater. What’s strange is that the author seems to realize this. In one breath we get
[unit tests] are the tests capable of driving the code design (the original justification for test-first).
and in another
Maybe it was necessary to use test-first as the counterintuitive ram for breaking down the industry’s sorry lack of automated, regression testing
Why is anyone surprised that misapplying a practice (test-first as regression testing) leads to bad things? The complaint about mocking and service objects seems to be in the same spirit. TDD doesn’t mean breaking your back to shoehorn your integration tests into unit tests; mocking is a pain because it’s supposed to be. The difficulty of these things is supposed to motivate a different approach to design, one that doesn’t require you to mock everything and its cousin. This is like complaining that seat-belts make it difficult to jump through your windshield; that’s kind of the point.
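That design pressure is concrete: a class that constructs its collaborators internally can only be tested by mocking them, while one that accepts them can be tested with a trivial hand-rolled fake. A minimal Ruby sketch (all names here are illustrative, not from the talk):

```ruby
# Hard to test: the collaborator is constructed internally, so a test
# would have to mock out Mailer itself.
class Mailer
  def deliver(to, body)
    raise "would send real email"
  end
end

class HardwiredSignup
  def register(email)
    Mailer.new.deliver(email, "Welcome!")
    :registered
  end
end

# Easy to test: the collaborator is injected, so a fake stands in.
class Signup
  def initialize(mailer)
    @mailer = mailer
  end

  def register(email)
    @mailer.deliver(email, "Welcome!")
    :registered
  end
end

# A hand-rolled fake – no mocking library required.
class FakeMailer
  attr_reader :sent

  def initialize
    @sent = []
  end

  def deliver(to, body)
    @sent << [to, body]
  end
end

fake = FakeMailer.new
result = Signup.new(fake).register("a@example.com")
```

When the fake is this easy to write, the mocking pain has done its job: it pushed the dependency out to the seam where it belongs.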
Indeed, it seems that the thing people always forget is that a tool is only as valuable as the benefit it provides, and that benefit depends on context. TDD and BDD are valuable tools: they help me hash out what I want my code to do so that I have a clear picture going in. So do types in languages with good ones. I don’t TDD as hard in Haskell; in fact, I don’t test extensively in Haskell at all, because the type system is there for me and gives me the confidence I need. That’s the rub of it: TDD/BDD/static types/whatever, it’s about knowing (or feeling like you know) that your code is correct. It doesn’t, and shouldn’t, mean anything more than that.
TDD can provide a set of test cases that help prevent regressions, but I still keep a spec/regression suite for the ones I miss. Static types can eliminate a lot of bugs, but I still write some QuickCheck and HUnit tests. Agile processes are valuable for encouraging rapid development, good adaptation skills, and good crisis management, but I still rely on SOPs and documentation where appropriate. The long and short of it is simple: do what works, not what people tell you works. Try things honestly, evaluate them rationally, and use them appropriately.
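For readers unfamiliar with QuickCheck, the idea is property-based testing: generate many random inputs and assert that a property holds for all of them. A rough sketch of the idea in plain Ruby (real QuickCheck also shrinks failing inputs down to minimal counterexamples, which this omits):

```ruby
# Property-based testing in miniature: run a property block against many
# randomly generated integer arrays, returning the first counterexample.
def check_property(trials: 200)
  trials.times do
    input = Array.new(rand(0..20)) { rand(-1000..1000) }
    return [false, input] unless yield(input)
  end
  [true, nil]
end

# Property: sorting is idempotent – sorting a sorted array is a no-op.
ok, counterexample = check_property { |xs| xs.sort.sort == xs.sort }

# Property: reversing twice restores the original array.
ok2, _ = check_property { |xs| xs.reverse.reverse == xs }
```

A single property like this stands in for hundreds of hand-written example cases, which is why it pairs so well with a strong type system rather than replacing it.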
For reference, here’s the keynote DHH presented at RailsConf 2014.
Given what I’ve heard about the pitfalls of outsourcing, I wonder why outsourcing test implementation hasn’t taken greater hold in corporate America. All one would need is two independent companies: one writes the tests, and the other is tasked with “grading” a subset of those tests, assessing the testability of the code under test, and making recommendations for refactoring. If a contract can be written to make the profits of those companies dependent on their performance, this system could eventually be made to work well. It would take some doing, and building of relationships and expectations. It would not be simple, but it could work.
Such a task would fit what I’ve heard about the work patterns, and even the work pathologies, of outsourcing companies in India; it’s a suitable task for first-year hires. Such companies are motivated to produce large amounts of output, but they can also be directed through proper incentives to produce decent output. This would be particularly true if compensation structures included a disincentive for “dead” tests that never find errors.
And yes, such a system can be gamed. Any system can be gamed. There is no substitute for building working relationships amongst good people who want to produce good results.
In my experience, writing a good test is more difficult than writing a good implementation. Perhaps for others it’s the other way around?
I’ve found that the difficulty of writing tests depends on several factors, three of the most important (in my unscientific experience):
The first is largely dealt with by breaking down the problem further, though sometimes you simply cannot: problems which are hard to verify are – indeed – hard to test (testing being essentially just verification by another name). In these cases I don’t think your experience holds; rather, both halves of the problem of writing tested code are difficult. More commonly I suspect that #2 and #3 bite you (as they do me). In particular, #2 is a very difficult problem, because if I take as an a priori requirement that a given API be supplied, then writing the implementation might be utterly trivial, but mapping it to that API takes several intermediate steps.

Take, for instance, the standard MVC architectural pattern. We have at least three layers of abstraction between the client-observed API (the “V”) and the storage API (the data store behind the “M”). Trying to write code that tests the V and involves the M is quite hard, and usually brittle, and therefore quite painful. The mitigation strategy involves either confining tests and assertions to the V (e.g., using something like capybara to poke a UI and then observe changes in the UI to confirm the expected behavior), or using mocks and the like to ensure that stuff which doesn’t result in UI changes at least gets the right set of messages sent down the line.
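That second strategy – asserting on the messages sent down the line rather than on UI changes – can be hand-rolled without a mocking library. A sketch with hypothetical names (RecordingStore and PostsController are illustrative, not real Rails classes):

```ruby
# Records every message sent to it, so a test can assert that the right
# calls reached the model layer without touching a real data store.
class RecordingStore
  attr_reader :messages

  def initialize
    @messages = []
  end

  def method_missing(name, *args)
    @messages << [name, args]
    nil
  end

  def respond_to_missing?(_name, _include_private = false)
    true
  end
end

# Hypothetical controller action: a "click" becomes a store call.
class PostsController
  def initialize(store)
    @store = store
  end

  def create(params)
    @store.insert(:posts, params)
    :redirect
  end
end

store = RecordingStore.new
result = PostsController.new(store).create({ title: "Hello" })
```

The test never crosses all three layers; it pins down only the contract between two adjacent ones, which is what keeps it from being brittle.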
Fundamentally those are just hard problems to solve, but the tests doubtless add value – when they work. They do function as regression catchers (in a limited sense, particularly when refactoring), and they also serve as a way to work through building out the set of API transformations that turn a click into SQL. Depending on your team, your domain, and your preference, sometimes tests make the most sense to provide this ability, sometimes types do, sometimes QA people do; each has costs and benefits.

Tests – I think – stand in between types and QA. Types have a lot of power to make static assertions and prevent you from changing assumptions on the fly – and after all, programming is just managed, repeated assumption and assertion – but they can also constrain you when refactoring if not well designed, preventing you from making changes until all the details are worked out. They can force you into local minima wrt. complexity. QA people, on the other hand, allow you to freely change code, and often let changes ‘sneak past’ which technically break expected behavior, especially unintentionally. They are also open to human error in a way types and tests are not. Tests are a nice middle ground: they allow static assertions without necessarily tying you to preserving every behavior exactly as it was when refactoring (that is, they let you cut corners, as QA does), but at the same time they can be difficult to write, difficult to maintain (especially when written poorly), and – especially – difficult to believe. That is to say, it can be difficult to know that a particular test, especially one with mocks, is actually testing anything. There are tools that help address this problem in some areas, but it is a real problem.
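The ‘difficult to believe’ problem is easiest to see in a tautological mock test – one where the stub supplies the very behavior being asserted, so the test passes whether or not the production code is correct. A contrived Ruby example (the names are made up for illustration):

```ruby
# The stub hard-codes the very rule supposedly under test.
class FakeTaxCalculator
  def tax_for(amount)
    amount * 0.2
  end
end

def total_with_tax(amount, calculator)
  amount + calculator.tax_for(amount)
end

# This "test" exercises only the stub: it passes even if the real
# production tax calculator computes tax completely wrong, because
# asserting on the result only re-checks the stub's own arithmetic.
result = total_with_tax(100, FakeTaxCalculator.new)
```

The assertion `result == 120.0` here verifies nothing about production code; the only real content it checks is one addition.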
I guess my beleaguered point here is that writing a good test can be more difficult than writing a good implementation, but it’s not necessarily the case, nor are the two correlated – that is, a good implementation can sit behind a bad test, a bad implementation can be covered by a good test, and so on. The point of the test is to verify, in some mostly-static sense, that your assumptions and assertions are correct.
I hate this term, or – at least – have come to hate it. Software is not ‘powerful’; it is sometimes ‘capable’ and often ‘incapable’, but capability and power are different concepts, and it’s hard to say that something is narrowly capable using the language of ‘power’. For instance, a DSL can be very capable (and should be, by definition), but it is not necessarily “powerful” (in the sense of ‘power’ as the ability to do general work, rather than specific work).
Response from Uncle Bob - http://blog.8thlight.com/uncle-bob/2014/04/25/MonogamousTDD.html