The way I fix bugs is to write a test that exercises the bug and asserts the correct behavior.
I do this in any language – Haskell, JS, whatever. Doesn’t really matter.
Writing tests upfront? Depends on what I’m doing. I can get away with not writing a lot of obvious tests, but if it seems tricky I may write upfront tests anyway.
For example, what class of errors can we detect with tests, and perhaps as importantly, which do we actually catch? I’ve seen codebases littered with assertions and tests which verify that something explodes in the expected way if some constructor receives a null argument. OK, are unit tests the best way to approach that? Even short of using a language with no nulls, could we do better by adopting some form of gradual typing? (A solution which may actually apply to JS, for example.)
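To make that concrete, here’s a minimal TypeScript sketch (a hypothetical `User` class, assuming `strictNullChecks` is on) where the type system, rather than a unit test, rules out the null constructor argument, and any place a null can still leak in from untyped JS is forced to handle it up front:

```typescript
// Hypothetical User class: instead of asserting at runtime that `name`
// is not null (and unit-testing that assertion), the type forbids null
// outright - with strictNullChecks, `null` is not assignable to `string`.
class User {
  constructor(public readonly name: string) {}
}

// new User(null);  // rejected at compile time - no test needed

// Where a null can still arrive (e.g. from legacy untyped JS), the type
// forces the caller to deal with it before construction:
function fromLegacy(name: string | null): User | null {
  return name === null ? null : new User(name);
}

console.log(fromLegacy("Ada")?.name); // prints Ada
console.log(fromLegacy(null));        // prints null
```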
Let’s move on and assume that we’ve (somehow) eliminated the class of errors which could be categorised as “simple mechanical”, such as the previous. We’ll never get invalid types of things, or non-things, passed to functions which expect specific things. Now we might be into places which are more awkward: what does happen to my address validation function if I pass it a string of length 2^16? An empty string? A string containing control characters? A string containing unusual Unicode planes? We could reach for unit testing again, but could we ever really feel like we’ve written enough tests? No, something like property-based testing is much more likely to be valuable than our manual efforts.
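A rough sketch of what that looks like in practice. The `validateAddress` below is a hypothetical stand-in for the validator discussed above, and the hand-rolled generator is a poor man’s version of what a library like fast-check or QuickCheck does properly — the point is that we assert invariants over thousands of random inputs (empty strings, control characters, astral-plane code points) rather than hand-picking cases:

```typescript
// Hypothetical validator under test: accepts non-empty strings of at
// most 100 UTF-16 code units containing no control characters.
function validateAddress(s: string): boolean {
  return s.length > 0 && s.length <= 100 && !/[\u0000-\u001f]/.test(s);
}

// Hand-rolled generator: random strings of length 0..299, mixing
// printable ASCII, control characters, and astral-plane code points.
function randomString(): string {
  const len = Math.floor(Math.random() * 300);
  let out = "";
  for (let i = 0; i < len; i++) {
    const bucket = Math.random();
    const cp =
      bucket < 0.8 ? 32 + Math.floor(Math.random() * 95)      // printable ASCII
      : bucket < 0.9 ? Math.floor(Math.random() * 32)          // control chars
      : 0x1f600 + Math.floor(Math.random() * 80);              // emoji block
    out += String.fromCodePoint(cp);
  }
  return out;
}

// Properties that must hold for *every* input, not just chosen examples.
for (let i = 0; i < 1000; i++) {
  const s = randomString();
  const ok = validateAddress(s);       // property 1: never throws
  console.assert(typeof ok === "boolean", "must return a boolean");
  if (s.length === 0) {
    console.assert(!ok, "empty string must be rejected"); // property 2
  }
}
```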
Now we’re probably in the world where “simple and complex mechanical” errors are removed (to some degree). Next, operational errors. Are we ever likely to catch complex concurrency errors with unit testing? Probably not; most people assume unit tests to be isolated, single-threaded, etc., and so by definition we’re not going to be finding races or interesting edges. That’s really got to be some level of integration/system test.
What does this leave us with? Well, we’ve still got actual business logic, of course. Unit tests now, then? Hmmm. Maybe… Who’s writing them? The same person as writing the code? OK. Am I likely to come up with the logical inconsistencies to write tests for, if I didn’t when writing the code I’m testing (regardless of the order I write the code/tests!)? Indeed, how well do I understand what I’m building? If there’s a specification, then I’m probably better off trying to move things back towards mechanical verification of compliance (perhaps even proofs in tractable cases). If there isn’t, then what we’re testing is my understanding of the requirements - am I able to write tests which highlight my lack of understanding? Probably we reach the realm of philosophy now, but…
OK, so this is a bit tongue in cheek. Of course unit testing can have value. But I see far too much uncritical valuation of testing, particularly unit testing, without true consideration of the possibilities or flaws. Testing is hard and a lot of modern testing theory promises to reduce it to mere mechanical effort. It never has been and never will be if it’s going to have lasting value.
If there’s a specification, then I’m probably better off trying to move things back towards mechanical verification of compliance (perhaps even proofs in tractable cases). If there isn’t, then what we’re testing is my understanding of the requirements - am I able to write tests which highlight my lack of understanding?
I’ve worked on a project with dedicated BAs who could always tell me exactly what should happen in any specific scenario, but whenever they tried to write a specification it didn’t correspond to their examples. In the end I just asked them for examples and wrote the simplest code that made those test cases pass. So maybe the use case is when the domain itself isn’t formally understood?
Though it never turned out to be more than a week or two of headache, I have sometimes found myself given examples by a client that had the flavor of the voting paradox. There’d be examples like “if such and such happens, then Concern A trumps Concern B,” “but if so and so happens, then Concern B trumps Concern A,” and so on. Pretty soon we’d arrive at a result that the client found really unexpected, and he’d point out the bug. Then we’d walk back through some examples to show that, nope, this was consistent with the (implied) rules. I have come to prefer to deduce from principle rather than induce by example, if for no other reason than it seems to cut to the chase a lot faster. But, like you said, it doesn’t help if the domain isn’t really understood.
This is the same way I feel about the assurances that good type systems give. Didn’t believe it for years, felt alternative solutions were adequate, didn’t see the added value, thought it wasn’t worth the hassle. I’m polishing a blog post about this for tomorrow. :)
We’re tomorrow now, where’s that post? :P
More seriously, I have had a very similar road to you; for years I was a Python guy and thought that unit tests were enough to guarantee correctness, but I kept having bugs and getting bitten again and again. Then I saw this video by Yaron Minsky on OCaml where he said that an important mantra of good ML programming was to make illegal states unrepresentable. Illegal states were the cause of many problems in my own code, so I got around to understanding the value of a good type system. These days, my first reflex when creating a new data structure is not “how am I going to test this?”, rather it’s “how am I going to disallow as many invalid states as possible?” Very often this means that my unit-testing job is reduced by a lot, so win-win!
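Minsky’s examples are in OCaml, but the same idea carries over to TypeScript with discriminated unions. A small sketch (the `Connection` type and field names here are hypothetical, just to illustrate the pattern): the naive shape `{ connected: boolean; socketId?: number }` admits the illegal state of being connected with no socket, while the union below makes that state unrepresentable.

```typescript
// A connection is either disconnected, or connected *with* a socket.
// There is no way to construct a "connected but no socket" value.
type Connection =
  | { state: "disconnected" }
  | { state: "connected"; socketId: number };

function describeConnection(c: Connection): string {
  // The compiler guarantees socketId exists exactly when state is "connected",
  // so no defensive check (or unit test for the missing-socket case) is needed.
  return c.state === "connected"
    ? `connected on socket ${c.socketId}`
    : "disconnected";
}

console.log(describeConnection({ state: "connected", socketId: 7 }));
// prints: connected on socket 7
```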
I can’t upvote that enough, and it’s what I was getting at in the first and second paras of my comment above. The single most important factor in the increase in quality of the work I’ve done over the last …mumble… years has been the change in focus to moving problems “into infrastructure”, where what “infrastructure” is depends on context. In this case it’s the language itself: how can I use it to put these problems outside the scope of things I have to care about? Null reference exceptions can go away if I do X. Type errors can go away if I do Y. Etc.
The same applies to other things. How can I make invalid machine states unrepresentable? Immutable infrastructure. Ban modification. Functional provisioning. Etc. They all fall under the heading “move problems into infrastructure - solve them once, outside of my scope”.
As you rightly say, the first thing people should be thinking is “how can I create a world where problems are impossible” and not “how do I deal with these problems”.
Still typing. It’s coming together very soon.
Unit tests are a cost to the customer, paying for software that is never delivered to them.
I wonder if increasing the cost of building software pushes more projects towards failure - which leads people to look at successful software using unit testing and conclude, “Ha, look, unit tests made this project successful.”
Do you wonder why all fat people are jolly? The sad ones die early.