I see a threshold problem here. I am mildly sympathetic to the view that a simple and readily available solution has merit over “doing it right the first time”.
Still, there is a lower bound. Let me illustrate: I recently visited a German client who wanted to implement full-text search. Their main DB is MongoDB, so they suggested using its full-text indices. Sadly, MongoDB full-text search is unfit for any serious use. While the index implementation is alright, the text processing breaks on trivial cases (umlauts and other diacritics, for example, which aren’t properly downcased), is very inflexible (there is one standard stemming and tokenization implementation, with no options to choose from), and the ceiling comes very fast. The documentation is unclear about what exactly works and what doesn’t. We hit that ceiling within an hour of evaluation and determined it, in its current state, to be unfit for any use. All the problems we found also land squarely in the “it doesn’t crash, it just doesn’t show up” category, which is quite hard to debug.
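To make the diacritics point concrete, here is a rough sketch (plain Python, not tied to MongoDB; the function name and details are mine, purely for illustration) of the kind of Unicode case folding and accent stripping a search engine’s analyzer is expected to perform before indexing or matching:

```python
import unicodedata

def fold(term: str) -> str:
    """Case-fold a term and strip combining marks, as a text analyzer should."""
    # casefold() applies full Unicode case mapping, e.g. "ß" -> "ss",
    # which a naive ASCII-only lowercasing step gets wrong.
    folded = term.casefold()
    # Decompose accented characters (NFD), then drop the combining marks
    # (category "Mn"), so "ü" can match a plain "u" at query time.
    decomposed = unicodedata.normalize("NFD", folded)
    return "".join(c for c in decomposed if unicodedata.category(c) != "Mn")

print(fold("Müller"))  # muller
print(fold("Straße"))  # strasse
```

If an engine skips either step, queries for “muller” silently miss documents containing “Müller” — exactly the “it just doesn’t show up” failure mode described above.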
Problem: if you are not into full text search (or any complex problem) and don’t know a lot about natural language (another complex problem), how do you evaluate “worse”? On which scale? Assume you neither know what a good solution looks like nor how to evaluate the badness.
The proper solution is to read up on the problem space and then come up with a solution, but that approach doesn’t fit the “worse is better” idea.
I doubt that “worse is better” has any predictive power. It’s mostly a post-hoc rationalisation.
This seems like a specious argument that could be applied to any language or technology you happen not to like (e.g. the author’s claim that Scala is “worse is better”).
The portfolio analogy is wrong; it only holds in financial markets because of supply and demand. If investing in company A generates poor returns, fewer investors will buy in and the price will drop until expected returns become proportionate again. No such correction happens with “investment” in cutting-edge technologies that don’t actually work; if 9 companies try a big rewrite in the technology du jour and go bankrupt doing so, it’s still just as expensive for the 10th company to do the same.
Software is subject to a purer form of evolution than other professions, because the overheads of a software company are so low. An aircraft manufacturer can afford to forgo a few months or years of incremental improvements while they work on a completely new design - incremental improvements wouldn’t do much good when the overhead of producing a new aeroplane model is so large - so they might as well go back to the drawing board for that model. A tech startup that stopped updating their software for a couple of years would be eaten alive. (Put another way, the appropriate development methodology changes depending on the cost of the release cycle. The longer your test feedback loop is, the more up-front design is appropriate. But in software that loop is going to zero.)
Worse is still better, and I can draw a direct correlation in the companies I’ve worked for between how much they embraced this and how successful they were.
His pain is authentic but I am unconvinced that his solution is practical.