I’m not sure I would phrase it as the web, Stack Overflow, et al. allowing developers to know 10x as much, versus requiring developers to know 10x as much. The tools that could have been used to reduce accidental complexity have instead been used to cope with more of it.
This is a decent essay with a few good points that I mostly don’t believe. The (self-admittedly weak) case the author puts forth about what could offer great productivity gains isn’t compelling, because people do use those things and continue to have similar problems. Yes, the web has made information a lot easier to get. At the same time, we’re doing a lot more. It’s not just apps being shipped; it’s apps on huge, distributed infrastructures being used by a lot more people.
I did like the notion that accidental complexity is much more prevalent than originally thought. I think there’s a decent case there.
One thing that irks me is the insinuation that things like automated testing and garbage collection are new developments. The author says that there have “been other improvements in software development since 1986”, then goes on to list distributed version control, garbage collection, and “Agile” as new-ish things. Automated testing has been around since the 1960s. Same for garbage collection. People even used them! Debugging and development books from the 1970s certainly talk about how to go about using these things. It’s just that it was really, really expensive, so it wasn’t widespread (“We ain’t got time to test!”). The key difference has been the advancement of hardware, which finally made it practical to do these things at scale.
Indeed, I can’t imagine Brooks’s team made OS/360 without automated testing. And I did plenty of it at Apple in the early ’90s without anyone thinking it was some kind of genius innovation.
I liked the essay, but I agree with other lobsters that the mentioned technologies - the World Wide Web and automated testing - do not lead to 10x improvements. 10x is huge - Stack Overflow is useful, but do we really believe that one developer with today’s StackOverflow will generally beat a team of, say, 9 developers and a “librarian”? Or one developer with today’s testing tools versus a team of, say, 5 developers and 5 (manual) testers?
This is absolutely worth reading. I think the discussion of how “No Silver Bullet” gets misinterpreted is spot on, and I agree that the author is onto something in arguing that Brooks underestimated the amount of accidental complexity that remained.
I’m convinced that statically typed functional programming is not an order of magnitude improvement, though I think it plausibly could be a smaller improvement.
More generally, I think this post goes wrong by looking for discrete choices that are themselves order-of-magnitude improvements. Instead, I think there will be dozens of improvements, none of which is individually anywhere close to an order of magnitude, but which can be multiplied together into an order-of-magnitude improvement.
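To make the compounding concrete, here’s a back-of-the-envelope sketch. The specific numbers (25 improvements, each worth 10%) are made up purely for illustration:

```python
# Hypothetical: 25 independent improvements, each a modest 10% gain.
# None is anywhere near a "silver bullet" on its own.
gains = [1.10] * 25

combined = 1.0
for g in gains:
    combined *= g  # small gains multiply rather than add

print(f"combined speedup: {combined:.1f}x")  # prints "combined speedup: 10.8x"
```

So roughly two dozen unremarkable 10% wins, if they actually stack, get you past 10x - which is the shape of improvement I’d expect, rather than one big discrete choice.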