I think that in order to apply Brooks’ arguments, we need to look at what we are trying to solve for our users. It seems to me like the counter-examples in the article are just chunks of accidental complexity, plucked from larger user-facing tasks with dominating essential complexity. I would be more convinced if there were a third counter-example: a larger feature/project that took into account the entire software engineering process (from requirements gathering to delivered product).
I’m very critical of the paper, but I think this is missing part of Fred Brooks’s thesis statement:

“There is no single development, in either technology or management technique, which by itself promises even one order-of-magnitude improvement within a decade in productivity, in reliability, in simplicity.”
I do agree that we’ve made significant improvements to accidental complexity, but these examples are all the product of many decades of improvements.
I still think that almost all of my job is dealing with accidental complexity, and only a tiny percentage of my time is dedicated to essential complexity. I think in some cases we’ve created tools only to swap one kind of accidental complexity for another!
I don’t necessarily agree with Brooks either, but the two specific examples in the article are things that, at the time Brooks wrote the essay, wouldn’t have been considered programming tasks at all. So it doesn’t seem like a very convincing refutation to me.
In 1986, both of the listed problems were considered programming tasks. The first would have been considered a data warehousing, business intelligence, and networking project, for which intense amounts of programming were done. As for the second problem, inventory management projects were often gigantic software projects, and RPG III was a major programming language back then, designed specifically for reports. And if you want to be pedantic about it, computer visualization of data (like automated plotting) existed, but it was way more of a big deal.
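Just to make the contrast concrete, here’s a minimal sketch (in Python, with a made-up endpoint and field names) of the fetch-and-summarize style of task the article’s first example describes; in 1986 this would have been a serious networking-plus-reporting project, while today the standard library does nearly all of the heavy lifting:

```python
# Sketch only: the URL and the "region"/"amount" fields are hypothetical.
import json
import urllib.request
from collections import Counter

# Fetch a JSON document over HTTP. In 1986, reliably moving these bytes
# between machines would have been a project in its own right.
with urllib.request.urlopen("https://example.com/api/orders.json") as resp:
    orders = json.load(resp)

# Aggregate revenue per region -- the kind of summary that dedicated
# reporting systems (and languages like RPG III) were built for.
totals = Counter()
for order in orders:
    totals[order["region"]] += order["amount"]

for region, amount in totals.most_common():
    print(f"{region}\t{amount:,.2f}")
```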
I really like the idea of giving programmer tools the goal of reducing “friction”, in contrast to achieving speedups. It’s not always about saving time on a task (‘increase productivity’); it’s about getting you to even consider doing the useful task in the first place. It sounds like the core of the refutation here is the idea of productivity itself, which has evolved far beyond “minimize time-spent-on-task” since Brooks made his argument.
Seems like Brooks was right that there wasn’t one single big idea (like object orientation or static analysis) that would make programming easy, and that programs embodied hard-earned knowledge. But he didn’t predict how much of that embodied knowledge we would end up sharing: a modern language’s stdlib and runtime would have been hard to imagine back then, much less everything available as open source. (If anything, open source might’ve been the biggest single idea that worked!) FWIW, it seems almost as possible to share domain knowledge as ‘purely’ technical knowledge; e.g., we share chess engines, not just databases.
It’s also notable to me how language design moved in fits and starts: not all the new ideas were good ones, and not all the good ideas were in successful projects. Tons of effort in hardware and implementations helped make new things work (consider the progress of GC over decades, or of the Web platform, or the continuous evolution process of many languages). To the extent that changes in languages/compilers/runtimes do help coders be more productive, they represent a slog and an enormous investment of person-hours over years, not a silver bullet.
And goalposts move: once some problem is solved, you might either be expected to do it quicker and move on to other stuff, or to do some harder but somehow better variation of it. Once, a big fast database was tricky; now, data spread across a big fleet of boxes is kinda tricky (but less so than a few years ago!); next, who knows, making ML models do your bidding might be tricky. That doesn’t mean all progress in tech is for nothing, but it does tend to mean it doesn’t necessarily translate into relaxation time for us (alas).
I buy it. Having read the whole thing, I’d say Brooks’s book is not very good, for the most part. It has good parts.
To me, the linked article “A Spellchecker Used to Be a Major Feat of Software Engineering” immediately clinches it: that’s easily two or three orders of magnitude of improvement since NSB was written.
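For a sense of how far that’s come: here’s a minimal toy-spellchecker sketch in Python (assuming a Unix-style word list at /usr/share/dict/words), where the stdlib’s difflib handles the fuzzy matching that once demanded bespoke data structures and memory tricks:

```python
# Toy sketch only: assumes a word list exists at /usr/share/dict/words.
import difflib

with open("/usr/share/dict/words") as f:
    dictionary = {line.strip().lower() for line in f}

def suggest(word: str, n: int = 3) -> list[str]:
    """Return up to n suggestions for a misspelled word (empty if correct)."""
    if word.lower() in dictionary:
        return []
    # difflib's fuzzy matching replaces what used to be hand-rolled
    # hashing and affix-stripping tricks squeezed into tiny memory.
    return difflib.get_close_matches(word.lower(), dictionary, n=n)

print(suggest("recieve"))  # suggestions will vary with the local word list
```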
I think Brooks was mainly thinking about code to control hardware and pure software algorithms (remember, he worked for IBM, on operating system projects). That sort of thing is about as efficient as it can get, in most cases.