We sat down and looked at our hardware, and examined our data, and thought about how to use the one to transform the other. We tinkered, and measured, and read, and compared, and wrote, and refined, and modified, and measured again, over and over, until we found we had built the same thing, but 10 times faster and incomparably more useful to the people we designed it for. And we had built it by hand.
I used too much slow software in the 80s to really believe in this lost golden age, but more importantly:
OK, you’ve identified a problem in software. Assume it is not the result of total incompetence on the part of everyone that is not you and your friends. What does accepting this problem get people? What’s the tradeoff? Extend a little charity.
I think the answer is probably cost: having devs do all the things described above takes time. When software's not inventing totally new ways to do things, it's making users far more productive than they would be, say, shuffling papers around. It's more valuable to get more functionality covered by slow software than it is to make a marginal improvement to users' efficiency with existing software.
If the authors think they’ve found low-hanging fruit that gives them performant software at an attractive cost, great, say that. (I’m thinking a lot about that in terms of correctness and reliability.) But pining for a mythical lost golden age is bad rhetoric and worse philosophy.
I found the article interesting and inspiring, but I don’t think I’m any the wiser…
I think it is meant to be a rallying cry/rallying point rather than educational. Attaching a name to an idea gives people a place to focus their enthusiasm.
I can’t upvote this enough.
So are we only supposed to write in machine code? I don’t quite understand what the author is advocating here…