This is a look into a time when infrastructure was perceived differently. Note how many compilers were written! It’s also a cautionary tale about veering too far from ‘industry-standard’ tooling. Big organizations always have a political element to them, and technical excellence is usually not politically viable.
My best hope at this point is that the dotcom crash will do to Java what AI winter did to Lisp, and we may eventually emerge from “dotcom winter” into a saner world. But I wouldn’t bet on it.
We appear to be repeating history here already. Must we always suffer network effects? Do we really need that much social proof for the tools we use day in and day out?
From the article:
The problem with [the argument of using ‘best practices’] is twofold: first, we’re confusing best practice with standard practice. The two are not the same. And second, we’re assuming that best (or even standard) practice is an invariant with respect to the task, that the best way to write a word processor is also the best way to write a spacecraft control system. It isn’t.
His Google saga (briefly mentioned in the poster’s link) is also worth a read: http://blog.rongarret.info/2009/12/xooglers-rises-from-ashes.html