Every time this one pops up I get an irritated feeling that he is on to something… but on to the wrong something, and then I wish it would just go away.
Even worse, every time it comes up, it tends to get abused to justify some bit of horribleness in my world.
If he spoke about how perverse incentives arise from lost opportunity costs, and how first-mover benefits catastrophically skew market response to technology… that'd be great.
If he spoke about how to design to cope with that painful reality and still recover, that'd be better.
Instead, every blasted time someone tries to fix a current screw-up… "Worse is Better" is trotted out to rationalize, after the fact, the mess greed and stupidity have gotten us into, and then as a lame excuse to leave us in the mess.
I really wish "Worse is Better" would just die and be replaced by articles on YAGNI (yet), or how TDD prevents analysis paralysis, or Rich Hickey's ideas on simplicity, refactoring, API design, or the deep evils of connascent coupling.
These sorts of explanations are a gross sort of technological determinism: “obviously what Industry chose is the better thing, because Industry chose it! What, do you think we’re all idiots or something?” It’s a disgusting appeal to groupthink and anti-intellectualism intended to steer discourse away from design flaws and onto easier realities, namely, “we chose this so it can’t be that bad!”
Maybe I’m old, but I’ve simply added “importance of good software design and architecture” to the “things I try to avoid arguing about on the Internet” list. Mostly because my opponents rarely conjure up something more than, “but, business realities!1!11”
Thanks - all of this, both of you. There is interesting and actionable analysis that can be done, and has been done, on the incentives that affect software architecture. This isn’t it.
I think I will have to respectfully disagree with your assessment here (and that of mattgreenrocks and Irene).
I usually see YAGNI, "premature optimization", and lean methods as reasons cited for leaving us in a mess. People will always find reasons to write shitty software and justify it using selective misquoting of luminaries in the field. ¯\\\_(ツ)\_/¯
Remember, the article is never arguing for writing bad software: even the New Jersey approach as caricatured suggests a degree of planning and craftsmanship. It is, however, arguing for simplicity, and specifically against the one-size-fits-all monoliths that were popular in that community at the time.
We might argue that, in these enlightened times, the lesson is no longer relevant, and that it's only used to justify bad practices. Doing so, though, ignores the increasing baroqueness and elitism of our industry, and pretends that there is no hubris in thinking "This time, with this framework, and this language, we'll do better!"
But the (sad and depressing) fact remains:
C and Unix beat Lisp and more featureful OSes. They will continue to do so for probably another decade or two, because they got the correct balance of engineering goals.
On the web, jQuery and PHP and vanilla JS have powered more sites in more places than the five best frameworks put together–and in ten years, they'll still be in use after the party is over and the VC money has stopped flowing and we can no longer afford to pay developers to intellectually masturbate with statically-typed functional immutable model-view-viewmodel tree-diffing responsive coughmascript-to-ES5-compiled frameworks.
Just for the record, I was at one time a very devoted Lisp fan. I can’t reconstruct that mindset, but I at least know the arguments in its favor, and I still believe that C deserved to “win” (whatever winning means - both still exist, though more lines of code have been written in C).
I am not a student of historical minicomputer OSes, so I don't really know whether Unix deserved to "win" against VMS and whatever the other major alternatives were, but it's hard to imagine they were better in any way that I'd personally care about today. The features and architectural philosophies that differentiated what were, at the time, < 50 kloc projects feel at best quaint today. The Linux kernel is now 15 mloc, and honestly the parts everyone thinks of as emblematic of what Unix is are in most cases the problem parts that need to be removed.
There was a thread here recently where several of us shared complaints about the “everything is a file” idea - not that uniform interfaces are bad, but the excessive literalness with which that has been interpreted through the decades has become a serious design flaw.
Another early Unix idea - and a genuinely novel and fascinating one, at the time - was composition of simple tools. I agree with that goal! And pipelines based on stream redirection were actually a really clever and appropriate way to do that, for the relatively restricted variety of tasks people used computers for when they were invented. But we’ve moved far past the point where unstructured text is an appropriate interchange format for most things we use computers for. In fact, it’s been a long time since there was even any attempt to make the invocation of command-line tools coherent at all; each program invents its own flag grammar, and nobody ever cleans up bad naming schemes or inconsistent semantics. Git is a popular offender but, really, most tools are at least as bad.
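The composition idea still demos well in one line. Here's a minimal sketch (the user/shell data is invented purely for illustration) of the classic pipeline pattern, which also shows exactly where the unstructured-text glue gets brittle:

```shell
# Classic Unix composition: small tools glued by an untyped text stream.
# The inline data below is made up for this example.
# awk picks a "column" by guessing at whitespace; uniq -c counts
# duplicates (it only collapses adjacent lines, hence the first sort).
printf 'alice bash\nbob zsh\ncarol bash\n' |
  awk '{print $2}' |
  sort | uniq -c | sort -rn

# This works only while every stage agrees on "whitespace-separated
# lines". A field containing a space silently breaks the parse --
# the brittleness complained about above, with no type error to warn you.
```

Each stage knows nothing about the others; the entire contract is "lines of text", which is both why this composes so freely and why it fails quietly when the text stops being regular.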
So I just can’t really buy the narrative that it matters that C and Unix achieved dominant mindshare. Most of the ideas in both of them date from long after their victories.
A lot of hand-wringing of this nature is really just people being emotionally attached to the idea that a decision that felt important at the time had a lasting impact beyond the next couple of years. Pardon my cynicism. :)