I found the subjective and anecdotal evidence in this a little unconvincing. The bravado (“I worked at…”, “I had written…”) also doesn’t inspire much confidence in the author.
Suffers from much the same problem as every rewrite story. They failed, I succeeded, therefore some random decision I made is responsible. I will tell you which decision you should think it was, but not provide sufficient information to verify my assessment. In particular, I will tell you about the stupid decisions made by those other morons, but not explain their reasoning.
100 times this!
Experience inspires more confidence in my heart than theorems do, though of course the author could be making everything up. It’d be nice to have more detail, though.
For example, by “purely functional” Prolog, do they just mean that they didn’t use assert and retract (making it stateless), or did they additionally restrict their use of Prolog predicates to a functional pattern, where backtracking was banned and one of the arguments of each predicate was used as a “return value”, and the others were always bound? If backtracking wasn’t banned, was cut banned? How about negation?
In its current form, this article simply says, “Wakalixes actually do matter. They actually do allow you to program better, faster and more cleanly.” Reading it will probably not help you to be a better programmer, unless you happen to be committing one of the beginner’s blunders the author calls out in the article, and even in that case it doesn’t tell you what you should be doing instead.
A problem with the credibility of the author’s claims for the wakalixes is that it’s hard to separate claims for the effectiveness of a language, or even a programming paradigm, from the effectiveness of the programmers who were programming in it, and especially from the effectiveness of the social environment they were embedded in.
Presumably if the 250 Java programmers working for the big-six firm couldn’t figure out how to reimplement a neural-network image classifier that one person implemented “on spec” after ten years, it’s not because they weren’t programming functionally — as the author pointed out, you can program functionally in Java; it’s because they weren’t making progress, probably because of mismanagement. (Or it might be because the author was just extremely lucky and chanced on such a great set of system parameters that 2500 person-years wasn’t sufficient to chance on it again, but I doubt it.) In ten years you can learn a lot about neural networks and image processing, and you can try a lot of different things. With a reasonable neural-network toolkit, which should take less than a year to build, you should be able to try about five or six million carefully-thought-out programming experiments. (Figure 250 programmers over the remaining nine years is well over half a million working days; at ten experiments a day, that’s five or six million.)
A more likely culprit there is that they tried to plan out the solution of a problem that they didn’t know how to solve, which is a mistake that people often make even when they know better. As our own michaelochurch said, “Industrial management has a 200-year history of successfully adding value by reducing variance, because in a concave world, low variance and high output go together. In a convex world (as in software) it’s the opposite: the least variance is in the territory where you don’t want to be anyway. Convexity is a massive game-changer. It renders the old, control-based, management regime worse than useless; in fact, it becomes counterproductive.”
What would that kind of mismanagement look like in this case? Mismanagement by variance reduction here would probably involve optimizing the process to improve the chance that any given attempt would succeed, by putting lots of programmers on it and giving them lots of time, with the consequence that maybe in ten years they investigated three or four things that didn’t work, instead of five million.
Trying five million experiments isn’t enough, of course. You have to focus your efforts on things that might work, and learn as much as possible from each experiment. But speeding up the process of trial and error is a huge advantage, not just because you get more trials in, but because, due to hyperbolic discounting, the lessons of a quick experiment are much more memorable than the lessons of a slow one, in more or less direct proportion to their speed.
Also, if most of your experiments are going to fail — as they should, to maximize variance and the chance of netting a unicorn — experiments that fail quickly are much less demoralizing than experiments that take a long time to fail. (And of course you want to minimize people’s incentive to whitewash the results. Especially people with high prestige. For example, failed experiments often drag on for years because managers are afraid of losing headcount.)
As this process continues, how do you shift resources toward the more promising exploration directions while continuing to devote substantial effort to the dark horses? In a sense, the traditional CYA management approach errs in the direction of overfocusing on the most promising candidates. It turns out there is actually a bunch of applicable research: some of it, like multi-armed bandit algorithms, is actually being applied at some companies to the problem of managing R&D and has a robust management-research literature, while other parts, like A* search, are overlooked, as far as I can tell.
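To make the bandit framing concrete, here is a minimal sketch of allocating an experiment budget across research directions. Everything in it is my own illustrative assumption — the epsilon-greedy strategy, the function name, and the payoff numbers are not from the article; real R&D allocation would use something more refined (UCB, Thompson sampling):

```python
import random

def epsilon_greedy_allocation(payoffs, trials=10_000, epsilon=0.2, seed=0):
    """Allocate experiments across research directions epsilon-greedily.

    payoffs: a success probability per direction, unknown to the
    allocator and used only to simulate experiment outcomes.
    Returns (estimated payoff per direction, experiments run per direction).
    """
    rng = random.Random(seed)
    n = len(payoffs)
    counts = [0] * n      # experiments run on each direction so far
    means = [0.0] * n     # running estimate of each direction's payoff
    for _ in range(trials):
        if rng.random() < epsilon:
            arm = rng.randrange(n)  # explore: fund a dark horse at random
        else:
            arm = max(range(n), key=lambda i: means[i])  # exploit the leader
        reward = 1.0 if rng.random() < payoffs[arm] else 0.0
        counts[arm] += 1
        means[arm] += (reward - means[arm]) / counts[arm]  # incremental mean
    return means, counts

# Three hypothetical directions; the third is secretly the most promising.
means, counts = epsilon_greedy_allocation([0.02, 0.05, 0.30])
```

With these example payoffs the allocator ends up spending the bulk of its budget on the third direction once its estimate pulls ahead, while the epsilon fraction keeps probing the others — which is exactly the promising-candidates-versus-dark-horses tradeoff being described.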
And all of this has only a limited amount to do with your programming paradigm. You can iterate quickly in Java, you can iterate quickly in assembly, and you can iterate quickly in Haskell. The obstacles are different in each case, but you can do it.
I agree with kragen: I’m fine with this sort of argument being based on experience; what else could it possibly be based on?
I have another problem with it. The author claims to have been working in computing for… well, I don’t feel like trying to track down their LinkedIn as they suggest, especially since their name doesn’t appear to be anywhere on the site itself, but presumably “longer than you’ve been alive” is supposed to mean several decades at least. I resent both the assumption and the ageism there, but I suppose that’s irrelevant.
But anyone with that much experience is going to be able to solve much harder problems than people with dramatically less. Even the author themself would have to do serious introspection to have any confidence that their efficacy is due to the choice of paradigm rather than to the experience. And, frankly, I don’t believe it: I’m confident that, all else being equal and apart from any difficulty caused by being annoyed about it, the author could do these dramatic rewrites in any paradigm.
Also, of course, they don’t say anything about how maintainable people found their rewrites, after they’d moved on to the next one. The reduction in number of lines is suggestive, but it seems like we’re supposed to take it for granted that these were substantial improvements, when all that’s really being claimed is that they were successful replacements.
I’m a big fan of functional programming, and actually for a lot of the reasons the author alludes to. They just haven’t demonstrated a connection.
Perhaps more interesting is that someone with several decades of experience never worked on a project that failed…
We learn a lot from failure. Perhaps most importantly, we learn what to learn from our successes. I worry that someone who has never failed doesn’t know why they succeed.
It’s unfortunate that mostly only successes get written up. There’s a lot of selection bias in the stories we read.
It’s unfortunate that mostly only successes get written up.
In some engineering fields failures do get written up extensively, even more than successes, but in others I agree with your assessment. Failures in aerospace and civil engineering especially get a lot of study, partly because regulations require a detailed investigation, and partly because they’re spectacular enough to captivate public attention. Things like the Challenger explosion, the Tacoma Narrows bridge collapse, Apollo 13, the Titanic, etc. are probably as famous as any successes in those fields, and far more pored over by both scientists and popular documentaries. (Engineering curricula include a lot of this kind of historical postmortem study, too.)
Is there a list of canonical interesting failures in computing? The Intel Pentium FDIV (division) bug is probably the one I’ve seen mentioned most often in that regard.
There are a few canonical examples of failed software projects I remember, probably from a software engineering course. The definition of failure varies: from being killed before release or deployment, to being obscenely late and over budget, to being deployed but having costly and/or dangerous bugs.
The ones I remember off the top of my head were the Ariane 5 rocket, the Therac-25 radiation therapy machine, and the Denver airport baggage-handling system. Those were all old enough to be in a textbook 15 years ago, though. I wonder if there’s a good collection of newer ones, in particular of the kind that cause failures in large distributed systems.
The big fail that I recall is the Chrysler payroll system.
In that it was heralded as a kind of XP-to-agile flagship, whatever, and then just dropped into nothingness without a good failure writeup; just that wiki page that sort of acted as a living document of folks asking “what happened?”
I don’t think the CCC failure was particularly unusual; the old figure was that about ⅔ of big software projects like that fail. One unusual thing about it is that, due to their focus on incremental delivery, it had already been deployed and was doing a substantial fraction of Chrysler’s payroll before being canceled.
what else could it possibly be based on?
I’d feel more comfortable with the article if:
You could read the exact same article from your average OO enterprise veteran technical architect.
I’d also be super interested to see how the author’s co-workers viewed his work. In my experience this type of humblebrag comes from your run-of-the-mill hero developer.
That’s fair. I agree with all of these points.
The author claims to have been working in computing for…
The email address on the contact page suggests the author is Douglas Michael Auclair. In a former life (?) he maintained Marlais, the Dylan interpreter.
He’s here on LinkedIn.
Thanks. sigh To be clear, I do not doubt his anecdotes, as far as they go. It was definitely jarring to realize there was no “about the author” anywhere on the blog, and yet he was making that appeal. But it’s not the kind of thing someone would bother to make up.
I agree with your statement. Is this way of promoting functional programming really needed nowadays? Consider that no matter how many valid and provable arguments you provide to an established status quo (the imperative OO C++ community, toxicity and all), the only way you are really going to sway them to your side is by producing code that outperforms their solutions, both in developer scalability and in the actual execution of the resulting binaries. It has been proven that this can happen, so why are we going over this again through personal viewpoints?
The problem we are having with modern functional programming languages is that, because legacy companies base their success on legacy code written in legacy languages, they need to keep the counterproductive rhetoric alive. This is a very twisted side effect of inertia; we should not feel compelled to reply every time anymore.
edit: typos, more clarity :)
Unfortunately, I find that language/approach evangelism is hard to do in a principled, scientific way, because it fundamentally isn’t science in most cases. It’s business. And if you stick to conservative arguments supported by evidence (evidence from experiments that you’ll almost never be allowed the time necessary to perform, so you’ll have to use what’s already on the ground) then you’re often going to lose against an opposition inclined to dishonest arguments (e.g. “if we use Haskell instead of Java, we won’t be able to hire anyone!!!!111”) and phony existential risk.
The OP, at least, can convincingly tell a personal story of success that he owes to functional programming. Is it scientific proof of the “superiority” of FP? No, of course not. It’s still much more useful than a lot of what pops up in the discourse around PL choice.
This guy’s posts are consistently good.