I like functional languages as much as the next dev, but I don’t think this article offers fair criticisms.
(Also, for fuck’s sake, it’s the “Diamond Inheritance Problem”, not the “Triangle Inheritance Problem”, that fucking hack.)
The contrived example of the array class claims that the author must know how the base class is implemented, but frankly the derived class’s methods shouldn’t have been explicitly calling the super’s methods anyway–at least not in the case of addAll, which could’ve worked by simply calling add instead of super.add. The whole claim of “but but but you have to know about the super class” only holds because, well, they decided to explicitly couple to the super class. It should be obvious that implementing addAll in the derived class in ignorance of the add method in that same class is sloppy design.
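To make that concrete, here’s a minimal Java sketch (class and method names are mine, not the article’s): the derived class routes addAll through its own add, so it never needs to know anything about the base class’s internals.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical stand-in for the article's base class.
class SimpleArray<T> {
    private final List<T> items = new ArrayList<>();
    public void add(T item) { items.add(item); }
    public int size() { return items.size(); }
}

// Derived class that counts additions. addAll goes through this class's
// own add(), so it cannot be broken by base-class implementation details.
class CountingArray<T> extends SimpleArray<T> {
    private int addCount = 0;

    @Override
    public void add(T item) {
        addCount++;
        super.add(item); // the one legitimate super call: delegate storage
    }

    // No super.addAll here–just reuse our own add().
    public void addAll(List<T> newItems) {
        for (T item : newItems) {
            add(item);
        }
    }

    public int getAddCount() { return addCount; }
}
```

Written this way, the count stays correct no matter how the base class happens to implement its own bulk operations.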
The entire tags section, meant to debunk inheritance, is another weird strawman. There’s always a good argument to be made for composition over inheritance, but such arguments should be made by appealing to actual code samples instead of messy desks.
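For what it’s worth, the composition argument takes only a few lines of actual code to make. A hedged sketch (the class here is mine; java.util.Stack famously gets this wrong by extending Vector): a stack that has a list rather than is one exposes only push/pop and can’t be corrupted through inherited methods it never wanted.

```java
import java.util.ArrayList;

// Composition over inheritance: Stack HAS-A list, not IS-A list.
class Stack<T> {
    private final ArrayList<T> elements = new ArrayList<>(); // composed, not inherited

    public void push(T item) { elements.add(item); }

    public T pop() {
        if (elements.isEmpty()) throw new IllegalStateException("empty stack");
        return elements.remove(elements.size() - 1);
    }

    public boolean isEmpty() { return elements.isEmpty(); }
}
```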
The entire critique of encapsulation is basically nonsensical to me–again, actual code would’ve clarified things. The rambling about references and resources doesn’t really parse for me: if the argument was meant to be that referential transparency is more performant or safer in concurrent environments, then let’s have that discussion, but generic FUD about pointers and references just isn’t that great.
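If I had to guess at the point being gestured at, it’s the classic leaked-reference problem, which takes a few lines to show properly. A sketch (class names are mine): a getter that hands out the internal reference silently breaks encapsulation, while a defensive copy preserves it.

```java
import java.util.ArrayList;
import java.util.List;

// "Encapsulated" state leaks if a getter returns the live internal reference.
class LeakyAccount {
    private final List<String> transactions = new ArrayList<>();
    public void record(String t) { transactions.add(t); }
    // Leak: callers can mutate our private state behind our back.
    public List<String> getTransactions() { return transactions; }
}

class SafeAccount {
    private final List<String> transactions = new ArrayList<>();
    public void record(String t) { transactions.add(t); }
    // Defensive copy: callers get an immutable value, not a live reference.
    public List<String> getTransactions() { return List.copyOf(transactions); }
}
```

That’s a discussion worth having about references–but it needs code, not vague talk about resources.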
The entire polymorphism “argument” is a picture of the three stooges, and then a rambling anecdote (again, without code) about how this somehow meant that polymorphism is over.
Just, ugh. And then the author (probably to shill his Elm Facebook group, so he can get a book deal or resume boost) says that -=~ functional programming ~=- is somehow the answer, again without explaining exactly how this is the case–or even giving broad strokes!
In the words of Billy Madison…
Mr. Scalfani, what you’ve just scribbled is one of the most insanely idiotic things I’ve ever read. At no point in your rambling, incoherent blog post was there anything that could even be considered a rational thought. Everyone in this forum is now a worse developer for having experienced it. I award you no points, and may Kay have mercy on your soul!
The vacuousness of this article is another data point supporting my belief that “software architect” is a largely meaningless title.
This whole article can be summarized as: “Some people write poor code in object-oriented languages, therefore object-orientation itself is to blame, and is entirely a lie. Functional programming is the solution, and I won’t bother explaining why.”
I have never heard anyone (who was not a 1990’s textbook) claim that “Inheritance, Encapsulation, and Polymorphism” are “The Three Pillars of OOP”. As best I can tell those three words were elevated to their status by publishers to intimidate people into buying books, lest they not know what those big, scary words meant.
Object orientation is primarily about message passing. Don’t believe me? Listen to Alan Kay instead:
I’m sorry that I long ago coined the term “objects” for this topic because it gets many people to focus on the lesser idea. The big idea is “messaging”.
While every buzzword-derived explanation will have flaws, I always introduce people to object-oriented design via SOLID and try to re-emphasize message passing at every opportunity. Encapsulation and inheritance are tools incidental to that, but are nowhere near the focus.
And that’s just the introductory section. I’m having trouble engaging with the rest of the article, as it is at best poorly elaborated and at worst unintelligible. A quick attempt at point-by-point commentary:
Even where I want to agree with the ideas in this article, I cannot support the author making such sweeping claims with such poor examples and a lack of explanation.
IME, talking about pillars and principles of OOP is not very productive. OO is vague enough that it’s hard to have a real discussion about it. For example, C++ has little in common with Smalltalk.
However, I think boiling an object down to its minimal implementation does provide grounds to discuss whether OOP is valuable for solving a problem. An object is a tuple containing an opaque state and a dispatch table. With that, I would say an object introduces a layer of indirection (the dispatch table) that is not valuable for a large range of problems. It’s a complication.

In most uses of runtime dispatch that I come across, the type of the object being dispatched to never changes during runtime. Instead, two types exist: one for testing and one for production/release. Those are usually different compiled artifacts anyway, so the dispatch could be done at compile time instead.

On top of that, the dispatch table can make understanding a piece of code quite difficult. It depends on the rest of the language, of course, but in Java, for example, reading a method will tell you very little about what piece of code actually executes at runtime. In other words, in my experience the kind of polymorphism OO gives a developer is not valuable and makes a program harder to understand. There are alternatives that solve the problem at compile time and make a program easier to understand.
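To illustrate the pattern I mean (names are mine, purely illustrative): in the sketch below, the only thing runtime dispatch buys is swapping a test double for the production type, and that choice is fixed before the program ever runs–yet reading Service.record tells you nothing about which save() executes.

```java
import java.util.ArrayList;
import java.util.List;

interface Storage { void save(String data); }

// Production implementation (stubbed out here).
class DiskStorage implements Storage {
    public void save(String data) { /* would write to disk */ }
}

// Test implementation.
class FakeStorage implements Storage {
    final List<String> saved = new ArrayList<>();
    public void save(String data) { saved.add(data); }
}

class Service {
    private final Storage storage; // which save() runs? record() alone can't say
    Service(Storage storage) { this.storage = storage; }
    void record(String data) { storage.save(data); }
}
```

The Storage binding is decided once, at construction, and never changes–exactly the case where compile-time dispatch would have done the same job with less indirection.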
OOP-according-to-Kay is mostly the Actor model sans concurrency, with late binding.
OOP-according-to-everyone-else is the children of Simula.
There, solved it. We can stop arguing about originalist and colloquial definitions of OOP.
He is right about schools still teaching OOP like it’s the 1990s, though. My academic experience was very boring.
I had the opposite experience in university. When I went to undergrad OOP was huge as a buzzword, and C++ and Java were big in industry, but the profs really would’ve preferred to teach CLOS and Smalltalk if they could’ve gotten away with it. Those were hampered by being seen as obscure/impractical/obsolete, though, while C++ and Java were huge, so there was a lot of external pressure to teach industry-relevant, practical OOP instead of “ivory tower OOP”. Nonetheless they worked in a bit of the other stuff when possible.
One of the important lessons I’ve learned in the past few years is that it’s almost never a mistake to start a project with a functional programming approach (as long as it doesn’t bend your language too far out of shape: don’t do this in C). Values are simple to work with and reason about, functions are straightforward to compose, and with a little effort you can get pretty far using FP with fewer weird problems than in an OO language. Eventually you might need a little imperative programming, either for performance reasons or because a bit of mutable state would do the job and you don’t want to go full pure FP. The rest of your application will still be built around values and stateless code, and should still be readily understandable.
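A sketch of what I mean, even in Java (names are illustrative; Stream.toList needs Java 16+): the pipeline below is built entirely from pure functions over values, with no shared mutable state anywhere.

```java
import java.util.List;

// Values + stateless functions first: a hypothetical cleanup pipeline
// composed from small pure pieces.
class Pipeline {
    static String normalize(String s) { return s.trim().toLowerCase(); }
    static boolean nonEmpty(String s) { return !s.isEmpty(); }

    static List<String> clean(List<String> input) {
        return input.stream()
                .map(Pipeline::normalize)
                .filter(Pipeline::nonEmpty)
                .toList(); // no mutation of the input, no shared state
    }
}
```

If profiling later shows the stream is too slow, you can rewrite clean() imperatively behind the same signature–the rest of the program, built on values, doesn’t care.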
It’s called Medium because it’s neither rare nor well-done.
Actual content: What if OOP succeeded because it admitted poor code, and then allowed a path to better code?
I’m not arguing for OOP, I’m arguing that OOP’s permissiveness in code quality helped it gain ground, similar to how C’s relatively simple spec ensured a mostly-compatible C compiler appeared everywhere.
What I got from the article wasn’t that OOP is bad, but that we are taught the wrong aspects of what makes it good. The part about containment makes a lot of sense and is what should really be taught to students when they’re studying OOP.
And then he goes on to say it’s bad and functional is the only way to go? I’m confused by his writing.
I’m 99% sure Armstrong wasn’t thinking about #includes when he made his statement about the banana in the jungle. What a dumpster fire of an article.
FYI for C++/Java programmers who like myself wanted to know what the alternative to OOP is, don’t bother reading the article, just read this: