I’m not convinced everybody hates it. I think a certain group of programmers dislike it a lot and are quite vocal about it, while the vast majority use it every day and don’t think about it too much. Even the name is somewhat misleading to me; in practice it’s really class-based programming. I don’t think most programmers are sitting there being like “the problem with this project is the fundamental nature of this language”.
Technology loves these grand, sweeping statements of “this is terrible and this is excellent”. The truth is always more nuanced than that. I suspect many people hate OOP because they work every day in large OOP codebases that are hard to work in. OOP doesn’t force you to write bad code, but it turns out that’s not how humans interact with tools. For me especially, the “drift” of the object I’m interacting with over time (customer becomes a different thing as the product evolves) can be frustrating. I bet if we all worked with large functional programming codebases every day there would be a lot of conversations about how much we all hate that.
Programming is hard and I think it is great that we, as a community, are constantly trying to rework software to make it more reliable and usable. In my experience OOP in the hands of experts is easy to maintain and flexible. When less experienced people get involved in the codebase, it gets harder to work with. Right now what I see is smaller teams writing functional programming applications and being like “this is the right way to do things”. Let’s talk when they’ve been interacted with for 5-10 years by a few dozen people of varying levels of experience.
I think that when I have to work with JavaScript. :p
In my experience OOP in the hands of experts is easy to maintain and flexible. When less experienced people get involved in the codebase, it gets harder to work with.
This 100%, but more general. Any paradigm, in the hands of people who understand how to use the paradigm, will be maintainable and flexible. That’s why it’s so incumbent on people who are experienced to be good and empathetic mentors to people who aren’t experienced with the paradigm.
I’ve worked, as I’m sure many here have, with just about every programming mentality around, and none of them are any good – they’re just tools, and any tool has jobs it is suited for and jobs it is not suited for. Some things naturally lend themselves to Object-based thinking, some to procedural, some to functional. I’d never want to write an OOP Theorem Prover any more than I’d want to write a procedural CMS or a functional video game. This is not to say it’s not possible to do those things, but rather, I – personally – don’t find it easy to think about those things in those paradigms.
Computing oughta be a big tent, and there’s plenty of room for people to figure out what set of tools make sense for them.
PS, this goes for strong vs weak vs no typing too.
I took the title to be a fairly sarcastic comment, given the contents of the post. However, I would agree that the premise is a non-starter if meant seriously.
I think one of the biggest advantages to OOP which isn’t mentioned in the article outside of the quote he uses comes from the mental model you can use when teaching someone who has no knowledge of programming. As humans we all quite obviously interact with objects every day, whereas far fewer of us interact intentionally with mathematical functions every day.
I don’t know how many people I’ve encouraged towards programming, simply by verbally walking through something they are holding or looking at and helping them describe it as an object. You can see the lightbulbs go on as they realize this isn’t some magic, it is something they can understand. Often times, I’ll do this with a browser’s web console and offer some Javascript, not because it is the “best” language in the world, but because it is incredibly accessible as long as you have any modern web browser. What happens later down the road for them, whether they pick up a different type of language or move to something besides Javascript, doesn’t really matter in that moment, only that they were able to start the journey.
Let’s talk when they’ve been interacted with for 5-10 years by a few dozen people of varying levels of experience.
I might be a little qualified to participate in that talk, because right now I’m working full-time on a 7 year old 70k LOC Haskell codebase that has seen multiple generations of programmers, where the codebase has changed hands across people who have never met each other. Some of those generations consisted of juniors only. When I started, there were no Haskellers around, nor any kind of documentation. So I only had the code to stare at, with the occasional comment that says -- yeah, I know this sucks. Mind you, this codebase had been serving customers and suffering “agile” changes in requirements with deadlines almost throughout its lifetime.
I pushed my first fix, to a long-standing problem, on the second day, and started implementing new features in the first week. I iteratively refactored the codebase bit by bit, never having to rewrite anything. Now, after more than a year of combined refactoring, internal tooling development and new features without any pauses, I think the codebase is in a fairly decent state.
This experience made me appreciate how much of a beating Haskell can take while remaining productive.
I’m working on a rails app that’s in the same boat (well, 12 years and 85k lines).
Probably took me two weeks to be productive instead, and the pain of people using define_method with template strings is very real, but I’m astonished how far you can get on “this is a rails app, so you know where to look for everything”.
OOP as we knew it in the 90s and early 2000s is gone. The reflex that everything is an object is gone. The intuition to build massive taxonomies where a penguin is a bird that cannot fly and can swim but not float like a duck is also gone.
If only this were true.
Yeah I’m also not sold on this.
I think everything has gotten a bit leaner: you no longer specify the whole taxonomy as a book beforehand, so in this example you’d still have a bird, a penguin, and a duck, but you’d not implement the flying or non-flying part if no one ever asks for it.
Maybe I’m not reflecting enough, but to me our current ~4 year old C++ code base looks roughly like a 20y old Java code base, if I’m looking at the OOP aspect. And I think that’s a good thing; there’s nothing wrong with it, unless you hate OOP on principle.
I actually don’t care what the parent class of the UserApi is, and it uses objects with their own inheritance. It’s as classic OOP as I can imagine and it works just fine. Sure, 100 other things are different than in the 20y old Java project, but not the interaction of classes and objects.
Incidentally, until I used it more I found multiple inheritance to be a horrible idea, but right now I kinda like it. Our judicious use reminds me more of traits, and it’s a lot better than the 7-layer inheritance hierarchy you often see in Java, with different things crammed into different layers of 4 abstract classes.
Do people really not recognise a “loud minority” when they see it?
hisses in historian
OOP was already dominant by the time Java came around. UML is older than Java! Java didn’t become popular from web browsers; it was primarily Sun’s marketing, the JVM, and the OOP fad that drove it. Inheritance is the oldest-ish form of code reuse/polymorphism and most modern successors were in some way inspired by this. You can’t talk about how OOP is mainstream without talking about all of the historical context that led to it.
Wasn’t the general popularity of windowing GUI programming a factor too?
Yeah, this was a big factor too.
Sub-procedures/functions are a bit older. Inheritance, ultimately, is just type-based function dispatch, be it at compile time or runtime.
The term ‘object oriented programming’ was coined by Alan Kay to describe the style in which he wrote Lisp code in the ‘60s but it wasn’t really until Smalltalk-80 that it became a popular industry buzzword.
I went to two universities (one for undergrad, one for grad) and both of them had a similar intro to programming curriculum. Teach Java, go really quickly over variables, types, conditionals and loops, and then, as quickly as possible, introduce OO design. The students have not even written a program of more than 50 lines of code before they are immediately told that OO is the right way to structure very large programs.
Source: https://insights.stackoverflow.com/survey/2020#most-loved-dreaded-and-wanted
Haskell and Scala are #14 and #15 on the “most loved” list, and nowhere to be seen in the “wanted” list.
They actually are #17 and #18 in the “most wanted” list. Also, I find this a bit shocking, but both of these languages are actually ranking higher up the “most dreaded” languages list: #11 and #12.
Meh. They’re over-selling the emotion there. The question was apparently more akin to “I use this language and I am / am not interested in continuing to use it.”
When you said you used a language, if you reported you wanted to keep using it they labelled it “loved”, and if not, they labelled it “dreaded”.
I used to use Scala a lot. I was more than happy to keep using it, but that doesn’t mean I loved it. Once I started using Rust, I dropped Scala. That doesn’t mean I dread it.
Tongue in cheek answer: for the same reason that UTF-16 is popular.
Both features correlate with the 90’s generation of languages, which are still the most popular languages today.
I don’t think the analogy holds up. I think UTF-16 is basically a dead end, and may go away. And UTF-16 did not influence UTF-8 in any way.
In contrast, it seems clear that Java and C++ influenced Go and Rust (for example, as well as Swift/Kotlin, etc.).
Go interfaces came from or were strongly influenced by Java AFAIK. I think there are interviews with the designers saying that. (Also, Clojure protocols are Java interfaces – and they are a primary thing that distinguishes Clojure from, say, Racket. Hickey also says that Java interfaces are one of its better parts.)
Rust uses traits, but I would still call that object oriented, or I’d be interested in an argument otherwise. Rust code uses mutable state and method calls in a way that’s much like C++ or Java (obviously with some critical static guarantees).
So C++ and Java probably have some deficiencies in their model, but they clearly influenced other languages. (And FWIW I think the original post is mostly mistaken in its premise…)
Rust traits are directly inspired by Haskell typeclasses – unlike Java style interfaces which are all implemented together in one class Something {}, each impl is its own block and can be declared not only in the same place as the type, but also in the same place as the trait, so you have much more power as a trait writer.
The fact that Rust has method call syntax doesn’t make it OO. “Java OO” usually mostly refers to “can has inheritance”, and “real OO” (Smalltalk-Ruby-ObjC style) is kinda all about very dynamic late binding stuff. Rust does neither.
C++ is influential indeed… in ways that don’t have much to do with OO – deterministic destruction (“RAII”) and template metaprogramming.
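A minimal Rust sketch of the point above about each trait impl being its own block (the trait and type names are invented for illustration): the trait, the type, and the impl can each live in a different place, rather than all inside one class body.

// A trait defined in one place...
trait Describe {
    fn describe(&self) -> String;
}

// ...a type defined elsewhere, with no mention of the trait...
struct Point {
    x: i32,
    y: i32,
}

// ...and the impl as its own block, which could sit next to the trait
// (or next to the type) instead of inside a single class definition.
impl Describe for Point {
    fn describe(&self) -> String {
        format!("Point({}, {})", self.x, self.y)
    }
}

fn main() {
    let p = Point { x: 1, y: 2 };
    println!("{}", p.describe());
}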
Agreed with everything you said except that I wonder whether the “OO = inheritance” is either true or a useful equivalence.
In a practical sense, a Trait hierarchy with default impls is, like, 90% of the point of inheritance. I think the real difference between, e.g., Java and Rust as far as how real programs are written is that Java encourages the “wanted a banana but got the jungle and a gorilla holding the banana” design problem, whereas that’s almost impossible to even do in Rust. I’m not sure how large Go projects end up looking, as I’ve only worked on small projects in Go.
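A minimal Rust sketch of the “default impls give you most of inheritance” idea (names invented for illustration): implementors get the shared behaviour for free and override only what differs.

trait Greeter {
    fn name(&self) -> String;

    // Default behaviour shared by every implementor, similar to an
    // inherited method in a class hierarchy.
    fn greet(&self) -> String {
        format!("Hello, {}!", self.name())
    }
}

struct English;
struct Pirate;

impl Greeter for English {
    fn name(&self) -> String {
        "world".to_string()
    }
    // greet() comes from the default impl.
}

impl Greeter for Pirate {
    fn name(&self) -> String {
        "matey".to_string()
    }
    // Override only the part that differs.
    fn greet(&self) -> String {
        format!("Ahoy, {}!", self.name())
    }
}

fn main() {
    println!("{}", English.greet());
    println!("{}", Pirate.greet());
}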
Agreed with everything you said except that I wonder whether the “OO = inheritance” is either true or a useful equivalence.
More accurate to say OO = ad-hoc polymorphism. But there are other kinds; take Haskell/Rust/Idris (others I’m sure, but whatever) which have parametric polymorphism. The “OO” way is not the only way to do polymorphism, and even in Haskell as an example you can do both kinds of polymorphism.
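A small Rust sketch of the two kinds (invented names): the generic function is parametric polymorphism, one definition that works uniformly for every type, while the trait impls are ad-hoc polymorphism, one definition per type.

// Parametric polymorphism: one definition, behaves the same for every T.
fn first_or<T: Clone>(items: &[T], fallback: T) -> T {
    items.first().cloned().unwrap_or(fallback)
}

// Ad-hoc polymorphism: behaviour is defined case by case, per type.
trait Area {
    fn area(&self) -> f64;
}

struct Circle { radius: f64 }
struct Square { side: f64 }

impl Area for Circle {
    fn area(&self) -> f64 { std::f64::consts::PI * self.radius * self.radius }
}

impl Area for Square {
    fn area(&self) -> f64 { self.side * self.side }
}

fn main() {
    println!("{}", first_or(&[1, 2, 3], 0));
    println!("{}", first_or(&["a", "b"], "z"));
    println!("{}", Circle { radius: 1.0 }.area());
    println!("{}", Square { side: 2.0 }.area());
}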
Tongue in cheek arguing:
I do think the analogy is a bit deeper than that. Both utf-16 and utf-8 encode Unicode, it’s just that the first is a patch over UCS-2, because back then it wasn’t obvious that 65536 symbols is not that many. But both utf-16 and utf-8 are big improvements over a zoo of 8-bit encodings.
Like utf-16 is the old thing with a hack on top, modern OOP is 90s OOP with “please don’t use inheritance and also make all your objects immutable”.
More serious answer:
Obviously yes, 90s OOP is hugely influential. I’d personally name first-class dynamic dispatch (interfaces) and x.foo() vs foo(x) syntax as its two most important positive contributions.
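A minimal Rust sketch of that first contribution (names invented for illustration): the call goes through a trait object, so which method body runs is chosen dynamically at runtime, and the call still reads as x.foo().

trait Logger {
    fn log(&self, message: &str);
}

struct ConsoleLogger;
struct PrefixLogger { prefix: String }

impl Logger for ConsoleLogger {
    fn log(&self, message: &str) {
        println!("{message}");
    }
}

impl Logger for PrefixLogger {
    fn log(&self, message: &str) {
        println!("[{}] {}", self.prefix, message);
    }
}

// `&dyn Logger` is first-class dynamic dispatch: the log() that runs is
// decided at runtime by the value passed in.
fn run(logger: &dyn Logger) {
    logger.log("starting up");
}

fn main() {
    run(&ConsoleLogger);
    run(&PrefixLogger { prefix: "worker".to_string() });
}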
Yeah I would agree with that. Like a lot of these discussions, it boils down to what “OOP” means, which seems to be getting more and more slippery as time passes.
But I agree specifically that dynamic dispatch with syntax was huge and never going away. I view Rust and Go as basically “fixing” the problems with that model, but keeping its essential nature.
Even Python was influenced by C++:
http://python-history.blogspot.com/2009/02/adding-support-for-user-defined-classes.html
http://python-history.blogspot.com/2010/06/new-style-classes.html
And it should be noted that in the C++ world there is a pretty distinct trend of doing polymorphism without inheritance, e.g.
https://www.youtube.com/watch?v=PSxo85L2lC0
https://www.youtube.com/watch?v=gVGtNFg4ay0
i.e. C++ programmers arguably view their language as Rust programmers do theirs – it’s a multi-paradigm language, but use static dispatch where possible, as well as value semantics over reference semantics.
I agree with myfreeweb that Java interfaces did not influence Rust traits or Go interfaces (or Swift Protocols). The absolute game-changer difference is that in those languages, your type does not have to implement an “interface” at type definition. This seemingly small thing is a HUGE thing in practice that makes working with Java, PHP, and Kotlin extremely painful and gives rise to stupid design patterns like “adapters”.
Whether Rust Traits are OOP, I can see arguments both ways. Partly because the definition of “OOP” is very fluid. I’ve found that in most cases, people say “OOP” and just mean “like Java”.
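A rough sketch of the “your type does not have to implement the interface at type definition” point above (the Summarize trait here is invented): in Rust you can implement a trait you own for a type you didn’t write, such as a standard-library type, without wrapping it in an adapter.

use std::net::Ipv4Addr;

// A trait we define in our own code.
trait Summarize {
    fn summary(&self) -> String;
}

// Implemented for a type from the standard library, long after that type
// was defined, without touching its source or writing an adapter.
impl Summarize for Ipv4Addr {
    fn summary(&self) -> String {
        format!("IPv4 address {} (loopback: {})", self, self.is_loopback())
    }
}

fn main() {
    let addr = Ipv4Addr::new(127, 0, 0, 1);
    println!("{}", addr.summary());
}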
I don’t think the analogy holds up. I think UTF-16 is basically a dead end, and may go away. And UTF-16 did not influence UTF-8 in any way.
UTF-16 is still very useful. UTF-32 has the advantage that code points are fixed size but that’s not a huge advantage in unicode because characters are composed of multiple code points and so it’s not really a fixed-length encoding in a practical sense. UTF-16 is as dense as UTF-32 in the worst case but denser on average (all unicode codepoints can be expressed by either one or two UTF-16 code units). In the worst case, UTF-8 and UTF-16 have the same size (any unicode code point can be represented by 1-4 UTF-8 code units).
If you are encoding text that is predominantly in the Latin alphabet, UTF-8 is often around half the size of UTF-16 but that isn’t true for all languages. In particular, CJK languages are encoded more densely in UTF-16 than UTF-8. Processing European languages can often be more efficient in UTF-16 too, because almost all characters commonly used in European (and especially Romance) languages are a single UTF-16 code unit and so your branch predictor quickly learns that the two-code-units path is not taken.
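To see the size trade-off concretely, here is a small Rust sketch (the example strings are chosen arbitrarily) that counts the bytes each encoding needs for a Latin-script string and a CJK string:

fn report(label: &str, s: &str) {
    // str::len() is the UTF-8 byte length; encode_utf16() yields u16 code
    // units, so two bytes each.
    let utf8_bytes = s.len();
    let utf16_bytes = s.encode_utf16().count() * 2;
    println!("{label}: {utf8_bytes} bytes as UTF-8, {utf16_bytes} bytes as UTF-16");
}

fn main() {
    report("Latin", "object oriented");
    report("CJK", "オブジェクト指向");
}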
That’s not actually true: you can get polymorphism without inheritance, e.g. with overloading or generic functions.
This touches on a key reason I think OOP debates get so heated; the article describes OOP as three things mashed together, and people talk about it like it’s one thing.
It turns out encapsulation is really great! But inheritance is awful, and polymorphism is … occasionally useful? So of course if you talk about these things using one word, you’re not going to have a productive or fruitful debate. Say what you mean.
I would place inheritance in the rarely useful category.
It’s well suited to controls in a GUI and sometimes it can be a reasonable solution.
interface ICollection<T>
{
    // if item is null throws ArgumentNullException
    void Add(T item);
    // ... many more methods here
    // if item is null throws ArgumentNullException
    void Remove(T item);
    // if items is null throws ArgumentNullException
    void AddRange(IEnumerable<T> items);
}
Interfaces don’t allow you to enforce design invariants; any class that implements this interface can choose whether or not to allow null arguments to Add, Remove, and AddRange.
An alternative to defining an interface is to use an abstract class and the template method pattern.
public abstract class Collection<T>
{
    protected Collection()
    {
        // code here
    }

    // more code here...

    // Deliberately non-virtual, so derived classes cannot bypass the null check.
    public void Add(T item)
    {
        if (item == null)
        {
            throw new ArgumentNullException(nameof(item));
        }
        AddCore(item);
    }

    // more code here...

    public void Remove(T item)
    {
        if (item == null)
        {
            throw new ArgumentNullException(nameof(item));
        }
        RemoveCore(item);
    }

    public void AddRange(IEnumerable<T> items)
    {
        if (items == null)
        {
            throw new ArgumentNullException(nameof(items));
        }
        int itemIndex = 0;
        foreach (T item in items)
        {
            if (item == null)
            {
                string message = $"Item {itemIndex} in items is null";
                throw new ArgumentNullException(nameof(items), message);
            }
            AddCore(item);
            itemIndex++;
        }
    }

    // Child class must override this method.
    protected abstract void AddCore(T item);

    // other methods that the child class must override...

    // Child class must override this method.
    protected abstract void RemoveCore(T item);
}
This use of inheritance allows the implementer of Collection to enforce invariants and reduce the burden required to implement its API. It’s certainly not always appropriate but the same is true of any other technique.
Alternatives to this design include at least the following:
Having clients implement an interface that is passed to Collection’s constructor.
Passing higher-order functions to Collection’s constructor.
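A minimal sketch of the second alternative (passing higher-order functions to the constructor), written in Rust with invented names: the invariant check is supplied as a closure, so the collection can enforce it without any subclassing.

// The collection owns the invariant-checking logic as data, supplied by the
// caller at construction, instead of requiring a subclass to fill in a hook.
struct ValidatedVec<T> {
    items: Vec<T>,
    validate: Box<dyn Fn(&T) -> Result<(), String>>,
}

impl<T> ValidatedVec<T> {
    fn new(validate: Box<dyn Fn(&T) -> Result<(), String>>) -> Self {
        ValidatedVec { items: Vec::new(), validate }
    }

    fn add(&mut self, item: T) -> Result<(), String> {
        (self.validate)(&item)?;
        self.items.push(item);
        Ok(())
    }
}

fn main() {
    let mut names = ValidatedVec::new(Box::new(|s: &String| {
        if s.is_empty() { Err("empty name".to_string()) } else { Ok(()) }
    }));
    assert!(names.add("Ada".to_string()).is_ok());
    assert!(names.add(String::new()).is_err());
}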
Subtyping is useful and code reuse is useful. Inheritance is a way of tightly coupling code reuse and subtyping, but it leads to some odd patterns. For example, in Objective-C NSMutableString is a subclass of NSString but it’s not really a subtype in a useful sense, because you can’t use it as a drop-in replacement: the immutability of NSString is an explicit and useful part of its interface. In reality you have a composition of string-like and mutable vs immutable. Being able to reuse the generic string-like logic in custom implementations is useful, but conflating the ideas of ‘reuses implementation of generic string-like behaviour’ and ‘is a subtype of an immutable string’ often causes more problems than it solves. This is why traits or mixins and structural and algebraic type systems are increasingly popular.
I think in many cases people hate Java, and by extension, hate OOP. Java being the first mainstream OO language, and still the poster child for OO in many minds, gives object-orientation a worse reputation than it would have if there were other OO languages (apart from C#) one could actually earn a living with.
Most people hate the working environments in which Java is used, with their power structure, deskilling of the worker, and Fordist approach to software development. Wait for Go to become the new Java and they will start projecting the same hate on Go, regardless of the merit of the language design or lack thereof.
You don’t hate OOP, you hate capitalism.
Go is actually explicitly designed for that approach to development.
I know, that’s why I mentioned Go specifically. Give it 10-15 years and all the fanboying will die out.
Huh? I vehemently disagree with this. A language which deskills development enables novice programmers or non-programmers to build software and systems for themselves with less time investment required. And given that personal software will never see the scale of usage or the sheer variety of deployment conditions that corporate SaaS often sees, personal software doesn’t need to be as correct as corporate SaaS either, so trading performance and correctness for functionality is a very realistic trade-off for personal-use software.
I find these language identity wars to be odious. You don’t like Go, that’s fine. Please don’t extrapolate systemic failures from your own personal likes and dislikes.
That is true, but both Java and Go are made to solve the capitalist’s problem, not yours or mine.
Are they? This to me is up for debate. The deskilling that comes with these languages is a problem only under a specific mode of production. Being able to split the workload across a bigger group of people, having “protocols” to standardize development, and enabling less skilled people to write reliable software are not bad things in themselves. They are bad things in our specific economic system.
I agree with this. Most of the programmers I know associate OOP with Java, C++, and C#. Also, using those languages in the typical fashion results in code that is very OOP-like. It’s unfortunate, though, because since about 2015 there have been a lot of advances in all three of these languages. Lambda functions give an enormous amount of freedom when used instead of the typical constructs seen in older variations of Java, C++ and C#. The OOP languages have added some elements of functional programming in order to become more of a mixed language. This is a big deal. Mixed languages are becoming more popular now. Not all of OOP was a bad idea; an example of this is that JavaScript, a mostly functional programming language, has gotten a formal declaration for classes in recent years. In other words, they added class constructors INTO a mostly FP language. Personally I like OOP with first-class functions. I’m happy with C# exactly how it is. I’ve used plenty of functional languages (Haskell and Clojure mostly), but I just prefer a blend of OOP and FP more than one or the other.
It’s funny you mention 2015, since around that time a new batch of languages appeared that aren’t as heavily invested in strict OOP. Java and C# only accepted multi-paradigmatic programming once it was proven to be non-threatening by newer languages.
Before that, it was regarded as superfluous and academic.
What can OOP do that is unique to it?
…
Encapsulation. This means that data is generally hidden from other parts of a language—placed in a capsule, if you will.
I don’t find this definition particularly useful. To me, encapsulation is the idea that when you divide a system into subsystems, those subsystems minimize the set of concerns that other components need to know about. In that sense, it’s a design problem. This isn’t a feature of OOP any more than a variable naming convention is a feature of any particular language family. We can implement encapsulation in Java using public/private methods just as well as we can do so in Scheme using closures. Or we can talk about how a web service encapsulates business logic as well as the programming language and DB it uses behind a network interface.
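A tiny sketch of that point, in Rust rather than Scheme (the function and variable names are invented): a closure gives you data hiding with no classes involved, because the captured state simply isn’t reachable from outside.

// Returns a counter whose internal state is captured by the closure.
// Nothing outside can read or modify `count` except through the closure.
fn make_counter() -> impl FnMut() -> u32 {
    let mut count = 0;
    move || {
        count += 1;
        count
    }
}

fn main() {
    let mut next = make_counter();
    println!("{}", next()); // 1
    println!("{}", next()); // 2
}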
After recently reading “Practical Object-Oriented Design” (by Sandi Metz) I love OOP, where before I merely didn’t dislike it. It’s all about how you use it.
The problem is we were taught that OOP was about classes when it is really about passing messages. From that perspective, some have even called Elixir “the most object-oriented language”.
VB and C++: at one time the most used languages in the world (by some metric, with the source long lost).
Both were used extensively to make a LOT of GUI applications back when the web was basically only good for very basic apps. I’m not saying they were the first or best, but that paradigm really took over because it made a lot of sense in UIs. The inheritance (a Button is a Control, and it’s useful to iterate all Controls on a Form, etc.) is actually fairly useful, and objects with events, properties and methods are fairly easy to understand.
Once you get away from writing GUIs, OOP is perhaps a less good paradigm.
Then Java and C# and Python emerged during that time and that’s what we’ve been stuck with for 20 years. Anyone remember seeing your first Java GUI app (I remember it being really weird and purple, of all things)? The early use cases were desktop GUI apps, where again OOP makes a lot of sense.
I don’t think there’s much to debate. OOP is a decent paradigm for GUI apps. If you aren’t making a GUI another paradigm may be better suited.
These days the state-of-the-practice UIs are in React et al., which are still evolving but appear to be moving in a functional direction. It will be interesting to see how that evolves, but it’s still classes/objects/events/methods in Angular land.
I was taught, in AP CS almost 20 years ago, basically that OOP is The Way serious enterprise programmers tackle complexity. The Java language, having sidestepped the memory safety issue with a performant-enough GC, allowed us to focus more solely on building CompoundNounTower hierarchies to manage a huge mutable state, replete with micromanagers for mutable ministates, higher OOP coding consisting of applying patterns (never macros) &c &c. I now look back and see, at least in part, that the AP CS program curriculum crudely modeled a 1-year slice of some sort of medieval training program for cog-replaceable enterprise Java programmers, and hardly spent any time looking at things purely mathematically, as we did in college. Like SICP.
Having learned other paradigms, other languages, other runtimes, I don’t harbor ill will towards OOP (I mean, who doesn’t like Ruby) and think recent Java (and all it inspired, Scala, Kotlin..) is not really that bad at all, past jobs writing Java still no hard feelings, but the forced language change, without any seeming benefit, really soured a lot of us. I imagine it was a popular course nationwide. Pretty much all of us just wanted to go back to coding in Bloodshed Dev-C++, which we knew fine, but they probably should’ve just taught SICP, honestly.
I think that people only hate a technology when:
a) it’s frustrating to use, and
b) they’re stuck with it (there’s no better alternative)
The latter only really happens when a technology is widespread in a given domain. It follows that OOP is hated because of its ubiquity, not in spite of it.
A failed technology isn’t one that nobody likes; it’s one that nobody remembers.
As a Python dev, there are two aspects of OOP that I see that are, if not always bad, at least complicated:
It encourages clumping together functions and data. This gets particularly hairy when data is being mutated.
It encourages the use of the more “advanced” syntactic bits of Python.
Almost all the places I see OOP used, it becomes an intractable mess pretty quickly, where “plain data” plus a “bag of namespaced functions” would have solved the problem just fine.
Having said that, I think there are use cases where the tractability (i.e. simply being able to follow the code) is outweighed by having a cutesy interface – this is the case for super generic, super well-tested libraries. As an example, even though following the SQLAlchemy source is a nightmare, it’s worth it for an interface that maps so well to the domain.