1. 27
    1. 38

      implementation-inheritance: bad

      interface-inheritance: good

      1. 11

        Multiple-implementation-inheritance: maybe I should take up carpentry

        1. 3

          I think what you mean by “interface-inheritance” is not really inheritance. You don’t “inherit” from an interface but you “implement” an interface. Some languages cannot distinguish between interfaces and classes, so they conflate both as “inheritance”.

          The article does not talk about “interface-inheritance” at all. It mentions “three ways that we mean inheritance” and they are all “implementation-inheritance”.

          1. 3

            I don’t think so. You can have interface A and interface B and then create interface C that inherits/extends from A and B and maybe adds some stuff on its own. There is nothing implemented at that point yet.
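
            A minimal TypeScript sketch of that kind of pure interface inheritance (the interface and class names are made up for illustration):

               // Two independent interfaces; nothing is implemented here.
               interface Readable {
                 read(count: number): string;
               }

               interface Writable {
                 write(data: string): void;
               }

               // "C" extends both "A" and "B" and adds something of its own;
               // there is still no implementation anywhere.
               interface ReadWriteSeekable extends Readable, Writable {
                 seek(offset: number): void;
               }

               // Implementation only happens later, in a class (or plain object)
               // that declares it implements the combined interface.
               class MemoryFile implements ReadWriteSeekable {
                 private buffer = "";
                 private position = 0;

                 read(count: number): string {
                   const chunk = this.buffer.slice(this.position, this.position + count);
                   this.position += chunk.length;
                   return chunk;
                 }

                 write(data: string): void {
                   this.buffer += data;
                 }

                 seek(offset: number): void {
                   this.position = offset;
                 }
               }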

            1. 2

              Ok, I can agree that interface-inheritance like this does exist.

        2. 17

          Because it was taught as the way to build software for 20+ years, because it solved real problems. They’re just problems that can usually be better solved in other ways.

          1. 13

            Software development shouldn’t be religion. That’s the long and the short of it.

            Those who preached inheritance, and used it as their golden hammer, made a huge mess of things.

            Those who preach that inheritance is evil and should never be used, also made a huge mess of things.

            Inheritance is just a tool, and one that varies widely across the languages I’ve used (including Smalltalk, Eiffel, C++, Java, C#, JavaScript, Go, Delphi, etc.).

            Inheritance is handy, which is why it is used. But (like many tools) if it is blindly used (or over-used), it can easily create a mess. I’ve personally created more than one mess with it.

            At any rate, don’t listen to people who get all religious about their software development beliefs. Our industry has had a ton of fads, and inheritance is just one of them. Just learn your language(s) and tool(s) well, share what you know with others, and keep thinking critically – most importantly of your own work.

            1. 12

              Those who preached inheritance, and used it as their golden hammer, made a huge mess of things.
              Those who preach that inheritance is evil and should never be used, also made a huge mess of things.

              I wouldn’t put those two at the same level. I’d much rather deal with a purely procedural codebase than one that misuses or overuses inheritance all over the place.

              1. 4

                Those who preach that inheritance is evil and should never be used, also made a huge mess of things.

                This is an odd comment. Plenty of languages don’t have inheritance (the vast majority of them, even). What does that mean for code written in those languages if this comment reflects some truth?

                I’m not defending the strawman you’ve conjured up here, but you’re treating the two positions as if they’re similarly nonsensical. But they’re not: one advocates for the use of a tool, but the other is saying “no, I’d rather not”. And we know where the burden of proof should lie here: it’s the responsibility of the person advocating adding a tool to a tool belt to justify the value of that tool.

                1. 5

                  I think the OP’s point is that bad programmers can make a mess in any language, with any abstraction.

                  I’m sure I’m not the only one who’s seen code abused into shape to use map, fold, and the functional operators when a loop would have been more understandable. Inheritance is the same - if it’s making a mess then it wasn’t a good abstraction for modelling the situation.

                  Generally I don’t like the idea of avoiding abstractions and removing language features to protect from bad programmers. When a bad carpenter hammers in a screw, we don’t get rid of hammers.

                  1. 2

                    I think the OP’s point is that bad programmers can make a mess in any language, with any abstraction.

                    Sure, and that’s not wrong, but it doesn’t bear any relevance - at all - to the debate about whether inheritance is more valuable than harmful to a codebase.

                    Generally I don’t like the idea of avoiding abstractions and removing language features to protect from bad programmers. When a bad carpenter hammers in a screw, we don’t get rid of hammers.

                    But again, this doesn’t tell you anything about the specific issue in question: inheritance. Sure, we don’t get rid of hammers when someone uses them incorrectly: but it is definitely possible to build tools that are inefficient, ineffective, or downright dangerous to the user, with many better alternatives existing. I just don’t buy that you can throw the same label on all abstractions and treat vague platitudes about them as a group as a sufficient discussion of their respective merits and flaws.

                    Sure: there are specific cases for which inheritance might be a worthy tool. But does that mean it pays its way in a modern language and is therefore worthy of inclusion? Or that it has sufficient protection against misuse/overuse that its harmful effects can be mitigated? That’s a more complex and subtle question to answer than just throwing your hands in the air and maintaining a state of permanent indifference.

                  2. 1

                    Sorry, I should have been clearer. I was only referring to languages with inheritance designed into the language.

                    Obviously, there is nothing wrong with designing a language that omits the capability entirely, or accomplishes the desired benefits in a different manner.

                    I know that inheritance is particularly unpopular as a concept right now, so I’m not here to try to change anyone’s mind on that aspect. I have seen plenty of messes created with it, but also lots of good working code that was built using it, so I don’t have an aversion to the concept – just the abuse of it!

                  3. 2

                    I’ve never seen anyone make a mess because they eschewed (implementation) inheritance. I’ve seen a lot of messes made because people reached for inheritance, and very few cases where people reached for inheritance and didn’t make a mess (seems mostly relegated to certain GUI frameworks and even then I could be persuaded otherwise).

                  4. 11

                    Everyone uses built-in inheritance because it’s one of very few relationships that are built-in. If languages provided other built-in relationships then programmers would use those as well. See the Garnet (Lisp)/Amulet (C++) application toolkits from CMU, 30+ years ago. Those toolkits describe several kinds of relationships useful for most applications. They used constraints to implement these relationships, but a language could also provide them built-in without providing constraints more generally. Note the origin of this capability is in knowledge representation languages of the 1980s, e.g. CMU’s SRL, Schema Representation Language.

                    1. 5

                      That’s interesting. What are some other commonly useful relationships beyond inheritance?

                      1. 2

                        Everything, even is-a, turns out to be some flavor of composition. When components are in a relationship, it’s the details of the relationship that determines whether properties are inherited by reference (“shared value”), inherited by copy (“shared initial value”), or not inherited at all (“private”).

                        • Menu items in a menu
                        • Grouped objects in a drawing
                        • The currently selected set of objects in a drawing
                        • Multicell macros on a gate array

                        These are very similar compositions, but they each have distinct idiosyncrasies that require a fair bit of one-off imperative code in most languages. But fundamentally the effect of each kind of relationship is not different from what most OO languages provide specifically for is-a.
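
                        A rough TypeScript sketch of the “menu items in a menu” case, showing how the relationship itself decides which properties are shared by reference, which are copied as initial values, and which stay private (all names here are invented for illustration):

                           interface Style { font: string; background: string }

                           class Menu {
                             // Read by items through the relationship: a shared value.
                             style: Style = { font: "sans-serif", background: "grey" };
                             items: MenuItem[] = [];

                             add(label: string): MenuItem {
                               const item = new MenuItem(this, label);
                               this.items.push(item);
                               return item;
                             }
                           }

                           class MenuItem {
                             // Shared initial value: copied from the menu at creation time,
                             // free to diverge afterwards.
                             background: string;
                             // Private: not inherited from the menu at all.
                             label: string;

                             constructor(private menu: Menu, label: string) {
                               this.background = menu.style.background;
                               this.label = label;
                             }

                             // Inherited by reference: always the menu's current font.
                             get font(): string {
                               return this.menu.style.font;
                             }
                           }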

                        1. 1
                          • Traits/typeclasses
                          • Structural subtyping
                          • Mixins (see the sketch after this list)
                          • ECS-style component patterns (I’m not aware of a language that has first-class support for this, but I’m sure they exist!)
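
                          For the mixin item above, a minimal sketch of the usual TypeScript mixin pattern (the class and mixin names are hypothetical):

                             // A constructor type that mixins can extend.
                             type Constructor<T = {}> = new (...args: any[]) => T;

                             // A mixin is a function from a base class to an extended class.
                             function Serializable<TBase extends Constructor>(Base: TBase) {
                               return class extends Base {
                                 serialize(): string {
                                   return JSON.stringify(this);
                                 }
                               };
                             }

                             function Timestamped<TBase extends Constructor>(Base: TBase) {
                               return class extends Base {
                                 createdAt = new Date();
                               };
                             }

                             class Widget {
                               constructor(public name: string) {}
                             }

                             // Compose behaviours without one long inheritance chain.
                             class SavedWidget extends Serializable(Timestamped(Widget)) {}

                             const w = new SavedWidget("button");
                             console.log(w.serialize(), w.createdAt);
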
                          1. 1

                            Of these four, the first three are more alike and more like “traditional inheritance”. ECS is the standout and not usually supported by a language. I would not be surprised, I agree, if there’s some language(s) that support ECS pretty well.

                      2. 8

                        This is still my favorite explanation of the kinds of subclassing and when to use them: https://hynek.me/articles/python-subclassing-redux/

                        1. 5

                          Not 100% sure of this, C++’s Abstract Base Classes might have been earlier than interfaces

                          They are. Pure-virtual methods have been around for a long time, as well as the pattern of a pure-virtual base class. Interfaces date to about 1990 (I think from William Cook’s OOPSLA paper) and were extracted from the widely-known patterns of abstract base classes and multiple inheritance.

                          FWIW, I still think inheritance is great, whatever you young hipsters say. I cut my teeth on it in Smalltalk in the mid-80s, reading through the source code of the runtime, which was OOP all the way down.

                          1. 4

                            I, too, don’t mind inheritance. It’s like anything else, it can be done badly, and it can be done annoyingly, but it’s just a tool in a toolbox. There’s also, almost always, a lot more stylistic or personal preference than we sometimes like to admit in this field :-)

                            1. 3

                              I don’t think it’s a coincidence that inheritance and the GUI came along for the ride, considering the influence of Smalltalk.

                            2. 4

                              I think OOP is a highly generic tool that can be applied in the right places, and it’s very useful in those cases, but if you try to use it to solve every problem, it’s not always the best tool for the job.

                              1. 4

                                Because describing it as “bad” is popular, not correct.

                                If you want to have a framework that has default behavior, but can be extended or changed easily by a user, is there anything better than inheritance? It’s no surprise that most platforms are programmed through inheritance-based frameworks.
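
                                A small TypeScript sketch of that framework style, where a base class ships default behaviour and a user overrides only the piece they care about (the class and method names are hypothetical):

                                   // Framework-provided base class with sensible defaults.
                                   class RequestHandler {
                                     handle(path: string): string {
                                       this.log(path);
                                       return this.render(path);
                                     }

                                     // Default behaviour the user can keep or override.
                                     protected log(path: string): void {
                                       console.log(`handling ${path}`);
                                     }

                                     protected render(path: string): string {
                                       return `<h1>Not found: ${path}</h1>`;
                                     }
                                   }

                                   // User code: extend the defaults, change only one piece.
                                   class HelloHandler extends RequestHandler {
                                     protected render(path: string): string {
                                       return `<h1>Hello from ${path}</h1>`;
                                     }
                                   }

                                   console.log(new HelloHandler().handle("/hello"));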

                                On the other hand, modeling a problem domain through inheritance can be problematic, yet books, courses and programmers never get beyond scratching their heads at animals and shapes. Poor programmers, not a poor paradigm feature. Go figure!

                                1. 4

                                  There’s actually three ways that we mean inheritance:
                                  2. Abstract data type inheritance is about substitution: this thing behaves in all the ways that thing does and has this behaviour (this is the Liskov substitution principle)

                                  I’ve always assumed that this idea originated in abstract algebra, which developed during the 19th century and early 20th. For example, Monoid and Group are algebraic structures. Group “inherits” from Monoid in the sense of #2 above: every Group is also a Monoid. Early computer scientists were certainly aware of algebra.
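
                                  A rough TypeScript rendering of that algebraic idea, with Group extending Monoid purely at the interface level (the type names are just for illustration):

                                     // A monoid: an associative operation with an identity element.
                                     interface Monoid<T> {
                                       empty: T;
                                       combine(a: T, b: T): T;
                                     }

                                     // Every group is a monoid; it adds inverses.
                                     interface Group<T> extends Monoid<T> {
                                       inverse(a: T): T;
                                     }

                                     // Integers under addition form a group, hence also a monoid.
                                     const additiveIntegers: Group<number> = {
                                       empty: 0,
                                       combine: (a, b) => a + b,
                                       inverse: (a) => -a,
                                     };

                                     // Anything written against Monoid<T> accepts a Group<T>
                                     // unchanged: substitution in the Liskov sense.
                                     function combineAll<T>(m: Monoid<T>, xs: T[]): T {
                                       return xs.reduce(m.combine, m.empty);
                                     }

                                     console.log(combineAll(additiveIntegers, [1, 2, 3])); // 6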

                                  Version #2 of inheritance is a form of abstraction. It has nothing to do with modules or encapsulation. @hwayne speculates that this idea, which we call the Liskov substitution principle, came from Simula-67 via CLU. It’s certainly true that Liskov was looking for a way to encapsulate the implementation of a data type in a module, so that the only thing exposed by the module was the data type’s interface, a kind of algebraic structure. The Algebraic Specification idea was being developed at the same time Liskov was working, and they were focused on specifying algebraic properties in an abstract way, similar to standard mathematics, without Liskov’s focus on encapsulation and information hiding.

                                  Here’s a quote that might be relevant to the historical roots of all this:

                                  The seed was planted by Steve Zilles on October 3, 1973. During a programming language workshop organized by Barbara Liskov, he presented three simple equations relating operations on sets, and argued that anything that could reasonably be called a set would satisfy these axioms, and that anything that satisfied these axioms could reasonably be called a set.

                                  And that’s the Liskov substitution principle right there, lifted straight from undergraduate mathematics. Steve Zilles was working on Algebraic Specification at the time.

                                  In modern languages, this idea that a data type interface is an algebraic structure, and that different interfaces are related by “interface inheritance”, is clearly seen in Haskell type classes, where many standard type classes are copied directly from abstract algebra and category theory, e.g. monads. Rust traits are an imperative version of Haskell type classes.

                                  1. 1

                                    It looks like it was this workshop, possibly this session.

                                    1. 1

                                      Zilles is a co-author of the “Programming with Abstract Data Types” paper, so yep, probably makes sense here.

                                    2. 1

                                      My only “concern” about these laws is that they are only documented in human-readable form, AFAIK. I wouldn’t be surprised if Haskell already has something like this available, but I guess we could quite effectively fuzz-test typeclass implementations automatically, given a couple of laws, say, associativity, commutativity, etc.

                                      Contracts are a runtime version of this idea, I believe.
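
                                      A hand-rolled sketch of that law-checking idea in TypeScript; in practice a property-based testing library (QuickCheck in Haskell, fast-check in TypeScript) does this far more thoroughly:

                                         interface Monoid<T> {
                                           empty: T;
                                           combine(a: T, b: T): T;
                                         }

                                         // Check the associativity law on randomly generated inputs.
                                         function checkAssociativity<T>(m: Monoid<T>, gen: () => T, runs = 1000): boolean {
                                           for (let i = 0; i < runs; i++) {
                                             const [a, b, c] = [gen(), gen(), gen()];
                                             const left = m.combine(m.combine(a, b), c);
                                             const right = m.combine(a, m.combine(b, c));
                                             if (JSON.stringify(left) !== JSON.stringify(right)) {
                                               console.error("associativity violated for", a, b, c);
                                               return false;
                                             }
                                           }
                                           return true;
                                         }

                                         const concatStrings: Monoid<string> = { empty: "", combine: (a, b) => a + b };
                                         const randomString = () => Math.random().toString(36).slice(2, 7);

                                         console.log(checkAssociativity(concatStrings, randomString)); // true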

                                      1. 3
                                    3. 3

                                      Delphi used inheritance in its forms, and I never really had a problem with it. Lots of inheritance layers, where it becomes hard to follow all the properties, initializations, and side effects, may be bad. Systems using inheritance for customization may be bad, since you have to override a lot of the behavior. So it all depends on the complexity of the code.

                                      1. 7

                                        I think the reason OOP/inheritance was such a big deal was that it’s a good fit for GUI work. That’s not really the focus of software development any more.

                                        1. 6

                                          Frontend developers waving at you

                                          Although, arguably, modern frontend frameworks are also moving away from building widgets on inheritance to building widgets on composition (React functional components come to mind).

                                          I’m also not sure what you’re referring to as “the focus of software development.” I thought software was a whole load of different topics with no single well-defined focus?

                                          1. 2

                                            React is not a GUI, though. It’s more like a part of a GUI framework responsible for the proper traversal of state changes.

                                            The DOM, which is the GUI layer, uses some form of inheritance on the implementation side, as do most other GUIs; this is one area that is often said to be the best fit for inheritance.

                                            1. 1

                                              And much of the DOM’s interface part is built upon the underlying native toolkit which has even more inheritance.

                                          2. 6

                                            Snort. Software has no single focus, and GUIs are a huge domain that, for some reason, is just underrepresented on this site. (Not just web but native mobile/desktop apps, embedded / appliance / automotive interfaces, as well as a good fraction of video game dev.)

                                            1. 2

                                              it’s a good fit for GUI work.

                                              How?

                                              1. 5

                                                Look at the typical class hierarchy of a UI toolkit. Using AppKit as an example: NSButton provides button behaviour; it inherits from NSControl, which provides behaviours related to things that can be interacted with; that inherits from NSView, which handles things that can be visible; that inherits from NSResponder, which handles things that can have focus; and that inherits from NSObject. And of course, that can be subclassed if you have your own behaviour - but inheritance is useful because it gets you those behaviours for free.
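
                                                A toy TypeScript version of that layering, just to show how each subclass picks up the behaviour above it for free (the names loosely mirror AppKit’s, but this is not AppKit code):

                                                   class Responder {
                                                     // Things that can have focus and receive events.
                                                     becomeFirstResponder(): void { console.log("focused"); }
                                                   }

                                                   class View extends Responder {
                                                     // Things that can be visible on screen.
                                                     draw(): void { console.log("drawing"); }
                                                   }

                                                   class Control extends View {
                                                     // Things the user can interact with.
                                                     onAction?: () => void;
                                                     performAction(): void { this.onAction?.(); }
                                                   }

                                                   class Button extends Control {
                                                     // Only button-specific behaviour lives here; the rest comes for free.
                                                     click(): void {
                                                       this.becomeFirstResponder(); // from Responder
                                                       this.draw();                 // from View
                                                       this.performAction();        // from Control
                                                     }
                                                   }

                                                   const button = new Button();
                                                   button.onAction = () => console.log("pressed");
                                                   button.click();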

                                                Nowadays, prototype OO and things like React show there are other ways to model GUI code, but it did exist for a reason.

                                                1. 2

                                                  React is a very debatable point to make too, because it’s a way to describe the wiring of… the underlying DOM or native widgets tree, which is very much inheritance-based.

                                                  Also, prototypal OO is no better than class-based OO on that front; if anything, it’s worse.

                                                  1. 1

                                                    The trouble with that is you end up with controls with 4,000 methods on them (Java Swing).

                                            2. 3

                                              I think the only time I’ve used inheritance where it actually made sense (vs using it because the language really really wanted me to) was when implementing a text editor. It’s very natural to want to declare modes which can inherit from other modes; text mode inherits from the base edit mode, and different language modes will inherit from text modes, but shell/repl type modes will inherit from a line-centric mode instead. It’s a very useful model.
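
                                              That mode hierarchy can be sketched without classes at all, e.g. with plain prototype delegation in TypeScript (the mode names and fields are invented):

                                                 // The base editing mode: generic key handling.
                                                 const editMode = {
                                                   name: "edit",
                                                   handleKey(key: string): string { return `insert ${key}`; },
                                                 };

                                                 // Text mode delegates to edit mode and refines it.
                                                 const textMode = Object.assign(Object.create(editMode), {
                                                   name: "text",
                                                   wrapColumn: 80,
                                                 });

                                                 // A language mode delegates to text mode.
                                                 const pythonMode = Object.assign(Object.create(textMode), {
                                                   name: "python",
                                                   handleKey(key: string): string {
                                                     return key === "\t" ? "indent 4 spaces" : `insert ${key}`;
                                                   },
                                                 });

                                                 // Shell/REPL modes delegate to a line-centric mode instead.
                                                 const lineMode = Object.assign(Object.create(editMode), {
                                                   name: "line",
                                                   handleKey(key: string): string {
                                                     return key === "\n" ? "submit line" : `insert ${key}`;
                                                   },
                                                 });
                                                 const shellMode = Object.assign(Object.create(lineMode), { name: "shell" });

                                                 console.log(pythonMode.handleKey("\t")); // indent 4 spaces
                                                 console.log(shellMode.handleKey("\n")); // submit line
                                                 console.log(shellMode.wrapColumn);      // undefined: not on this branch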

                                              However, this has nothing to do with classes!

                                              I think the original sin of inheritance is actually the original sin of classes: they glom together a bunch of unrelated concepts (modularity, inheritance, polymorphism, etc) into a single concept, and (depending on the language) might force you to use that for everything. You almost always want modularity, but you can’t grab modularity without inheritance and polymorphism and friends coming along for the ride uninvited.

                                              1. 3

                                                SIMULA had a big influence on other object languages. […] And that’s the key point: inheritance came first.

                                                I don’t think this is true. Simula was an extension of ALGOL 60, which had first-class functions, which is the basic thing you need for interfaces. But Simula removed them.

                                                See my article on the matter: https://catern.com/inheritance.html which directly cites the creators of Simula:

                                                Simula solved this problem and others by banning passing many things as arguments, including passing functions as arguments. In a sense, they removed the support for first-class functions which existed in ALGOL 60, which Simula was an extension of. Predictably, this reduced the expressivity of the language:

                                                When writing simulation programs we had observed that processes often shared a number of common properties, both in data attributes and actions, but were structurally different in other respects so that they had to be described by separate declarations. […] Parametrization could not provide enough flexibility, especially since parameters called by name, including procedure parameters, had been banned for processes (for good reasons, see Section 2.3.3).

                                                – Section 3.1, The Development Of The Simula Languages

                                                1. 5

                                                  ALGOL 60, which had first-class functions

                                                  It did not!

                                                  I was reading a load of stuff about Christopher Strachey yesterday, including https://en.wikipedia.org/wiki/Fundamental_Concepts_in_Programming_Languages

                                                  In those lecture notes, he wrote:

                                                  3.5.1. First and second class objects. In ALGOL a real number may appear in an expression or be assigned to a variable, and either may appear as an actual parameter in a procedure call. A procedure, on the other hand, may only appear in another procedure call either as the operator (the most common case) or as one of the actual parameters. There are no other expressions involving procedures or whose results are procedures. Thus in a sense procedures in ALGOL are second class citizens—they always have to appear in person and can never be represented by a variable or expression (except in the case of a formal parameter), while we can write (in ALGOL still)

                                                     (if x > 1 then a else b) + 6
                                                  

                                                  when a and b are reals, we cannot correctly write

                                                     (if x > 1 then sin else cos)(x)
                                                  

                                                  nor can we write a type procedure (ALGOL’s nearest approach to a function) with a result which is itself a procedure.

                                                  (I think the first class / second class terminology is due to Strachey, probably predating these notes because it isn’t listed in the wikipedia article)
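
                                                  For contrast, Strachey’s second example is perfectly ordinary in any language with first-class functions; in TypeScript:

                                                     const x = 0.5;
                                                     // A procedure-valued expression, exactly what ALGOL 60 disallowed.
                                                     const result = (x > 1 ? Math.sin : Math.cos)(x);
                                                     console.log(result); // cos(0.5) ≈ 0.8776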

                                                  1. 1

                                                    Yes, that’s true, I am rounding ALGOL60’s procedure parameters to the nearest modern concept. Of course ALGOL60 famously made procedures second-class.

                                                    Anyway, Simula didn’t even have that, which is my point: Inheritance certainly did not come before passing functions as (abstracted and implementation-hiding, just like in modern languages) arguments.

                                                2. 3

                                                  Inheritance is perfectly good and reasonable, and used heavily in many large projects even when the language does not explicitly support it (lots of pure C projects end up constructing OO hierarchies and polymorphism just without actual language support).

                                                  The issue as always is over adherence to a single model and inappropriate application of one model to every problem.

                                                  My personal experience (from tutoring/TAing in uni, and some industry interaction before I went all in on my career of misery :D) is that Java’s ham-fisted, forced use of inheritance conditioned a whole generation of devs to overuse inheritance for everything (along with the associated “Gang of Four”/Design Patterns bandwagon), which means there’s a lot of code out there that people look at and go “gross! inheritance is bad/stupid!”, and a lot of that then results in people reflexively avoiding it even when it is appropriate.

                                                  Saying inheritance is bad is similar to saying type polymorphism/generics are bad, higher-order types are bad, etc. Like most things there are places it makes sense, and places it does not, and you can always make the wrong choice, or inherit code where someone made the wrong choice.

                                                  1. 2

                                                    I thought that Graham’s trilogy was interesting because functors, particularly inclusions, seem to behave the same way. For example, consider the functor “every Abelian group is a group” from Ab to Grp:

                                                    • Ontologically, an Abelian group is a group; it happens to satisfy one additional algebraic law.
                                                    • Typewise, the elements of an Abelian group are all also the elements of a group.
                                                    • Implementationwise, an object/actor/module which extensionally provides an Abelian group’s operators can be used in place of a group.

                                                    But the Liskov substitution principle breaks if we don’t have an inclusion, so this only applies to functors which don’t forget stuff, forgetting at most properties and structure; in other words, the functor must be faithful, i.e. injective on the arrows between any two given objects.

                                                    So perhaps the rigid inheritance schemes of classic OOP, up to around C++ and D, are workable if we restrict composition to monomorphisms: inclusions, constraints, products, etc.

                                                    1. 2

                                                      In 20+ years of coding I’ve had exactly one use-case for inheritance, and in that case (UI for an Emacs application) it was quite handy.

                                                      This leads me to believe that inheritance has a a small set of cases where it is useful, and how you look at it depends on how much you work with these cases.

                                                      1. 2

                                                        Re function subtyping contravariance: https://www.cs.cornell.edu/courses/cs4120/2022sp/notes/subtyping/ says that not only Eiffel but also TypeScript has made this mistake. I’m not 100% confident, but if I’m reading https://www.typescriptlang.org/docs/handbook/type-compatibility.html#function-parameter-bivariance correctly, then τ₁ → τ₂ ≤ τ₁’ → τ₂’ only if τ₁ ≤ τ₁’ or τ₁’ ≤ τ₁
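
                                                        A concrete TypeScript illustration of that bivariance hole (with --strictFunctionTypes, standalone function types are checked contravariantly, but parameters of members declared with method syntax are still compared bivariantly):

                                                           interface Animal { name: string }
                                                           interface Dog extends Animal { bark(): void }

                                                           interface AnimalHandler {
                                                             handle(a: Animal): void; // method syntax: parameters are bivariant
                                                           }

                                                           const dogHandler: AnimalHandler = {
                                                             // Accepted even though Dog is narrower than Animal,
                                                             // i.e. τ₁’ ≤ τ₁ is enough under parameter bivariance.
                                                             handle(d: Dog) {
                                                               d.bark();
                                                             },
                                                           };

                                                           // Type-checks, but a plain Animal has no bark(): throws at runtime.
                                                           dogHandler.handle({ name: "cat" });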

                                                        1. 2

                                                          One of the big differences between Facebook’s Flow and Microsoft’s TypeScript is that Flow’s type system aims for soundness but TypeScript’s is unsound. The classic reason for wanting a sound type system is to allow type erasure. But JavaScript doesn’t have type erasure, so TypeScript’s designers deliberately chose to be more lenient.

                                                        2. 1

                                                          There is a missing link in the footnotes:

                                                          https://data.earthli.com/news/attachments/entry/820/recast.pdf

                                                          Does anyone know of a copy? It’s a very short name to try to Google…

                                                            1. 2

                                                              D’oh! Thank you!

                                                              1. 2

                                                                Doesn’t the solution described in this article address the unsound type system criticism in the blog post?

                                                          1. 1

                                                            Thank you for the link to “Why inheritance never made any sense”. It looks very interesting and the idea of ontological inheritance is new to me and I think really fascinating. Can one thing “be” a subset of another thing? Or a superset? Are these the case by virtue of properties? Are the “sphere” properties of a football “football properties” or “sphere properties”? It’s also a bit of a conflation of abstract properties with nominal properties - I can not change a sphere, but I can change a football. In fact, footballs have changed quite a lot over the decades, but we still call them footballs and we’d call the old style footballs too. I look forward to reading more.

                                                            I also appreciated the history. To be honest, I’ve always just thought of modules as namespaces. I’ll have to look at Modula. Eiffel also sounds very interesting. Always nice to read a post that teaches me stuff - as someone with only about a decade of experience, many of these things predate me.

                                                            And it doesn’t explain why inheritance became so popular in the first place.

                                                            My suspicion is that inheritance was an excellent tool for modelling mostly static domains, or well understood domains. This works well in a waterfall system where you might spend entire months on just UML and design and planning before you write code. At that point, the issues of inheritance just aren’t going to come up - you’ve basically spent your time ensuring that they won’t. Many of the issues with inheritance will crop up as your domain starts to change in front of you, or if your domain is underspecified. A football is a sphere, except oh shit actually we need to support deflated footballs now and a customer is really upset about it, but now my football inherits from Sphere and derives most of its properties from the sphere.

                                                            In waterfall those surprises might happen years down the road, and software development used to just be way slower and more methodical like that. In an agile system you’re generally coding before you even have a model, and the model sort of falls out of the code. You respond to changes as they come up, you don’t try to predict them. So trying to build your system in a way that makes changes to the model complicated (because so much of your logic is driven by the properties and behaviors being shared) is going to be painful in this “new world”.

                                                            I think this is analogous to a lot of things. Tests written very early on in a product’s design often age horribly because the product goals change. Instead of simply deleting the tests, as I would advocate, people try to change them and it leads to serious test churn, poorly documented tests, etc. But tests aren’t quite as bad and we have more modern testing methodologies to try to handle this, just as we do with domain representation in our code.

                                                            1. 6

                                                              Can one thing “be” a subset of another thing? Or a superset? Are these the case by virtue of properties? Are the “sphere” properties of a football “football properties” or “sphere properties”? It’s also a bit of a conflation of abstract properties with nominal properties - I can not change a sphere, but I can change a football. In fact, footballs have changed quite a lot over the decades, but we still call them footballs and we’d call the old style footballs too. I look forward to reading more.

                                                              One paper that blew my mind was Lexical semantics and compositionality, which made me realize

                                                              1. I implicitly assume that adjectives form subtypes: “blue car” is a subtype of “car”.
                                                              2. This is not true: “former senator” is not a subtype of “senator”.
                                                              1. 1

                                                                Thanks :D This looks very relevant to my interests.

                                                                1. 2

                                                                  One of these days I’m gonna find and read through an Intro to Linguistics textbook. Or at least the parts that deal with grammar and semantics.

                                                              2. 2

                                                                To be honest, I’ve always just thought of modules as namespaces.

                                                                My understanding is that “modules” were how languages did “generics” without object-orientation.

                                                                In C, you might have a library that provides functions for creating, updating, and traversing a linked list, and if you want linked lists of different types, you have to copy/paste the library with small modifications, or lose type-safety and make it always a list of void * values.

                                                                In C++, you can make a LinkedList<T> class that abstracts over different values for T. The create/update/traverse functions all become methods, and because it’s a class, they all use the same value for T so they’re guaranteed compatible. However, if the library provides another class like Node<T>, is that T supposed to be the same as in LinkedList<T>, or is it more like std::vector<T> and std::set<T> where they’re unrelated?

                                                                A module in the classic sense is a “generic namespace” rather than a “generic class”. If you have a linked_list<T> module, it can have anything a namespace might contain—constants, functions, types, classes, and so forth—and T means the same thing for all of them. linked_list<T>::List and linked_list<T>::Node are always going to be compatible, because there’s no way for T to differ between them - they’re always part of the same module. And they’re always going to be independent from vector<T> because that’s a different module entirely, regardless of the value of T.
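
                                                                 TypeScript has no parameterized modules in that classic sense, but the idea can be approximated with a generic factory whose members all share the same T (a rough sketch, not a standard idiom):

                                                                    // A "module" over T: every member sees the same T.
                                                                    function makeLinkedList<T>() {
                                                                      class Node {
                                                                        constructor(public value: T, public next: Node | null = null) {}
                                                                      }

                                                                      class List {
                                                                        head: Node | null = null;
                                                                        prepend(value: T): void {
                                                                          this.head = new Node(value, this.head);
                                                                        }
                                                                        *values(): IterableIterator<T> {
                                                                          for (let n = this.head; n !== null; n = n.next) yield n.value;
                                                                        }
                                                                      }

                                                                      return { Node, List };
                                                                    }

                                                                    // Instantiating the "module" once fixes T for both List and Node.
                                                                    const IntList = makeLinkedList<number>();
                                                                    const xs = new IntList.List();
                                                                    xs.prepend(2);
                                                                    xs.prepend(1);
                                                                    console.log([...xs.values()]); // [1, 2]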

                                                                1. 2

                                                                   Footballs, actually, inherit from the prolate spheroid, actually, which actually inherits from spheroids, actually

                                                                  1. 2

                                                                    That’s true in the U.S., but in most countries footballs are spheres. I thought that sentence in the post was a bit ambiguous for this reason.

                                                                    1. 1

                                                                      Except footballs in Canada, or Australia, or Ireland, or even parts of New Zealand, all of which games have a common ancestor. The word football is ambiguous in English in general and is always relative to the most popular set of rules in a given region.

                                                                      …But, actually, it was a joke about that ambiguity.

                                                                  2. 2

                                                                    Can one thing “be” a subset of another thing? Or a superset?

                                                                    It’s probably more useful to talk about what things do rather than what they are since that keeps the discussion on functional and engineering considerations, and less likely to rabbit-hole into philosophy or ontology.

                                                                    1. 1

                                                                      Well, the point was to get into that rabbit hole and then see if philosophical conclusions could relate to or improve practical considerations. For fun, that’s all.