1. 17

Setting aside all of OOP’s baggage, the boilerplate and the corporate bureaucracy: what does OOP done right have to offer?

I’d like to give OOP a fair shake. I began coding already steeped in the anti-“Gang of Four”-pattern narrative (“patterns are just missing language features”, etc.). Go’s community also opposes this Java-flavored, late-90s style of OOP, offering a rather distinct set of primitives: no inheritance or classes, but closures and the like (some argue over whether Go is object-oriented at all).

I’ve mostly written in Lisps, then Go, Elixir, SQL, Factor, and APL, exploring every paradigm besides OOP. But Common Lisp’s CLOS, Racket’s GUI toolkit, and SICP’s demonstration that objects and closures are equivalent all suggest that modeling with objects can be the better fit in some cases (why else would Racket’s designers not offer a functional GUI?). Dabbling with Pharo didn’t help.

What domains or situations lend themselves to organizing code via objects? When is storing functions as methods (i.e., in object namespaces rather than, say, files) a better approach to polymorphism, and worth losing referential transparency? What are the pros and cons of OOP as done in Go vs. Rust vs. Java vs. Ruby vs. Object Pascal?


On page 5 of this PDF, Van Roy lays out a taxonomy of programming-language paradigms and the different modelling approaches they support: http://www.info.ucl.ac.be/~pvr/VanRoyChapter.pdf

OOP to me means only messaging, local retention and protection and hiding of state-process, and extreme late-binding of all things. – Alan Kay

How do these tools/concepts help us model our domains and problem spaces? When are they more suitable than other approaches?

  1.  

    1. 17

      TBH I don’t think you’ve defined OOP specifically enough for this question to have a meaningful answer. OOP “done right” will mean wildly different things to different people, to the point where I don’t actually believe it’s worth using the term to have meaningful conversations any more. It’s much less confusing to focus on specific aspects.

      You mentioned polymorphism, so that’s a great start. When are methods worth it? I wrote a big long post about different approaches to methods or method-like functions here: https://technomancy.us/197 Even just narrowing it to the question of methods, I’ve listed four different ways to approach it (plus avoiding methods altogether), and they all have trade-offs. In my own opinion, tying methods to classes accomplishes virtually nothing other than making Java or Java-like programmers more comfortable; beyond that, different approaches trade off reloadability, transparency, and encapsulation in ways that don’t generalize well across languages.

      You could ask the same question about message passing, which is what some people claim is “OOP done right”, and it’s a different discussion almost completely unrelated to the above.

      You could ask the same thing about encapsulation, but in that case the answer is going to be “it’s always worth it unless you plan to throw away the code next week”. You could ask the same thing about inheritance, but I would say it’s only useful when removed from the concept of classes altogether, where you can allow data to be inherited; for example, the way Emacs allows scheme-mode to inherit from programming-mode, which inherits from text-mode, with not a class in sight.

      1. 7

        Yeah, this is what I’ve run into. Critics of OOP will define it by the features and patterns that set “OOP languages” like Java and C# apart from other languages (e.g., inheritance, or Armstrong’s observed banana-gorilla-jungle “pattern”), while proponents will rarely define OOP at all, often preferring to define it by what it’s not (“it’s NOT about inheritance!”). When they are willing to offer an affirmative definition, it’s usually some pedestrian feature like “encapsulation” or “message passing” that exists prominently in nearly every paradigm. It’s also worth noting that proponents of OOP, including the authors of any number of textbooks about it in the 1990s and 2000s, agreed with critics that inheritance was largely the key identifying feature of OOP, although many of today’s OOP proponents will point out that Alan Kay invented the term and described it (very vaguely) as being about “message passing”.

        It seems like “OOP” is just a terrible term that means wildly different things to different people. My feeling was that despite Alan Kay’s original definition, we generally used to agree on its definition in the 1990s and 2000s, but that has since changed dramatically as we all came to agree that inheritance as the default reuse/abstraction mechanism was a bad idea.

        1. 9

          On top of all this, Alan Kay didn’t actually invent the term! The first attested use of it for programming languages was actually by Barbara Liskov: https://ieeexplore.ieee.org/document/1702384

          1. 4

            Here’s a conference abstract from earlier in 1976 where Jones and Liskov use the term https://dl.acm.org/doi/10.5555/800253.807680

            And here is an open-access paper with a similar title from 1978, though by then they were talking about strongly-typed languages instead of object-oriented languages: https://dl.acm.org/doi/10.1145/359488.359493

          2. 7

            I think Alan Kay’s “OOP is message passing” was a retcon he attempted to do in the late 90s, 20 years too late. Or he’s angry that we as an industry failed to read his mind back in the 70s.

          3. 4

            Much of the struggle is wading through many terminological conflicts. Alan Kay’s quotation about message passing is possibly the safest, if we care about the sign/symbol “object-oriented”, so I ended with it.

            But I don’t care about the label, rather the chosen concept set(s). If people give different arguments for different definitions, that would seem even more insightful. That’s what I was trying to get at with Van Roy’s taxonomy, or with the approaches of Go vs. Rust vs. Java, etc.: how can we better model a given domain by adding such-and-such features?

            But when the Meta Object Protocol lets you switch between implementations at will, how do you decide what kind of object you want?

            1. 5

              Fair enough! I think message passing and methods are certainly the most interesting concepts to unpack here; most of the other concepts are kind of one-dimensional. I don’t know Golang or Rust, but I’ve found in the languages I’ve used (Clojure, Scheme, and Lua/Fennel) what feels like an inherent tension between encapsulation and repl-friendly transparency.

              For example, hiding your data in a closure (cf. the old “closures are a poor man’s objects; objects are a poor man’s closures” adage) makes it nice and tidy; you can ensure it isn’t exposed to code that shouldn’t have access to it, but that also means hiding it from yourself in the REPL when you’re debugging, and that kinda sucks! I don’t know of any approaches that have managed to untangle that particular Gordian knot.
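              The adage is easy to make concrete in Python (used here purely as illustration; the names are made up). The counter’s state lives only in the closure, which is tidy but invisible to casual inspection:

```python
def make_counter():
    count = 0  # captured by the closure; no attribute exposes it

    def increment():
        nonlocal count
        count += 1
        return count

    return increment

counter = make_counter()
print(counter())  # 1
print(counter())  # 2
# There is no counter.count to poke at from the REPL, which is
# exactly the debugging downside described above.
```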

              1. 4

                but that also means hiding it from yourself in the repl when you’re debugging

                Both JavaScript and Julia let you access the captures of a closure, in the REPL or debugger. Is that an unusual feature?

                1. 2

                  Having it in a debugger, sure; that’s pretty common. But I have learned about 15 different languages, and this is the first I’ve heard of it working in a REPL! How does it work? Does it look like a data structure field access on the function, or what?

                  1. 4

                    In Julia, each closure is desugared into a (callable) struct. The fields of the struct are simply the names of variables captured, so you can just go f.x. So you can access it programmatically - not just as a special trick in the REPL.

                    (All this happens as a purely syntactic transformation in the first compiler pass, prior to any semantic analysis, called “lowering”, where other desugarings, like x += 1 into x = x + 1, also happen.)

                    (In Javascript I may have been mistaken and was thinking of the debugger).
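                    For what it’s worth, CPython exposes something similar, though less ergonomically than Julia’s f.x: a function’s captured cells are reachable (and even writable) through its __closure__ attribute. A small sketch:

```python
def make_adder(x):
    def add(y):
        return x + y
    return add

f = make_adder(10)
# Captured variable names live on the code object; values live in cells.
print(f.__code__.co_freevars)          # ('x',)
print(f.__closure__[0].cell_contents)  # 10
# Since Python 3.7 the cells are writable, too:
f.__closure__[0].cell_contents = 32
print(f(10))                           # 42
```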

                    1. 3

                      Fascinating; thanks. I’ve never heard of anything like this.

                      However, it is a bit concerning that it offers programmatic access as “closures for privacy” is a pretty important encapsulation technique in many languages, and this kind of … demolishes that concept. Like … that barrier could be annoying at times, but it’s often a load-bearing barrier.

                      I guess Julia must have to use alternate measures to enable information hiding?

                      1. 2

                        No, not really. It’s a language written for scientific code (simulation, optimization, etc); hiding things isn’t necessarily helpful in that domain.

                        Note Julia does have a reasonable split between mutable and immutable data, and closures are immutable so they are read only and strongly typed (the captures may have interior mutability though).

                        But yes you can easily hand-edit the internals of a hash map at the REPL and get it to crash. Doing so by accident is, thankfully, not so easy. You tend to interact with things outside your module through interface functions/methods, so encapsulation does tend to happen in practice.

                2. 3

                  There’s also the autonomous-actors approach, which I’ve seen described in glowing terms, like growing a program and having it just solve the problem for you. Alas, I’ve never been able to grok that zen (and can’t find such descriptions now). Once I thought I had it, and tried searching through the living classes/objects and otherwise organizing them through Prologian logic, but no dice. I’m not sure how OOP it is, but some do cool stuff along those lines:

                  we pick the features that make Erlang into an actor programming language, and on top of these we define the concept of a pengine – a programming abstraction in the form of a special kind of actor which closely mirrors the behaviour of a Prolog

                3. 3

                  Using Alan Kay’s definition as you have, object-oriented programming is not so much a set of language features as an architectural pattern that is more or less difficult to implement depending on the language/framework paradigm you’re working in. Your question, “What does it have to offer?”, is therefore almost impossible to answer. That’s not your fault, of course. Alan Kay’s definition is so loose, it could be used to describe languages (or at least ways of programming in languages) that don’t have classes at all. With a bit of squinting, The Elm Architecture (model-view-update, an excellent example of functional GUI programming architecture) could even be said to be an exemplar of object-oriented programming by Kay’s definition. But that flies in the face of what I assume people mean when they use this term.

                  Judging informally from decades of exposure to the ecosystems of Java, C#, JavaScript, and a bit of Python, what people really mean by OOP is tightly coupling data and logic with classes. These classes often inherit from other classes and, in a statically typed context, implement interfaces. “No, no!” some adherents will object, “composition over inheritance!” But then you look at the code people actually write and…

                  1. 3

                    Alan Kay’s definition is so loose, it could be used to describe languages (or at least ways of programming in languages) that don’t have classes at all.

                    I feel like that was sort of the point. Joe Armstrong called Erlang the only object-oriented programming language, and here’s Alan Kay saying that Armstrong may in fact have been correct in saying so. The definition isn’t loose; Kay simply narrowed in on what exactly he thought was the important part of OOP up to that point. Kay isn’t the king of what terms mean, so he can’t tell anyone what OOP must mean, but I think his definition is quite sensible.

                    1. 2

                      Alan Kay’s definition is so loose, it could be used to describe languages (or at least ways of programming in languages) that don’t have classes at all.

                      You appear to be confusing the C++ and Kay definitions of OOP and either not realizing these are two separate things, or not addressing them as such.

                      1. 3

                        I see the distinction and am saying most programmers neither know nor care about Alan Kay’s definition. I may personally find it a very sensible definition by which to organize and orchestrate a complex program, but what good is it if most people hear “OOP” and think classes?

                        1. 1

                          I see, so instead of dealing with the ambiguity head-on, you choose to deny it exists and absolutize the definition of your choice. I don’t think that’s a particularly helpful approach here.

                          For instance, a cursory search of OOP articles on lobsters will show you that this comes up a lot.

                          1. 1

                            I think you misunderstand. I acknowledge the ambiguity, and I actually share your preference for Alan Kay’s definition. The popular definition is not my choice. I am merely recognizing that the English language tends towards descriptiveness rather than prescriptiveness. That means that if most people mean classes when they say OOP (“data in the form of fields (often known as attributes or properties), and code in the form of procedures (often known as methods)”), then that is what it means, regardless of what you or I prefer.

                            1. 2

                              Rather than repeat my earlier criticism, I will simply point out that the most popular OOP language of all time is prototype-based, not class-based.

                4. 11

                  My personal “just so” story around OO is something of the following:

                  OO is a cultural phenomenon arising from the convergence of a bunch of good theoretical and practical ideas in program and language design with a group of people, particular technologies, projects, and businesses all communicating around those ideas.

                  If we broadly strip away the cultural aspects—it strikes me as often being what people are looking for when they ask about removing baggage—then you still have all these good technical ideas. They’re well-studied under a bunch of names both inside of and outside of the OO culture.

                  • State hiding, encapsulation, public/private interfaces are deeply popular ideas both as an organizational principle (even in C one might tend to put related functionality in a shared file and anticipate that some functions are meant not to be used publicly, possibly enforced by making them static and separating translation units) and as a semantic one (mathematicians have been using them since the 1920s with Skolemization).
                  • With encapsulation we need to consider how independent units communicate and “concrete message passing” synchronously or asynchronously is a pretty powerful idea, boiling down to (a) identification of public APIs and the properties/promises they uphold/make and (b) restriction of the kind of data that can be sent over those interfaces. This is a huge place to play with lots of great properties that work well in different domains.
                  • Late binding, the idea that fragments of code rely on unspecified meanings for their free variables and thus those free variables can be continuously overridden up until the point of execution, is interesting as a method of code organization. It is at least strongly challenged today by ideas like “composition over inheritance”.
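                  That last bullet is easy to demonstrate in Python (an illustrative sketch, not tied to any particular language in the thread): the meaning of self.greeting stays open until the call actually executes, so it can be rebound at any point before then.

```python
class Greeter:
    def greeting(self):
        return "hello"

    def greet(self, name):
        # self.greeting is looked up at call time, not definition time,
        # so its meaning can be overridden right up until execution.
        return f"{self.greeting()}, {name}"

g = Greeter()
print(g.greet("world"))                  # hello, world
Greeter.greeting = lambda self: "howdy"  # rebound long after definition
print(g.greet("world"))                  # howdy, world
```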

                  I also think one of the most important technologies from OO is the obj.<autocomplete> workflow, an organizational principle for “where” functionality is located that was really amenable to IDE support. There are other ways to do it, but I think OO achieved a lot of cultural success due to the magic of being able to explore a codebase by instantiating objects and “asking” them what messages they respond to.

                  These days, people invest a lot more in documentation and LSPs and all of that, but this was an early win for the discoverability of programming APIs, and I don’t think that can be overstated.

                  1. 3

                    It is at least strongly challenged today by ideas like “composition over inheritance”.

                    I don’t think composition over inheritance challenges late binding. E.g., Rust, which does not have inheritance, has so-called “trait objects”: fat pointers to both the concrete object and a vtable for the interface (the trait). This works very much like how late binding is often implemented in OO languages. Nothing stops one, in Rust or other languages, from composing late-bound objects.

                    1. 3

                      I think that’s a good point, though I want to split that hair a bit.

                      One purpose for this kind of technology is dynamic dispatch, the ability to interact with a public interface without knowing concretely what is implementing it. This enables late binding because it makes the implementer opaque, letting us defer that decision as long as possible.

                      Another purpose for late binding is to enable the layering of functionality a la specialization or inheritance. This specifically allows self to be abstract until final instantiation.
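                      In class-based terms that looks like the following Python sketch (hypothetical names), where the base method’s self stays abstract until a concrete subclass is instantiated:

```python
class Report:
    def render(self):
        # self.body is unresolved here; whichever subclass is finally
        # instantiated supplies the meaning.
        return f"== report ==\n{self.body()}"

class SalesReport(Report):
    def body(self):
        return "sales were fine"

print(SalesReport().render())  # == report ==
                               # sales were fine
```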

                      Despite the technology behind the two being similar, I want to distinguish them strongly. The former is a kind of “external” late binding which strengthens the power of public APIs and the latter is “internal” late binding which serves mostly as a tool for constructing complex behaviors modularly.

                      In my view, the latter is what’s being criticized by “composition over inheritance” and it’s also often what people are referring to when they discuss late binding in an OO context.

                      On the other hand, “external” late binding is very popular, well-regarded, and, in my own view, a critical tool for reasoning about systems. To that end, I’d also put generics under this header. While they don’t enable runtime late binding, they do let you reason about code without assumptions about the concrete implementation of a given API.

                      1. 1

                        Another purpose for late binding is to enable the layering of functionality a la specialization or inheritance. This specifically allows self to be abstract until final instantiation.

                        Just so we’re on the same page, do you mean stuff like methods on the base class calling methods on self that subclasses may override? Because in that case, you can do that with interfaces too, can’t you? If you’re referring to something else, please clarify.

                        In my view, the latter is what’s being criticized by “composition over inheritance” and it’s also often what people are referring to when they discuss late binding in an OO context.

                        Huh. Maybe we run in very different circles, but in my mind, “late binding” is mostly about message passing, that is to say, knowing the interface of the object you’re invoking without knowing the concrete implementation, external late binding.

                        1. 3

                          I agree that interfaces are very similar. Especially interfaces with default implementations.

                          Materially, there’s a big difference, I feel, between an interface specialization and an implementation specialization (i.e., class inheritance). I’m struggling to draw a bright line in theory, but in practice the encapsulation achieved with the former is a lot higher than with the latter.

                          When I think of (internal) late binding I think of open-recursive data types, specifically how a method finds its self-reference and at what point that recursion is “sealed”, becoming concrete. Late binding occurs when that self reference is unresolved until the last possible moment (instantiation or even the method call itself!).

                          That’s something of a different problem, I believe, than what you reference (though I appreciate the naming could apply to either). I could see “external” late binding as modeling even something like a HTTP request where you can swap out the particular server that responds or even the actual algorithm that is generating the responses.

                          Even with Rust traits when you have a specialization hierarchy you always know exactly what is meant by self. In that sense, the self-recursion is bound fairly early (definition time). The screwball is default implementations, but those aren’t as strong as OO-style “internal” late binding because they can only rely on the (public) interface of the trait itself or its super-traits.

                          1. 2

                            Great comment, thank you. I don’t have much to add.

                            Even with Rust traits when you have a specialization hierarchy you always know exactly what is meant by self. In that sense, the self-recursion is bound fairly early (definition time).

                            Technically speaking, you can do impl dyn Trait {} in Rust to create a default implementation that can’t possibly know anything about self until runtime. But that’s rather academic, and I think it doesn’t really help you do any of the stuff you talked about.

                            1. 2

                              Oh, haha, that’s a great point. I don’t see that too often, but it’s a good trick at times.

                              Genuinely, I think this “internal” late binding is cool, but niche. It’s easy for it to get confusing, even if the principles aren’t terribly difficult.

                              At a high level, I could simplify my whole position by saying “OO explored information hiding in big systems, and we learned even more strenuously that good kinds of information hiding are important”. I think internal late binding is a weird escape hatch from that realization, and I genuinely agree with most efforts that seek to minimize it.

                              And then, yeah, sometimes you need impl dyn Trait. Maybe some truly enlightened API thinking could show you how you could have avoided it, but also it’s nice that it’s there for you.

                              1. 1

                                I can’t actually think of any situations where you would need impl dyn Trait. Got any examples?

                                1. 2

                                  Ha, my immediate reaction was actually to impl Box<dyn Trait>, which is more immediate. I think I have seen impl dyn Trait though in docs.

                              2. 2

                                One more quick note about really cool uses of “internal” late binding. In Erlang there are two ways to call a function: by reference and by name, where “by name” calling will literally ask the runtime to return the thing stored at that name/module pair (or something more optimized; I’m thinking semantics, not implementation).

                                This is late binding to the extreme! And in a system as dynamic as ERTS it can be a massive, massive footgun.
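                                A rough Python analog of by-name calling (hedged: this ignores Erlang’s module versioning and everything else ERTS actually does): resolve the function through the module table on every call, so swapping the entry changes the behavior of callers that are already running.

```python
import sys
import types

# A throwaway module standing in for a loaded code unit.
handlers = types.ModuleType("handlers")
handlers.handle = lambda msg: f"v1 saw {msg}"
sys.modules["handlers"] = handlers

def dispatch(msg):
    # By-name call: look the function up at call time rather than
    # holding a direct reference to it.
    return getattr(sys.modules["handlers"], "handle")(msg)

print(dispatch("ping"))  # v1 saw ping
# Swap in a new implementation; existing callers pick it up.
sys.modules["handlers"].handle = lambda msg: f"v2 saw {msg}"
print(dispatch("ping"))  # v2 saw ping
```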

                                It’s also the mechanism that allows Erlang hot reloads. Which are stupidly impressive. Impressive in that you can literally achieve zero-downtime upgrades even with no redundancy. And stupid because we’re in an era where the dominant wisdom is you can only achieve consistent stability by killing-and-rebuilding at basically every opportunity.

                                I’ve never operated an ERTS system that used hot reloads, personally, so I don’t want to comment much on it. But I do think people like and swear by it.

                                1. 2

                                  It’s also the mechanism that allows Erlang hot reloads. Which are stupidly impressive. Impressive in that you can literally achieve zero-downtime upgrades even with no redundancy. And stupid because we’re in an era where the dominant wisdom is you can only achieve consistent stability by killing-and-rebuilding at basically every opportunity.

                                  I used to do this in Common Lisp, which can also do extreme late binding of names. For my cobbled together Discord bot, I would often add new functionality by just compiling in a few new functions and hot reloading the dispatch logic. One time I tried to completely re-architect it without downtime, but I made a mistake and the whole thing exploded. Though in development as opposed to production environments, it can be very nice even if it explodes sometimes. Completely redefining a game’s rendering loop while it runs is pretty fun.

                                  1. 2

                                    Oh yeah, in game dev hot reloading is so powerful.

                                    The ERTS hot reloads are wild, though, since you’re likely to build systems where your actors are maintaining long-term customer state that can’t be reconstructed. To that end, the OTP libraries have baked-in callbacks for logic to serialize and hand over live state. It suits the actor model well, since most live state will be housed in the reloadable actors, but it definitely is a scary system to read about.

                    2. 10

                      Van Roy & Haridi’s CTM book argues OO is a specialized paradigm that excels at modeling GUIs.

                      OO as in the actor model is also excellent at concurrent & distributed systems. Right now, Van Roy is pretty active in Erlang.

                      1. 7

                        The Newton folks claimed that GUIs needed two kinds of OOP. They argued that the class-based OO was good for models, where you would have a lot of the same kind of thing. For example, paragraphs of text in a word processor, people in an address book, and so on. In the traditional MVC model, they argued that the controllers exist because you really want a slightly different version of a view. For example, a one-off GUI widget for entering a phone number, which has to validate that the input is a particular format and then update a model, is annoying to implement as a custom subclass of a text view. They proposed using prototype-based OO for these, because prototype-based OO naturally encourages creating objects that behave almost like others.

                        I’ve yet to find any evidence that they were wrong. Even if you move to a model-view-update style GUI, I think their points still stand.
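                        Their controller point can be sketched in Python, with per-instance method overrides standing in for prototypes (types.MethodType here is just an illustrative stand-in; the Newton itself used the prototype-based NewtonScript):

```python
import types

class TextField:
    def validate(self, text):
        return True  # the generic view accepts anything

phone_field = TextField()
# Specialize just this one instance, rather than declaring a whole
# PhoneNumberTextField subclass for a one-off widget.
phone_field.validate = types.MethodType(
    lambda self, text: text.isdigit() and len(text) == 10, phone_field
)

print(TextField().validate("anything"))     # True
print(phone_field.validate("5551234567"))   # True
print(phone_field.validate("not a phone"))  # False
```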

                        1. 4

                          Some (possibly circumstantial) evidence against this theory: in Javascript, which is heavily used for building GUIs, and which has prototypal inheritance, most tooling is moving strongly away from OO-style code and towards more closure-oriented patterns. React hooks are the quintessential example here, but the same patterns now show up in Vue, SolidJS, Svelte, Preact etc. The only mainstream framework sticking to the more traditional OO-esque syntax is Angular.

                          Like I said, this isn’t definitive proof that OO doesn’t work for UI (and the fact that the DOM API is still defined in a largely OO manner with plenty of inheritance is a good counterexample). But I also find it significantly easier to compose behaviours and maintain codebases using the closure-style syntax (regardless of framework), and I think there’s a good reason why so much of the frontend world switched over to this style in a relatively short amount of time.

                          1. 3

                            Cocoa + ObjC is an interesting example, because ObjC pretends to be a class-based OOP language, but the ObjC runtime is much more flexible, and almost a prototype-based OO, with full reflection support.

                            Cocoa makes great use of that for attaching bindings to objects by replacing implementations of their setters and getters dynamically (per instance, not per class!), as well as translating events to calling methods by name on whatever object in the hierarchy happens to implement one.

                            This made it possible to have Interface Builder be a GUI for making GUIs, with drag’n’drop declarative connections between models and their views, while using the language runtime for it, and being compatible with programmatically-created objects too, without too much GUI-specific layer in between. That was a pretty neat way to build GUIs back when computers were single-threaded.

                            1. 1

                              but the ObjC runtime is much more flexible, and almost a prototype-based OO,

                              I’m not sure what you mean by this, but you can’t add methods or ivars to an individual object. Since OS X 10.7, the Apple runtime has a notion of ‘associated objects’, which are kind-of like adding ivars, though with a very different implementation.

                              In the GNUstep runtime, we have a slightly more prototype-like model. We half-borrow an idea from V8, which does a hidden class transform to use classes to implement a prototype-based model where each object has a class and objects with the same ivar layout and methods share the same class. In the runtime, we provide the building block for this in the form of hidden classes, which don’t show up when you use most of the reflection APIs. We use this to implement the locks for @synchronized and associated objects and have used it for a simple JavaScript-like language that uses the same methods.

                              None of this is really how you’d design a runtime for a prototype-based language though. It isn’t differential inheritance, there’s no way to clone an object as a copy-on-write view of another object. In Self, for example, you create a new object by cloning an existing one and then properties and methods that you assign to that object are unique to it, properties and methods that you don’t assign come from the prototype (or, rather, one of the prototypes: Self supported multiple prototype chains, which was all sorts of fun to implement efficiently).

                              Cocoa makes great use of that for attaching bindings to objects by replacing implementations of their setters and getters dynamically (per instance, not per class!),

                              I’m not sure about the most recent implementations. This is how we implemented KVO on WinObjC, by building on the GNUstep runtime’s hidden class support, but Apple’s implementation at least used to swap out the methods on the class and then do a set lookup to see whether it should trigger the hooks. They may have replaced this with one that added a new subclass for KVO, but that is visible in reflection.

                              I think 10.7 also introduced the imp_implementationWithBlock call, which let you generate a trampoline that wrapped a block as a method (moving the receiver over the _cmd argument and the block over the self argument, then calling the block’s invoke function). This is useful because it allows the block to own the set that contains weak references to objects that have the hook installed, rather than needing two lookups to find it.

                              This made it possible to have Interface Builder be a GUI for making GUIs, with drag’n’drop declarative connections between models and their views, while using the language runtime for it, and being compatible with programmatically-created objects too, without too much GUI-specific layer in between. That was a pretty neat way to build GUIs back when computers were single-threaded.

                              Interface Builder went all of the way back to NeXT and predated KVO by over a decade. I always felt it was a bad name because building interfaces was not really what it did. It creates serialised object graphs. Some of those objects are views.

                              Nothing it did was related to prototype-based OO though. It was purely a class-based tool.

                              1. 1

                                Thanks for the corrections.

                                My memory of this is hazy. I remember looking at ObjC runtime’s API and being surprised it allows a lot of stuff that the language syntax doesn’t hint at.

                                ObjC2 made .field call setters and getters, and with bindings that worked as an illusion of it being directly bound to the UI.

                                1. 1

                                  Yes, that was a mess. Objective-C had the philosophy that new semantics always came with new syntax. The property accessor notation broke this, and it was just syntactic sugar. a.foo and [a foo] were equivalent, but if you did a.foo += 2 that was equivalent to [a setFoo: [a foo] + 2] and the fact that it was two message sends was hidden. The same for the array access things.
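Python's property sugar makes the same hidden double dispatch easy to observe. A small illustrative sketch (Python rather than Objective-C, so attribute access stands in for message sends):

```python
class A:
    def __init__(self):
        self._foo = 0
        self.calls = []

    @property
    def foo(self):                 # the hidden "getter message"
        self.calls.append("get")
        return self._foo

    @foo.setter
    def foo(self, value):          # the hidden "setter message"
        self.calls.append("set")
        self._foo = value


a = A()
a.foo += 2        # reads like one operation...
print(a.calls)    # ['get', 'set'] -- ...but it's a get followed by a set
print(a.foo)      # 2
```

Exactly as with a.foo += 2 becoming [a setFoo: [a foo] + 2], the fact that two accessor calls happen is hidden behind one expression.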

                                  Bindings didn’t use that at all. They used key-value coding (KVC) and key-value observing (KVO). KVO was fairly complex to implement but KVC was quite simple. The -setValue:forKey: method in NSObject would look for a method with the name matching the key and call it if it existed, look for an ivar if you had opted into this, or call a fallback method that let you dynamically implement arbitrary keys (for example, in NSMutableDictionary, every key could be set as a dictionary key via this mechanism). This was a totally different mechanism that gave a consistent interface to things exposed via accessors, ivars, or other alternative storage. The key-path variants just split the string using dot as a separator and did the same lookup on each component in the chain.
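A very loose Python analogue of the KVC lookup order described above (the set_ prefix, method names, and Person class are my invention for illustration, not Cocoa's API; the real thing also has a catch-all fallback for undefined keys, omitted here):

```python
def set_value_for_key(obj, key, value):
    """Loose analogue of -setValue:forKey:: prefer an accessor method,
    otherwise fall back to storing the value directly ("ivar" access)."""
    setter = getattr(obj, "set_" + key, None)
    if callable(setter):
        setter(value)
    else:
        setattr(obj, key, value)

def value_for_key_path(obj, path):
    """Key paths just split on '.' and repeat the same per-key lookup."""
    for key in path.split("."):
        attr = getattr(obj, key)
        obj = attr() if callable(attr) else attr
    return obj


class Person:
    def __init__(self, name):
        self._name = name
        self.pet = None            # plain attribute, no accessor
    def set_name(self, value):     # accessor wins over direct storage
        self._name = value.title()
    def name(self):
        return self._name

p = Person("alice")
p.pet = Person("Rex")
set_value_for_key(p, "name", "bob")         # goes through the accessor
print(value_for_key_path(p, "name"))        # Bob
print(value_for_key_path(p, "pet.name"))    # Rex
```

The point being illustrated: one string-keyed interface uniformly covers things stored behind accessors and things stored as plain attributes.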

                                  Cocoa Bindings were a set of generic controller objects that were configured by transforming delegate methods into accesses to key paths. They were never implemented in UIKit because, while they were very nice when they worked, they were incredibly painful to debug. If you got a key path wrong, there was nothing to stick a breakpoint on to find out what was going wrong, you just got views not updating. It was usually less effort to write the delegate and data source yourself than debug the one using bindings.

                                  Interface Builder didn’t rely on bindings, it just connected ivars with IBOutlet in their declaration and methods with IBAction together. The nib files contained the instructions for these and the nib loader used the runtime’s APIs to set the ivars when it created the object graph.
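The nib-loading idea can be sketched loosely in Python (the description format and all names here are invented for illustration): instantiate the serialised object graph first, then wire the named connections with plain runtime attribute access, no binding machinery involved.

```python
class Button:
    pass

class Controller:
    view = None   # "IBOutlet": just an attribute the loader fills in

# A serialised-object-graph description, nib-style.
nib = {
    "controller": {"class": "Controller", "outlets": {"view": "button"}},
    "button":     {"class": "Button"},
}

def load_nib(description, registry):
    # Create every object, then set outlets via setattr -- the analogue
    # of the nib loader using the runtime's APIs to set ivars.
    objects = {name: registry[spec["class"]]()
               for name, spec in description.items()}
    for name, spec in description.items():
        for outlet, target in spec.get("outlets", {}).items():
            setattr(objects[name], outlet, objects[target])
    return objects

graph = load_nib(nib, {"Controller": Controller, "Button": Button})
print(graph["controller"].view is graph["button"])   # True
```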

                            2. 2

                              I’ve long felt the same, and this is why I have experimented with JavaScript in widget systems so much, and why in D I sometimes use virtual methods and delegate properties (a.k.a. function pointers) a lot to simulate that prototype inheritance.

                              Glad to hear I’m not the only one. It’d always feel silly to go through the whole subclass dance when I just want something ever so slightly different for this one specific case.

                          2. 7

                            Throwing out all of OOP’s baggage, without the boiler plate and corporate bureaucratizing, done right, what does it have to offer?

                            An introduction like that indicates that you’re approaching the topic with a number of preconceptions that are more language- and library-specific than conceptual. OOP has been hugely successful, and the result is that the targets of things to dislike are extremely well known. As Bjarne said, “There are only two kinds of languages: the ones people complain about and the ones nobody uses.” But that doesn’t mean that the complaints are invalid – just the opposite!

                            What domains or situations lend themselves to organizing code via objects?

                            Herein lies the problem with the general conversation on this topic: You’re looking for some magic “gotcha” or some brilliant epiphany, but there is none. Not because OOP isn’t good at some things, but because it’s generally good at everything. Modern languages tend to be plotted in the first quadrant of a 2d plane with the two axes being some measure of FP and OOP; in other words, pretty much every language that has emerged in the past 35 years has some aspects that are functionally focused and some aspects that are structurally/class/prototype focused. Very few languages are “pure” FP or OOP, and those that are “pure” tend to be academic in nature, i.e. Bjarne’s “the ones nobody uses”.

                            I’ve mostly written in Lisps, then Go, Elixir, SQL, Factor and APL, exploring all paradigms besides OOP.

                            You need to read “For the Sake of a Single Poem”, from “The Notebooks of Malte Laurids Brigge”, by Rainer Maria Rilke. It probably takes 3-7 years of using a language “in anger” (e.g. 60-80 hours a week, in production, with outages that you’re responsible for) to learn a language at any level of real understanding. I’ve been coding more than full time for the last 45 years, and I’ve only learned a handful of languages that well: various BASICs, C/C++, SQL, Java/C#, and Ecstasy. I have programmed for fun and profit in 50+ languages, but it’s hard to really know and understand a language that you only do a few projects in, or that you only use sporadically. And trying to “learn” and “understand” OOP isn’t going to happen by dabbling in it.

                            As Rilke said:

                            … Ah, poems amount to so little when you write them too early in your life. You ought to wait and gather sense and sweetness for a whole lifetime, and a long one if possible, and then, at the very end, you might perhaps be able to write ten good lines. For poems are not, as people think, simply emotions (one has emotions early enough) – they are experiences. For the sake of a single poem, you must see many cities, many people and Things, you must understand animals, must feel how birds fly, and know the gesture which small flowers make when they open in the morning. You must be able to think back to streets in unknown neighborhoods, to unexpected encounters, and to partings you had long seen coming; to days of childhood whose mystery is still unexplained, to parents whom you had to hurt when they brought in a joy and you didn’t pick it up (it was a joy meant for somebody else –); to childhood illnesses that began so strangely with so many profound and difficult transformations, to days in quiet, restrained rooms and to mornings by the sea, to the sea itself, to seas, to nights of travel that rushed along high overhead and went flying with all the stars, – and it is still not enough to be able to think of all that. You must have memories of many nights of love, each one different from all the others, memories of women screaming in labor, and of light, pale, sleeping girls who have just given birth and are closing again. But you must also have been beside the dying, must have sat beside the dead in the room with the open window and the scattered noises. And it is not yet enough to have memories. You must be able to forget them when they are many, and you must have the immense patience to wait until they return. For the memories themselves are not important. Only when they have changed into our very blood, into glance and gesture, and are nameless, no longer to be distinguished from ourselves – only then can it happen that in some very rare hour the first word of a poem arises in their midst and goes forth from them.

                            That’s a lot of words to say: Deep understanding can take a great deal of time, and often comes from periods of extended effort and/or deep pain.

                            And the thing is: If a developer truly knows a language well, they can solve basically any problem in that language. (SQL somewhat excluded, although it’s far more powerful than most people – even most DBAs – realize.) That doesn’t necessarily make any particular language ideal for anything, but it does make that language ideal for that programmer who knows it well.

                            Object orientation is simply a means for organizing thought and code. Fred Brooks Jr. put it best:

                            Finally, there is the delight of working in such a tractable medium. The programmer, like the poet, works only slightly removed from pure thought-stuff. He builds his castles in the air, from air, creating by exertion of the imagination. Few media of creation are so flexible, so easy to polish and rework, so readily capable of realizing grand conceptual structures.

                            Yet the program construct, unlike the poet’s words, is real in the sense that it moves and works, producing visible outputs separate from the construct itself. It prints results, draws pictures, produces sounds, moves arms. The magic of myth and legend has come true in our time. One types the correct incantation on a keyboard, and a display screen comes to life, showing things that never were nor could be.

                            Object orientation (“OOP”) is simply a means for designing and building those castles. Like any approach to engineering, it is fundamentally based on trade-offs, which means that it will prove to be more or less ideal for particular situations. But like FP, OOP exists on the plane of data structures (objects) and algorithms (functions), which means that no particular programming construct is out of reach in most modern OO languages.

                            OOP pushes design to be data centric, or “structurally centric”. Design of OOP systems is not dissimilar from ER design for SQL databases, for example; if you’ve ever used erwin software (I know, I’m dating myself here) to build large systems (e.g. 10000+ table databases), you’ll notice a lot of eerily similar issues and approaches to design and organization that appear in the process. Where OOP – as expressed in most modern OO languages – tends to be dramatically better is in design hiding, i.e. the ability to compartmentalize the design, perhaps by module, or interface, or enclosure, or via name-spacing, etc. And this is where good languages shine: The ability to enable the design of abstractions, while also enabling intricacies, and aggregates of intricacies, and aggregates of aggregates of intricacies, to be progressively hidden, thus freeing up large portions of the mind of the developer, enabling them to create both larger and (in theory) more beautiful castles.

                            The reason that this is so fundamentally important is that most humans have limited ability to hold state in their heads. When modeling one of these castles in one’s mind, it’s nice to be able to zoom in to a single piece of stone, or zoom out to the entire estate that contains the castle, or any degree in-between. This, to me, is the main benefit of most modern OO languages (i.e. not including Smalltalk or C++) over the procedural and structural forms that came before them.

                            And as with any engineering endeavor, it comes down to trade-offs. While it’s easy to criticize and hate on existing large “enterprisey” systems built in Java using Spring and Hibernate and 140 other libraries built with Maven / Gradle tested with JUnit / Mockito and so on, the alternatives tend to be … similarly complex. And the thing to appreciate (even with a grimace) is that these large complex systems exist at all – they actually made it to production, despite being assembled by throngs of mediocre programmers without sufficient design or proper organization. And these mostly-working, deployed systems have real world value to the companies that they serve, even when the systems evolved from something small and simple into hated beasts, large and complex.

                            What domains or situations lend themselves to organizing code via objects?

                            OO design and modeling is well suited for most domains and problems that I have encountered in business. There are times in small systems when OO is simply overkill, and a simple scripting approach (BASIC, Python, whatever) just seems much more appropriate.

                            When is storing functions as methods (i.e. in object namespaces instead of e.g. files) a better approach (to polymorphism?) (worth losing referential transparency)?

                            Obligatory must-read: Fundamental concepts in programming languages, by Christopher Strachey

                            The way that you ask the question indicates a basic lack of understanding. One does not “store functions as methods in object namespaces instead of files”. One does not even “store functions as methods”. The concept of a “method” is simply a model of behavior that is related to a defined unit of information, which is hardly an unusual concept with or without OOP.

                            What are the pros and cons of Go OOP vs. Rust vs. Java vs. Ruby vs. ObjectPascal OOP?

                            This is nearly unanswerable. You seek to take a complex topic, and request that it be boiled down to an Amazon review format. To begin to answer this, you would need someone who has used some pair of those languages, in anger, for probably a decade or more. Then they could begin to explain how certain problems are more easily solved in one versus another, even though the same problems can be more than adequately solved in either.

                            For example, there is a fundamental difference in approach between prototype-based OO languages and class-based OO languages, but I (not having actively coded in anger in a prototype-based OO language for a continuous decade) am unqualified to answer even the “pros and cons” question for this comparison.

                            What I will tell you is this: I would take a brilliantly good and experienced Ruby programmer to build a solution in Ruby over a mediocre Rust programmer to build a solution in Rust, or a brilliantly good and experienced Rust programmer to build a solution in Rust over a mediocre Ruby programmer, because in either case the result will be well designed, working, and maintainable. (Substitute any two language names with critical market mass and reasonable future life expectancy.)

                            Languages are simply a tool, and the “OO” aspect is actually relatively minor in the scheme of things. More important than OO – over the past few decades and in my own experience – has been (i) a language’s approach to concurrency and (ii) a language’s approach to memory management. In this way, I’d suggest that a GC-based FP language and a GC-based OO language are likely to have much more in common than a GC-based OO language and a non-GC-based OO language, for example.

                            1. 7

                              Since you’ve used Elixir - it’s interesting to consider that an Erlang process, particularly a genserver, is essentially an object under Kay’s definition. Encapsulated state, message passing, even late binding in the sense that you “run a function on a process” by, conventionally, sending it a message that then triggers the process to run some code (Genserver style).

                              Check out Joe Armstrong’s answer in the transcript of this interview (scroll down to “is Erlang object oriented?”), it’s fascinating! The creator of the language himself moved his opinion from “Erlang is functional and not OOP” to “Erlang might definitely be OOP?”

                              https://www.infoq.com/interviews/johnson-armstrong-oop/

                              So one question would be - when did you use Erlang processes in Elixir? Because whatever reasons justified it then were arguably reasons you found to deploy OOP-style design to your problem.

                              I’m trying to reflect on that question myself after reading your post, so just thinking out loud, but I think I’ve used them generally to achieve, in a really abstract sense, some concept of containment/isolation?

                              Containing logic and state, containing failure, containing synchronous code (process message queues are processed synchronously).

                              Logic is probably the least “necessary” reason to employ a process, you could use a module with functions and some struct instead and treat the struct as opaque. A classic example is elixir’s MapSet - you never mess with the data structure directly, it’s really just a map, you only interact via the MapSet functions. But then, in a way, MapSet is sort of an object?

                              Anyway, I realize none of this answers your question, I more just wanted to share how using Elixir daily (given that you have too) has caused me to reflect on OOP myself because here I am in an ostensibly “non-oop” language and yet I’m thinking about similar principles as I design software. It’s fun!
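For what it's worth, the GenServer-as-object parallel can be sketched in Python with a thread and a queue (a rough analogy, not Elixir semantics; all names here are invented): the state is private to one loop, the only way to touch it is to send a message, and messages are processed one at a time.

```python
import queue
import threading

class Counter:
    """GenServer-ish: encapsulated state, touched only by the single
    loop thread, which drains the mailbox one message at a time."""

    def __init__(self):
        self._mailbox = queue.Queue()
        self._count = 0                       # encapsulated state
        threading.Thread(target=self._loop, daemon=True).start()

    def _loop(self):
        while True:
            msg, reply = self._mailbox.get()  # synchronous mailbox drain
            if msg == "increment":            # handle_cast-style: no reply
                self._count += 1
            elif msg == "get":                # handle_call-style: reply
                reply.put(self._count)

    def cast(self, msg):
        self._mailbox.put((msg, None))

    def call(self, msg):
        reply = queue.Queue(maxsize=1)
        self._mailbox.put((msg, reply))
        return reply.get()                    # block until the loop replies


c = Counter()
c.cast("increment")
c.cast("increment")
print(c.call("get"))   # 2 -- FIFO mailbox guarantees both casts ran first
```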

                              1. 5

                                If the diamond problem never ever happens in a completely stable problem domain, then perhaps inheritance is the right idea? I think a lot of the best work in OOP came in GUI hierarchy programming, where the diamond issue can be kept at bay. https://en.wikipedia.org/wiki/Object-oriented_user_interface <– not 1:1 with OOP but they co-evolved imo.

                                1. 4

                                  Traffic simulation, so that a vehicle like a Car has different behavior for “stop_at_red” than a motorcycle.

                                  The first use of OOP was in a Simula simulation of traffic. That’s why the less informed tutorials always tried to make a Car class with four wheels. OOP models real-world problems where we want to think of “the set of all vehicles on the road” but have different behaviors.

                                  Outside of physical objects, interfaces generally work better. That said, most languages do not have a construct to pull all the public methods of a Vehicle interface into my interface, erroring on conflicts. Without that, we end up with “my_car.vehicle.stop_at_red()”.
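The stop_at_red dispatch described above, as a minimal sketch (class and method names taken from the comment, the behaviors invented): one message, one set of vehicles, per-type behavior.

```python
from abc import ABC, abstractmethod

class Vehicle(ABC):
    @abstractmethod
    def stop_at_red(self) -> str: ...

class Car(Vehicle):
    def stop_at_red(self):
        return "brake to a full stop behind the line"

class Motorcycle(Vehicle):
    def stop_at_red(self):
        return "stop, then filter to the front of the queue"

# "the set of all vehicles on the road": same message, different behavior
road = [Car(), Motorcycle()]
for v in road:
    print(v.stop_at_red())
```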

                                  1. 8

                                    I think an interesting contrasting vision to that is the rise and partial success of entity-component systems in games. It’s primarily marketed as a method to achieve a better performing architecture, but it’s also a different vision of how to design behaviors across a set of agents. In that sense, it’s distinctly non-OO.
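A minimal illustration of the contrast (all names invented for the sketch): in an entity-component system, entities are just ids, components are plain data, and behavior lives in systems that iterate over whichever entities carry the right components, with no inheritance hierarchy deciding who does what.

```python
# Components: plain data keyed by entity id, no classes per "kind of thing".
positions = {}   # entity id -> (x, y)
velocities = {}  # entity id -> (dx, dy)

def spawn(eid, pos, vel=None):
    positions[eid] = pos
    if vel is not None:
        velocities[eid] = vel

def movement_system(dt):
    # Only entities that have BOTH components participate -- composition
    # of data, not a type hierarchy, determines behavior.
    for eid, (dx, dy) in velocities.items():
        x, y = positions[eid]
        positions[eid] = (x + dx * dt, y + dy * dt)

spawn(1, (0.0, 0.0), (1.0, 0.0))  # moving entity
spawn(2, (5.0, 5.0))              # static scenery: position only
movement_system(dt=2.0)
print(positions[1])   # (2.0, 0.0)
print(positions[2])   # (5.0, 5.0)
```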

                                    1. 1

                                      but then you have the Astra portable bridge that can switch between being a vehicle and a bridge, and you need to switch to the strategy pattern.

                                    2. 4

                                      It’s been a while since I last looked into the history of OOP, but I remember that early object languages (SIMULA, CLU, Smalltalk) pushed the idea of abstract data types really really hard, more so than other languages at the time (except maybe ML?). You could make the (not incoherent) argument that most modern languages have some form of traits/interfaces/typeclasses because of the OOP influence, just as how many modern languages have some form of first-class function and pattern matching because of FP influence. Not a lot of new PLT research is being done in pure object languages anymore, though.

                                      One more exotic place where I think OOP is useful: in systems where you are getting objects from “somewhere else”. Powershell, for instance, can read an object definition from an external source and add it to your running shell. Does nushell do something similar?

                                      EDIT: if we’re talking “when is inheritance useful”, I think the main place is when both the parent type and the child type are concrete objects you need to pass around and typecheck. One good example of this is in documentation generators like Sphinx: a LinkElement is also a TextElement, and you need both types of nodes in a real document!
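A small sketch of that point, using the element names from the comment (hypothetical classes, not Sphinx's actual API): both types are concrete, both appear in the same document, and one traversal typechecks between them.

```python
class TextElement:
    """Concrete parent: a plain run of text."""
    def __init__(self, text):
        self.text = text

class LinkElement(TextElement):
    """Concrete child: still text, but carries a link target too."""
    def __init__(self, text, href):
        super().__init__(text)
        self.href = href

def render(node: TextElement) -> str:
    # One walk handles both; the runtime check picks out the richer node.
    if isinstance(node, LinkElement):
        return f'<a href="{node.href}">{node.text}</a>'
    return node.text

doc = [TextElement("see "), LinkElement("the docs", "https://example.com")]
print("".join(render(n) for n in doc))
```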

                                      1. 1

                                        I remember that early object languages (SIMULA, CLU, Smalltalk) pushed the idea of abstract data types really really hard

                                        I don’t think CLU is usually considered an OO language: it lacks inheritance, and its polymorphism is static rather than dynamic. CLU was the language that popularized abstract data types.

                                        As far as I know neither Simula nor Smalltalk do implementation hiding like CLU; types are not abstract because you can do a dynamic check on a value of a base class that reveals the concrete derived class. (Well, that makes more sense for Simula which is statically typed, but ykwim.)

                                      2. 3

                                        To me all the good stuff in OOP (and more) is captured in the actor model. I’m still not convinced that’s a good general-purpose paradigm though.

                                        1. 1

                                          not convinced

                                          Why not? It’s on the top of my list to research in light of the 1985 paper etc.: https://lobste.rs/s/zi7kdx

                                          1. 1

                                            A general-purpose paradigm would need to be useful outside the context of distributed systems I guess.

                                        2. 3

                                          When you’re not modeling an algorithm, but phenomena and concepts from a problem domain. Particularly those domains outside of the computing and data sciences, and computing infrastructure. Consider that the M in MVC was originally an object model.

                                          It is also ideal for modeling Abstract Data Types.

                                          It also remains the best way to model a framework that has default behavior that can be overridden. It was not originally invented just to organize code or act as a dumb data bucket modified by external code.

                                          To really get to grips with its original modeling concepts, investigate the original inventors (not Alan Kay).

                                          Frankly, as far as domain modeling is concerned, such concepts are so uncool, misrepresented and unknown in the current world, where there is generally nothing else but procedural code (organised as subroutines) acting on external data, that I’m not sure you should bother. Plus ça change, plus c’est la même chose.

                                          1. 3

                                            OO excels at UI. DOM model, CSS classes, all modelled using OO and subtyping.

                                            1. 2

                                              I always say that web servers are OOP — big objects that encapsulate their state and interact with the outside world via message passing. OOP people tend to not like this.

                                              The thing you’re going to struggle with is that you can’t even get agreement on what the tools and concepts of OOP are, exactly. IMO real OOP is the naive set of promises that the early versions of Java made, and everything since then is an attempt to retcon after we worked out it was flawed.

                                              1. [Comment removed by author]

                                                1. 2

                                                  I learned programming through OOP, so it holds a special place in my heart. It’s a very natural model, if you care to stick with it and let it click. It gives you a framework for thinking about solutions in. You shape your loose unstructured thoughts into classes and interfaces that make logical sense, that are easy to test, etc. It’s the same as any other framework, in that you think in it, in that you can apply it well or poorly.

                                                  What domains or situations lend themselves to organizing code via objects?

                                                  Honestly I don’t think about it that way. I think the Ruby programmer and the Go programmer and the Java programmer, all of equal skill and proficiency in their respective languages, will model something out that looks and works more or less the same. Talking about solving a REAL problem here— so connections to databases, handling http interactions, etc. Whether or not it’s an OOP solution, the state has to go somewhere, things will communicate with other things to do work, there will be boilerplate for managing task execution, handling errors, so on and so forth.

                                                  OOP must be regarded as ancient and bad now I guess, and whatever I guess that’s fair, but I can guarantee you bad code will continue to be written, regardless of paradigm or language. My point is to just use whatever framework of thought the language is best suited for, it’s very likely what the whole language was built around— what justified creating an N+1th language in the first place.

                                                  1. 1

                                                    I wouldn’t dare to do any complex UI without Qt-like OOP