1. 33
  1. 18

    It was I! I killed OOP! I tell you, I did it! (dramatic music plays in the background)

    In all seriousness, I love all things OO. I have since I first truly understood the concept. One of my happiest and most memorable moments of my early career was when the OOP light finally “came on” and I really grokked what a super-amazing concept I had stumbled into.

    My problem is that OOA/D/P is best understood as a way of thinking about solving technical problems, not a bunch of hoops a language or codebase has to jump through in order to be “pure” OOP. We confuse the shape of the tool with the nature of the work. It’s as if we started calling roof repair a kind of “hammer work”. (OOP in this case is the hammer, so the example is reversed, but you should get the general idea.)

    When I truly learned FP, I fell in love with it as well, not because it was some super-cool new tech. I loved it (as much as OOP) because it, too, was a completely new way of thinking about solving problems. Neither of these ways of thinking necessarily involves computer languages or even programming; those things are downstream. OOP is Platonic and FP is algebraic. Those are the two deep wells these ideas come from, and that’s where you should go if you really want to continue your professional journey.

    OOP will continue to be born and die as new generations of coders grow up professionally and begin to understand how to separate problem-solving from coding. Those that don’t will thrash around quite a bit over the course of their careers. Those that do will move on, and all of this will seem kinda silly (and repetitious).

    ADD: Just to be clear, I read the essay and thought it was good. I’m not just ranting at random about FP and OOP. I think there’s a much larger case to be made about how these problem-solving paradigms can work in coding, but the framing of the problem here prevents that discussion. (Also, it’s a huge discussion and I seriously doubt you could have it in a lobsters thread. For now my goal was to both praise the information in the essay and rephrase the problem.)

    1. 3

      IMO the separation of code and data doesn’t make a design any less OO; the end result is a data object and a processor object (not unlike MVC).

      I do agree that as OOP languages evolve they become more functional. Good OO code is often very functional, because functional design imposes limits such as reducing side effects, which makes a program easier to reason about. That doesn’t mean the code is any less OO, unless OO is a synonym for “bad”; rather, as OO programs and languages evolve, they fall toward their functional destiny.

      In the end, we will end up with a plethora of languages where we can no longer tell whether the language is intended to be OO or functional. You can come at it from the functional side too, and end up building a better CLOS. Would a functional language that acquires OO traits be any less OO than an OO language that acquires functional traits? It is my belief that both are heading toward a shared destiny.
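
      To make the “data object plus processor object” split concrete, here is a minimal Java sketch of that shape (Invoice and InvoiceTax are purely illustrative names, not anything from the article):

      ```java
      // A plain data object: immutable, no behaviour beyond holding values.
      record Invoice(String customer, double amount) {}

      // A "processor" object: a stateless, side-effect-free transformation of the data.
      class InvoiceTax {
          double totalWithTax(Invoice invoice, double rate) {
              return invoice.amount() * (1.0 + rate);
          }
      }
      ```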

      1. 13

        the separation of code and data doesn’t make a design any less OO

        🤔

        Anton van Straaten:

        The venerable master Qc Na was walking with his student, Anton. Hoping to prompt the master into a discussion, Anton said “Master, I have heard that objects are a very good thing - is this true?” Qc Na looked pityingly at his student and replied, “Foolish pupil - objects are merely a poor man’s closures.”

        Chastised, Anton took his leave from his master and returned to his cell, intent on studying closures. He carefully read the entire “Lambda: The Ultimate…” series of papers and its cousins, and implemented a small Scheme interpreter with a closure-based object system. He learned much, and looked forward to informing his master of his progress.

        On his next walk with Qc Na, Anton attempted to impress his master by saying “Master, I have diligently studied the matter, and now understand that objects are truly a poor man’s closures.” Qc Na responded by hitting Anton with his stick, saying “When will you learn? Closures are a poor man’s object.” At that moment, Anton became enlightened.
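
        For anyone who hasn’t seen the construction the koan alludes to, here is a rough Java sketch of an “object” built from nothing but a closure (ClosureCounter and makeCounter are invented names for the example):

        ```java
        import java.util.function.Supplier;

        class ClosureCounter {
            // The closure captures its own private state; calling it is its only "method".
            static Supplier<Integer> makeCounter() {
                int[] count = {0};        // a captured mutable cell (locals must be effectively final)
                return () -> ++count[0];
            }

            public static void main(String[] args) {
                Supplier<Integer> counter = makeCounter();
                System.out.println(counter.get()); // 1
                System.out.println(counter.get()); // 2
            }
        }
        ```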

      2. 2

        I don’t have much to say about the article as a whole, but there is this:

        Java don’t really support messages. You can call methods of course, but those are just as synchronous as ordinary function calls.

        I beg to differ—Java (and, in effect, every language) does support messages, albeit only synchronous message passing. That isn’t unusual though—QNX, for instance, only supported synchronous message passing between processes.

        1. 1

          From your linked article:

          […] And message passing was purely synchronous. You could do asynchronous message passing, but it required multiple threads to do so.

          I wouldn’t say spawning a new thread to make a function call is the same as asynchronous message passing.

          1. 3

            Smalltalk did the same thing. According to the Smalltalk-80 book, the message-passing interface was implemented via synchronous method calls, so receiving a message was the same as invoking a method.

            1. 1

              If all you have is synchronous message passing (ala QNX) and you want to keep your program running, then you need to spawn a thread to do the message passing.

              1. 1

                AIUI synchronous message passing ala QNX doesn’t involve a function call into the receiver.

                1. 1

                  Message passing is just passing data to some code. Function calling is also just passing data to some code. The mechanics are different, but conceptually they aren’t that different. Message passing might involve copying data from sender to receiver. Making a system call might involve copying data from the caller (sender) to the OS (receiver). Again, not conceptually different.

                  1. 1

                    I agree, message passing and function calls are conceptually similar.

                    They are also functionally different enough to have earned different terms.

                    There are many things conceptually similar but functionally different in computing. I would similarly call out saying SQL databases and filesystems are the same. Or quicksort and sleepsort.

                    1. 1

                      No argument here (I did the SQL/filesystem debate 25 years ago). As Alan Kay is reported to have said: “A change of perspective is worth 80 IQ points.”

                      1. 2

                        “A change of perspective is worth 80 IQ points.”

                        Question is, in the positive or negative direction?

                        1. 1

                          It’s implied in the positive direction.

          2. 2

            If actor models aren’t OO then what is? Naming an abstraction and sending instances polymorphic messages? Doesn’t get more OO than that

            1. 3

              Actor models are “OO done right” as opposed to “popular OO” seen in C++ and Java.

              1. 1

                I read “actors are OO, but people don’t call them that” (paraphrasing) as proof that OOP as a phrase is in decline.

                1. 1

                  I feel like most of the time, when people say OO they really just mean Java.

                  Python is OO and has had lambdas and closures for a long time, and that doesn’t make it any less OO. It’s good that we have different paradigms working together, instead of trying to beat each other.

                  1. 2

                    Python is OO and has had lambdas and closures for a long time

                    Smalltalk, the language created by the person who coined the term ‘object oriented’, was built to embody exactly that style of programming: closures and message passing (which is also how methods are invoked) were the only forms of control flow it had. An if statement in Smalltalk is implemented by sending an ifTrue: or ifFalse: message, with a closure as its argument, to an object (typically a boolean object), which then either invokes the closure or doesn’t, depending on its internal value. Loops are built in a similar way.
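
                    A rough Java imitation of that idea, with Bool and ifTrueIfFalse as invented stand-ins for Smalltalk’s boolean protocol:

                    ```java
                    // Control flow as message passing: the receiver decides which closure runs.
                    interface Bool {
                        void ifTrueIfFalse(Runnable whenTrue, Runnable whenFalse);
                    }

                    class True implements Bool {
                        public void ifTrueIfFalse(Runnable whenTrue, Runnable whenFalse) { whenTrue.run(); }
                    }

                    class False implements Bool {
                        public void ifTrueIfFalse(Runnable whenTrue, Runnable whenFalse) { whenFalse.run(); }
                    }

                    // Usage: no if statement anywhere, just a message send with closures as arguments.
                    // new True().ifTrueIfFalse(() -> System.out.println("yes"), () -> System.out.println("no"));
                    ```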

                2. 2

                  I find William Cook’s writings about OOP to be the clearest; they show how it is a different way of thinking about abstraction. If anyone is interested in understanding the choices made, the accompanying paper and presentation are also quite useful and approachable.

                  1. 1

                    Like most discussions of a paradigm, this one fails to distinguish between different types of software and the merits or disadvantages of the paradigm as applied to each. It also contains too many oft-repeated inaccuracies about the paradigm and its history.

                    For example, OO is particularly well suited to modeling Abstract Data Types. Likewise, it is particularly well suited to development and runtime frameworks. It can be suited to modeling a complex problem domain independent of UI, persistence and plumbing, but sadly, new (and old) developers these days are left mumbling about nouns, verbs and use cases when standing in front of a whiteboard.

                    It is not necessarily well suited to trivial models or problems, or to those that are predominantly about the transformation of data. Nor is it necessarily well suited to being merely a way of “organizing code”.

                    Don’t know where or how to use a major paradigm much of the software world depends on? Maybe consider that the problem is not the paradigm, but you?

                    1. 1

                      Yes! Now do Node! :)

                      1. 1

                        This was the second death. The coffin soon followed, nailed by Java’s generics: suddenly we hardly needed Object any more.

                        This was a bit of a light bulb moment for me. I’ve only ever had generics folded into a language (like Java or Python) I’m already intimately familiar with, so I never really understood the draw except in very abstract terms. Now I do.
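
                        For anyone else who missed the pre-generics era, the difference being described looks roughly like this (a sketch, not taken from the article):

                        ```java
                        import java.util.ArrayList;
                        import java.util.List;

                        class GenericsDemo {
                            public static void main(String[] args) {
                                // Pre-generics: the collection only knows about Object, so every read
                                // needs a cast and type mistakes only surface at runtime.
                                List rawNames = new ArrayList();
                                rawNames.add("Ada");
                                String first = (String) rawNames.get(0);

                                // With generics the element type is part of the declaration, so there
                                // are no casts and mistakes surface at compile time instead.
                                List<String> names = new ArrayList<>();
                                names.add("Ada");
                                String second = names.get(0);

                                System.out.println(first + " / " + second);
                            }
                        }
                        ```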

                        1. 1

                          There is an easier way: don’t mutate state. Always construct new stuff, never throw away anything. Then let your smart pointers or your garbage collector take care of unreachable references.

                          This section reminds me a lot of what Clojure is doing. What I don’t really understand here, and I hope somebody can explain a bit, is how this helps?

                          If I have a thing t and I want to apply two actions a1 and a2 to it, then without synchronization I might end up with t.a1().a2(), t.a2().a1(), or just t.a1() or t.a2() (one of the updates lost). That doesn’t seem useful, so I still need synchronization.

                          1. 2

                            The synchronization only matters if there is a side effect somewhere (side effects can be things other than mutation, notably I/O).

                            With no side effect, what happens in one thread cannot be observed by any other thread, and so no synchronization is needed there.
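
                            A small Java sketch of that point, with an invented immutable Account value: two tasks can derive new values from the same shared value without any locks, because nothing observable is mutated.

                            ```java
                            import java.util.concurrent.CompletableFuture;

                            class NoLocksNeeded {
                                // An immutable value: "changing" it just builds a new value.
                                record Account(String owner, long cents) {
                                    Account deposit(long amount) { return new Account(owner, cents + amount); }
                                }

                                public static void main(String[] args) {
                                    Account shared = new Account("alice", 1_000);

                                    // Both tasks read the same value and build their own results;
                                    // there is nothing to lock because nothing is overwritten.
                                    CompletableFuture<Account> a = CompletableFuture.supplyAsync(() -> shared.deposit(500));
                                    CompletableFuture<Account> b = CompletableFuture.supplyAsync(() -> shared.deposit(200));

                                    System.out.println(a.join() + " " + b.join() + " " + shared);
                                }
                            }
                            ```

                            Combining the two derived results into one is then an explicit, visible step, which is where the ordering question from the comment above actually lives.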

                            1. 1

                              The context was mutating objects from multiple threads, though. Otherwise you wouldn’t need locks in “regular” OOP either.

                              1. 1

                                Right, and in the context of mutating objects from multiple threads the proposed solution is “don’t mutate at all”.

                                I agree the connection to a discussion of OOP is tenuous, since nothing about OOP requires mutation.

                            2. 2

                              You don’t “do actions” to a thing. That is the point of the quote. If something becomes different than the original thing, then it’s something else.

                              Say you have a data structure and you add a node to it. Ideally, in a high-level programming language, the result would be another value. The concept of a rewritable variable is rather useless in high-level languages. It is much less ambiguous to think of a thing as a new value whenever it changes, and to let your compiler/interpreter optimize memory usage, like Clojure does.

                              The problem with languages like C++, Java, C#, etc. is that they lack a well-defined design goal. They claim to be high level while mixing and matching a bunch of lower-level functionality everywhere. Not that they aren’t useful or powerful; it’s just that things get messy real quick.

                              1. 1

                                The concept of a rewritable variable is rather useless in high-level languages.

                                It really depends on what you mean by “high level”. If “high level” means closer to the abstract specification, then the language should have features to model “reality” more closely, and in “reality” things mutate. It’s easier for a programming language to say “Employee.give_raise creates a new Employee object”, but in the real system, it’s still the same employee, just with a higher salary.
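
                                For concreteness, the “new object per change” style being discussed might look roughly like this in Java (Employee and giveRaise mirror the hypothetical names from the comment):

                                ```java
                                record Employee(String name, long salaryCents) {
                                    // "Giving a raise" builds a new snapshot; the old one is untouched.
                                    Employee giveRaise(long byCents) {
                                        return new Employee(name, salaryCents + byCents);
                                    }
                                }
                                ```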

                                1. 1

                                  I disagree. Then what you are modeling, really, is the state of the employee at a given time. Hence the snapshots are distinct.

                                  1. 3

                                    You’re still thinking like a programmer. The employee, in reality, has just one state that changes. We choose to model them as a sequence of snapshots because that has a lot of practical benefits (like being able to ask “when did their salary last change, and what was it before?”).

                                    That we have to model the sequence is arguably a sign that our languages aren’t high-level enough: we should be able to say “Change the state of the employee” and it handles snapshots and bitemporality for us. But that’s too high level presently, so we have to implement the snapshotting systems ourselves.
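
                                    A minimal, hand-rolled sketch of the snapshot bookkeeping the comment says we end up writing ourselves (History is an invented name, and only one time axis is tracked; true bitemporality would record a second one):

                                    ```java
                                    import java.time.Instant;
                                    import java.util.ArrayList;
                                    import java.util.List;

                                    // Every "change" appends a new (timestamp, value) pair instead of overwriting.
                                    class History<T> {
                                        private final List<Instant> times = new ArrayList<>();
                                        private final List<T> values = new ArrayList<>();

                                        void record(T newValue) {
                                            times.add(Instant.now());
                                            values.add(newValue);
                                        }

                                        T current()  { return values.get(values.size() - 1); }

                                        // "What was it before the latest change, and when did that change happen?"
                                        T previous()          { return values.get(values.size() - 2); }
                                        Instant lastChanged() { return times.get(times.size() - 1); }
                                    }
                                    ```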

                                    1. 3

                                      Sure, and that’s exactly what object oriented programming languages don’t do. That’s exactly my point.

                                      I would happily write code such as employee.raise() if I had an easy and unambiguous way of navigating all the state changes in my program. But I don’t. Such a coding style becomes a huge pitfall.

                                      C presents itself as a tool to read, write and move stuff around in memory. You work with the raw bits directly in their memory locations. It doesn’t make the promise of a high-level language, unlike the above-mentioned languages, which in my opinion fail to fulfill it.

                                      Datomic implements the functionality you mention.

                                      1. 2

                                        That we have to model the sequence is arguably a sign that our languages aren’t high-level enough: we should be able to say “Change the state of the employee” and it handles snapshots and bitemporality for us.

                                        Is that really the responsibility of the language, though? We generally make a distinction between data structures and language paradigms, no matter what language we are using. What you’re describing is a pretty straightforward use-case for event-sourcing, and you can do ES in practically any language paradigm I can think of.

                                        …probably not Prolog. But it works equally well in OOP and FP.

                                        Combining your programming language and data structures too closely is how you end up with Pick or MUMPS.
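
                                        A bare-bones event-sourcing sketch in Java, just to illustrate that it is a library/data-structure concern rather than a language one (all names here are invented for the example):

                                        ```java
                                        import java.util.List;

                                        class EventSourcingSketch {
                                            record Employee(String name, long salaryCents) {}

                                            // Events record what happened; current state is derived by replaying them.
                                            interface Event {}
                                            record Hired(String name, long startingSalaryCents) implements Event {}
                                            record RaiseGiven(long byCents) implements Event {}

                                            static Employee replay(List<Event> events) {
                                                Employee state = null; // assumes the first event is always a Hired
                                                for (Event e : events) {
                                                    if (e instanceof Hired h) {
                                                        state = new Employee(h.name(), h.startingSalaryCents());
                                                    } else if (e instanceof RaiseGiven r) {
                                                        state = new Employee(state.name(), state.salaryCents() + r.byCents());
                                                    }
                                                }
                                                return state;
                                            }

                                            public static void main(String[] args) {
                                                List<Event> events = List.of(new Hired("Ada", 100_000), new RaiseGiven(5_000));
                                                System.out.println(replay(events)); // Employee[name=Ada, salaryCents=105000]
                                            }
                                        }
                                        ```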

                              2. 1

                                OOP will never go away because it is modeled after natural language.