1. 27
    1. 12

      The main problem with OOP is that you lose the ability to batch operations, which is the primary method of making computers go fast.

      1. 4

        Please elaborate

        1. 5

          Not the original commenter, but I think I can explain. First - this has nothing to do with the article, but I believe it is true, and it is part of my dev philosophy as well.

          I have myself rewritten multiple systems/services into a “data oriented” approach. Instead of using “objects” for the bulk of what the program is working on (“data”), you use … plain arrays, maps, slotmaps with plain data inside them, and whatever other data structures you need to support efficient execution of the required operations, as if the data were … a database, and not a graph of objects pointing at each other. This leads to simpler code and waaay better performance due to cache locality, processing things in batches, avoiding indirection, etc.
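
          A minimal sketch of the idea (made-up toy types, nothing from the article): keep plain rows in flat containers and refer to them by index, like rows in database tables, instead of having objects point at each other:

          #include <cstdint>
          #include <string>
          #include <vector>

          struct Customer { std::string name; };
          struct Order { std::size_t customer; std::uint64_t total_cents; }; // index into `customers`, not a pointer

          struct Store {
              std::vector<Customer> customers; // plain rows, like a table
              std::vector<Order> orders;
          };

          std::vector<std::uint64_t> totals_per_customer(const Store& store) {
              std::vector<std::uint64_t> totals(store.customers.size(), 0);
              for (const Order& order : store.orders)
                  totals[order.customer] += order.total_cents; // one linear pass over contiguous data
              return totals;
          }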

          The “OOP” can/should be used for “logic”/“resources” (what your program does), and not “data” (what your program is processing). More on it: https://dpc.pw/posts/data-vs-code-aka-objects-oop-conflation-and-confusion/

          In the original article the polymorphism on the logical resource (“logger”) is considered “OOP”, which is the right thing to do, but that covers only a tiny part of what “OOP” colloquially means.

          1. 3

            I believe we have very similar sensibilities here — I tend toward a strong emphasis on FP and data-oriented code in synthesis with OOP.

            I would agree that OOP-maximalism has tended toward bad architectures, but I would dispute any characterization of this being inherent to the idea. Rather it is a useful way to model certain things, but insufficient on its own.

            What I want clarification on is specifically “lose the ability to batch operations” which makes no sense to me.

            1. 5

              What I want clarification on is specifically “lose the ability to batch operations” which makes no sense to me.

              I’ll give a concrete example to augment dpc_pw’s reply.

              I often work with large datasets of 3D points (with other data too) stored as homogeneous coordinates and apply transformations to them (eg a 4x4 transformation matrix). Bulk-applying a transformation to them can be done in different ways. If each Point3 is stored on its owning object, it’s straightforward to iterate over the collection of them and do something like:

              for(auto& obj: objects) {
                  obj.transform(T);
              }
              

              That works fine… but on each pass through that loop I’m dereferencing an iterator, potentially looking up the function in a vtable, making a function call, doing a (4x4) x (4x1) matrix multiplication, storing the result back into memory, and then doing the same thing for the next object.

              If I don’t go the conventional OO route and instead store all of my points in a single large matrix (maybe keeping a column index in an object somewhere for looking up the point), then instead of doing N x (4x4) x (4x1) matrix multiplications I get to take my transformation matrix and do a single (4x4) x (4xN) matrix multiplication (rough sketch after the list below). As far as the actual multiplication goes, it’s still going to come out to be the same number of float-float multiplication operations, but:

              • I now have the overhead of only making a single function call instead of N function calls
              • The values that it’s going to be working on will be stored next to each other in memory and will result in significantly fewer cache misses
              • If N is large enough, I can offload that multiplication to a library that’s going to do it multi-threaded or potentially on my GPU instead of doing it sequentially.
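
              A rough sketch of that layout in plain C++ (made-up names, no particular math library; in practice you’d hand the 4xN block to a SIMD/BLAS/GPU routine instead of this scalar loop):

              #include <cstddef>
              #include <vector>

              // All points in one contiguous buffer: column i holds (x, y, z, w) of point i.
              struct PointBlock {
                  std::vector<float> xyzw; // size = 4 * n, column-major
                  std::size_t n = 0;
              };

              // Apply one 4x4 transform T (row-major) to all n points: a single (4x4) x (4xN) product.
              void transform_all(const float T[16], PointBlock& pts) {
                  for (std::size_t i = 0; i < pts.n; ++i) {
                      float* p = &pts.xyzw[4 * i];
                      const float in[4] = {p[0], p[1], p[2], p[3]};
                      for (int r = 0; r < 4; ++r) {
                          p[r] = T[4 * r + 0] * in[0] + T[4 * r + 1] * in[1]
                               + T[4 * r + 2] * in[2] + T[4 * r + 3] * in[3];
                      }
                  }
              }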
              1. 3

                Don’t forget SIMD!

              2. 3

                What I want clarification on is specifically “lose the ability to batch operations” which makes no sense to me.

                Since the data (or parts of it) are hidden (“encapsulated”), it is nearly impossible to organize mass computation on them efficiently (“processing in a near-linear fashion as cache-local batches”).

                I would dispute any characterization of this being inherent to the idea.

                I came to the conclusion that “OOP” is primarily just a bad metaphor. Software is nothing like physical objects, even if it does decompose into smaller things and “has state”. That’s why, from the start, it has meant different things to different people (late binding? message passing? encapsulation? interfaces? classes? inheritance?), why every OOP book I read was just another confused interpretation of the term, and why every explicitly-OOP project I’ve seen was a disaster.

                … synthesis with OOP

                Discussing “OOP” as a single idea just confuses everyone involved. The “decompose logic into isolated components, usually with their own state, behind well-defined, dynamically polymorphic interfaces” part is universally useful when applied pragmatically; the rest of the stuff under the “OOP” umbrella should be discussed and considered separately.

            2. 4

              The succinct version is that it’s an AoS vs SoA type of issue. OOP methodology wants “objects” at the base layer of things, so if you have a collection of such objects, you basically end up defaulting to an array-of-structs model. Struct-of-arrays has, in many (but not all!) scenarios, superior perf/scalability and less memory waste, but normal OOP design is not very amenable to it. See: AoS_and_SoA
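
              A tiny illustration of the two layouts (made-up fields):

              #include <vector>

              // Array of structs (AoS): what a collection of "objects" usually gives you.
              struct ParticleAoS { float x, y, z, mass; };
              std::vector<ParticleAoS> particles_aos;

              // Struct of arrays (SoA): each field gets its own contiguous array.
              struct ParticlesSoA {
                  std::vector<float> x, y, z, mass;
              };
              // A pass that only needs `x` now streams over contiguous floats
              // instead of striding past y/z/mass on every element.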

              1. 2

                Worse, array-of-pointers-to-structs if you’re in Java or want dynamic polymorphism.

            3. 1

              Useful koan

            4. 10

              Yeah, it’s not that bad. But it’s not that good either. A lot of the sins of Java-ish OO languages come from trying to exclude all other ideas, which of course didn’t work that well. And then they just kinda piece things together and pretend it’s always a good idea.

              The thing is that style of OO is actually at least three different ideas: Extendable structs, subtyping, and type-driven dynamic dispatch. The feature they’re really using in the logger is dynamic dispatch, and as they outline in Option 2, doing that in Haskell is pretty awkward. I dunno Haskell well enough to comment beyond that, but in Rust I think you can get what they want without using dyn Trait for dynamic dispatch by just using an enum/sum type.
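
              Roughly, the sum-type version of the logger, sketched here in C++ with std::variant rather than Rust (made-up types; same closed-set idea, dispatching on the variant’s active alternative instead of through a virtual base class):

              #include <iostream>
              #include <string>
              #include <type_traits>
              #include <variant>

              struct ConsoleLogger { };
              struct SeverityFilterLogger { int min_severity; };

              // A closed set of logger kinds: a sum type instead of a class hierarchy.
              using Logger = std::variant<ConsoleLogger, SeverityFilterLogger>;

              void log(Logger& logger, int severity, const std::string& msg) {
                  std::visit([&](auto& l) {
                      using L = std::decay_t<decltype(l)>;
                      if constexpr (std::is_same_v<L, SeverityFilterLogger>) {
                          if (severity < l.min_severity) return; // filtered out
                      }
                      std::cout << severity << "\t" << msg << "\n";
                  }, logger);
              }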

              There’s various tradeoffs to be argued in terms of extensibility vs. other functionality with that kind of design, but like many toy examples of OO, theirs contains hidden footguns: the LogAboveSeverity logger implements Logger rather than inheriting from it, which looks like a Dart nuance that idk if you can trivially implement in C# or Java without having to make Logger an interface from the start. Meanwhile _LogAboveSeverity inherits from SimpleLogger to reuse its impl… and thus isn’t interchangeable with any other kind of logger except a SimpleLogger. (Should have used composition, not inheritance, from the start.) Then you have the FileLogger add the flush() method, which may or may not be necessary in some contexts, which means that, oh boy, you suddenly have to know a lot more about which actual Logger you’re using if you want to be able to use it correctly.

              So yeah. Look at how easy it is to do all this OO stuff! Oops, you did things in the style that OO makes easy and now it’s all done badly. If only there were some book on design patterns or such to teach me how to work around these bad assumptions built into the language…

              1. 1

                While you could get this with dyn Trait, part of the author’s claim is that the name of the type does not change whether Logger is the concrete logger class or an abstracted interface. Prematurely abstracting makes code hard to follow, and Rust intentionally wants the two types to look different, in line with its zero-cost / pay-for-what-you-use model. The author’s only claim is that refactoring the concrete logger into an interface does not require downstream usages in type position to suddenly sprout a dyn keyword or fail to compile.

              2. 8

                I’m glad the author clarified up front that they’re specifically talking about Java/C#/C++ instead of the more general concept of OOP. OOP is in a strange place between being a paradigm and being a very specific language specification (basically just meaning Java and anything like it). Even though OOP is put side-by-side with paradigms like procedural and functional, it’s really not spoken of in the same way. When you say procedural or functional, you’re usually not thinking of a specific language. When someone says “OOP”, you immediately think “oh, Java.” It has even mutated away from its original concept, which was closer to the actor paradigm and which has since had to adopt that name instead.

                Personally, I’ve never really used all the features of common OOP languages. When I used C#, I had my new-class template make classes sealed by default (not just to signal intent; it also helped with performance by avoiding vtable dispatch when a class didn’t need to be a base class). I much preferred using interfaces to define common functionality; they just felt much more flexible and granular to me. I can probably count on one hand how many times I’ve made an abstract or base class in my life. I’ve even heard that OOP was pretty much designed specifically for GUIs and that’s really the only place the paradigm actually shines. Every other domain is a better match with another paradigm, like ECS for video games, functional for business software, and actors for servers.

                Not that OOP is particularly bad, as the author states. Rather, I feel like “OOP” is an abstract way to say “someone who is very fond of their Java hammer and has a bad habit of seeing everything as a class-shaped nail”. It’s the product of a lot of marketing and sponsored college courses producing a lot of programmers who have only ever heard of OOP and nothing else, and they end up sounding like an uncultured Protestant who can’t grasp the concept of a religion not having a prophesied messiah. I mean, just look at the explosion of “how do I OOP in Rust?” comments we got over the last couple of years. Not because OOP is such a golden standard of a programming paradigm that you’d never want to leave it, but because OOP was slathered everywhere at the height of its fad, and we still have people to this day being told that it’s the only paradigm in the world because Java doesn’t encourage any other paradigm, and that it’s “the way to do everything” in C++, C# and Python even though they are (barely) multi-paradigm.

                1. 8

                  The motivation of OOP is rather sweet, really. But the reality has tended to devolve into the gorilla-banana problem. You can quote the Gang of Four all you want, but almost every OOP codebase I’ve ever encountered in a professional setting has some kind of base class that operates like a pseudo-global junk drawer and does not in any way resemble OOP’s intention of using objects in code as a metaphor or model for objects in the real world. The argument that at least OOP is statically typed does not convince me that it is a paradigm worth settling for.

                  1. 7

                    This shouldn’t be called “OOP is not that bad, actually”, but “pure FP code is kind-a clunky, actually”.

                    1. 6

                      Pure FP is not clunky; it’s just that this author seems to have overlooked a very simple (and obvious) solution:

                      fileLogger2Logger :: FileLogger -> Logger
                      fileLogger2Logger fileLogger = _logger fileLogger
                      
                      -- Here's another way to derive a Logger from a FileLogger
                      fileLogger2AutoFlushLogger :: FileLogger -> Logger
                      fileLogger2AutoFlushLogger fileLogger = MkLogger
                          { _log = \message severity -> do
                              logFileLogger fileLogger message severity
                              _flush fileLogger
                          }
                      
                      -- And here's an even more sophisticated relationship between FileLogger and Logger
                      fileLogger2AutoFlushLoggerAfterNLogs :: Int -> FileLogger -> IO Logger
                      fileLogger2AutoFlushLoggerAfterNLogs n fileLogger = do
                          logsSinceFlushRef <- newIORef 0
                          pure $ MkLogger
                              { _log = \message severity -> do
                                  logFileLogger fileLogger message severity
                                  doFlush <- atomicModifyIORef logsSinceFlushRef $ \logsSinceFlush ->
                                      if logsSinceFlush + 1 >= n
                                          then (0, True)
                                          else (logsSinceFlush + 1, False)
                                  when doFlush $ _flush fileLogger
                              }
                      

                      You can use functions like these to convert your FileLogger to a Logger and pass it to any function that accepts a Logger. In a typical application, this conversion usually happens somewhere close to your main, where you set up your loggers and convert them to Loggers and then they get passed around the codebase in that form.

                      The irony is that I think this example proves exactly the opposite of what the title says. All that class FileLogger implements Logger does is implicitly define a relationship like fileLogger2Logger between FileLogger and Logger, and you have to define a whole new class in order to express a relationship like fileLogger2AutoFlushLogger, whereas in Haskell it’s just a function.

                      1. 4

                        I was wondering why he said this:

                        Similar to the OOP code, the FileLogger needs to be a separate type:

                        Since a FileLogger can easily be converted to a Logger, it seems I was right to be confused. There’s no reason you couldn’t just construct the Logger directly as a logger which writes to a file, right?

                        The whole article kinda confused me, to be honest, and I was dismayed to see it be so highly voted on Lobsters. I guess most of the people voting are just upvoting because they think they learned something, but they’ve just been misinformed :(.


                        Edit: the site looked familiar, so I went and looked at his previous entries, and he has a post titled, “8 years of Haskell”, with an impressive Haskell CV (worked on deep parts of the GHC compiler for a long time, and other things). Did he have a massive brainfart to post this, or am I/are we just missing something?


                        As /u/friedbrice said on the /r/Haskell discussion of this submission:

                        This thing you gotta understand is that to translate between OOP and Haskell, you use this mapping:

                        OOP interface      ~corresponds to~> Haskell type
                        OOP class          ~corresponds to~> Haskell function
                        OOP instance field ~corresponds to~> Haskell function argument
                        

                        Haskell type classes do not map to any OOP concept because OOP languages really don’t have anything like Haskell type classes. The closest thing Java has to Haskell type classes is context bounds on generic type parameters. So, using Haskell type classes (and type class instances) to try to mimic OOP almost always leads to broken designs.

                        1. 3

                          There’s no reason you couldn’t just construct the Logger directly as a logger which writes to a file, right?

                          Exactly, that’s definitely where the author’s argument takes a wrong turn, but I wanted to focus on the part where it goes really wrong, essentially claiming that you can’t program against interfaces in Haskell.

                          I guess most of the people voting are just upvoting because they think they learned something, but they’ve just been misinformed :(.

                          Agreed, and probably a sprinkling of confirmation bias too.

                          Did he have a massive brainfart to post this, or am I/are we just missing something?

                          That’s a very good question… AFAIK, he’s been doing OOP for the last few years; I’m wondering if this is a result of too much OOP exposure, wanting to shoehorn Haskell into an OOP-shaped hole.

                          As /u/friedbrice said on the /r/Haskell discussion of this submission

                          That’s a nice comment, yeah, and I agree with the general takeaway, but I’d personally tweak it a little:

                          • OOP interface => Haskell product type (record) with each field being a different function (not saying something very different, just being more specific)
                          • OOP class => Haskell function capturing variables and constructing such records (again the same thing, more specific)
                          • OOP instance field => Anything captured by such functions, including its arguments, but possibly also things it defines/constructs (like the IORef in my fileLogger2AutoFlushLoggerAfterNLogs example.)
                          1. 2

                            My favourite comments are the ones talking about how this doesn’t apply to some non-OO languages like Go or Rust, or to FP languages with decent module support like OCaml, and concluding that it’s therefore probably just something you can’t do (at least ergonomically) with Haskell. GHC’s Backpack module support was also mentioned on Reddit, as if plain Haskell can’t achieve this.

                        2. 3

                          “pure FP code is maybe not kind-a clunky, actually” then? :D

                      2. 6

                        Similar to the OOP code, the FileLogger needs to be a separate type

                        Why not just

                        type Logger = Severity -> Text -> IO ()
                        type Flusher = IO ()
                        
                        mkFileLogger :: FilePath -> IO (Logger, Flusher)
                        mkFileLogger path = do
                            handle <- openFile path AppendMode
                            let logger = \severity message ->
                                     Text.hPutStrLn handle (Text.pack (show severity <> "\t") <> message)
                            let flusher = hFlush handle 
                            -- you may also want to return a `hClose handle :: Closer` 
                            pure (logger, flusher)
                        

                        Then you don’t have to learn any OOP and your life will be much simpler.

                        1. 6

                          Maybe I’m missing something, but I feel like you can do most of this with interfaces in Go without doing any OO, and you wouldn’t even have the weird drawbacks of using inheritance. So I suppose the author’s point is more that this is unergonomic to do in Haskell.

                          1. 4

                            Yeah, nah. Working in a codebase where I have to constantly click “expand type hierarchy” is hell. Inheritance is a fancy wrapper around GOTO and meme instructions like COMEFROM. Determining what the control flow actually is is impossible. The only way to know is to step through it in a debugger and go “oh, THIS is the method that actually gets executed”. Total nonsense.

                            1. 3

                              Funny to see this post right next to “Why I use KDE”, which is pretty much a complete graphical desktop plus a full suite of productivity apps, all based on Qt and thus pretty much entirely OOP from top to bottom.

                              1. 3

                                Isn’t it more of a criticism of the ergonomics of existential types in Haskell? IIUC you can write the type annotations non-invasively in Rust (&dyn X) or OCaml (module X) without committing to the OOP paradigm.

                                  1. 1

                                    Unfortunately, I did not understand the article. But I got intrigued by the title :-).

                                    I wish OOP would clearly decouple class methods from ‘stateful variables’ from ‘constants’.

                                    Manipulation of stateful variables should be guarded. For example, each stateful variable should have two types defined: a primary type, and class-specific exceptions to that primary type.

                                    [unsigned int,[null,>300]] class_member_age;  //means age cannot be null, or over 300
                                    
                                    

                                    I know I am jumping all over the place with the above, but the point I am trying to make is that isolating particular behaviors (as methods, and as specific types) under the umbrella of a class type, and then allowing inheritance of that machinery, is a useful abstraction that helps you write less code and declare your intention to other programmers working on the same code base going forward.

                                    1. 9

                                      My biggest grievance with what most people think of as OOP is that it typically doesn’t demarcate between plain-old-data objects and more complex behavioral objects. So for example, if you’re building a bookstore application, you end up with this Book class with a constructor that takes not only the metadata about a book, but also a database connection handle to support a Book.persist() method and an HTTP client to support a Book.fetchAmazonReviews() method and everything else anyone might possibly want to do with a book, which means that any time you want to write a test or other application code that creates a book for metadata purposes, you must also create the database connection and the HTTP client to connect. I think this is the problem Joe Armstrong was talking about when he said, “the problem with object-oriented languages is they’ve got all this implicit environment that they carry around with them. You wanted a banana but what you got was a gorilla holding the banana and the entire jungle”.
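
                                      In sketch form (hypothetical types, not from any real codebase), the separation I’d rather see: the book is plain data, and the plumbing lives in services that take a Book:

                                      #include <string>

                                      // Plain-old-data: constructible anywhere, trivially usable in tests.
                                      struct Book {
                                          std::string title;
                                          std::string isbn;
                                      };

                                      struct DbConnection; // owned by the service layer, not by Book
                                      struct HttpClient;

                                      // Behavior lives next to the dependencies it actually needs.
                                      struct BookStore {
                                          DbConnection& db;
                                          void persist(const Book& book); // definition elsewhere
                                      };

                                      struct ReviewFetcher {
                                          HttpClient& http;
                                          std::string fetch_amazon_reviews(const Book& book); // definition elsewhere
                                      };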

                                      And a lot of OOP enthusiasts will say that’s not true OOP, and that you can write bad code in any language, but this isn’t very convincing because (1) virtually all of the early OOP books encouraged these design patterns (I remember reading a lot about the “Kingdom of Nouns” back in the early 2000s, albeit that was more about shoehorning inheritance into every problem) and (2) if you strip away all of the “bad practices” from OOP, you end up with something that looks pretty much indistinguishable from data-oriented programming: there’s nothing left that is “distinctly OOP”.

                                      1. 2

                                        This hits the nail on the head. Apply “no true Scotsman” to OOP, take it to its limit, and you are left with not-OOP.

                                        The central idea of Java, C++, and C# is the class, which by design has a constructor so it can be instantiated, and holds both data and methods. This is what a software architect would want to avoid, rather than embrace.

                                        Personally I think OOP is mostly a bad idea that got hugely disseminated. There might be some accidental utility in particular use cases, for example UIs, where a bunch of data structures with attached behaviors are a better mental model than functions doing things. But I think the fundamental idea is weak and mostly results in code quality degrading very fast.

                                        1. 1

                                          My biggest grievance with what most people think of as OOP is that it typically doesn’t demarcate between plain-old-data objects and more complex behavioral objects.

                                          C++ lets you perform this check very simply with the various type traits, e.g. https://en.cppreference.com/w/cpp/types/is_trivial and https://en.cppreference.com/w/cpp/types/is_standard_layout
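
                                          For example (hypothetical types):

                                          #include <type_traits>

                                          struct BookData { int id; double price; };      // plain data
                                          struct BookService { virtual void persist(); }; // behavioral, has a vtable

                                          static_assert(std::is_trivial_v<BookData>);
                                          static_assert(std::is_standard_layout_v<BookData>);
                                          static_assert(!std::is_trivial_v<BookService>);
                                          static_assert(!std::is_standard_layout_v<BookService>);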

                                      2. [Comment removed by author]