1. 58

  2. 22

    I don’t disagree that OOP can be bad, but it isn’t inherently bad. I also don’t see a real solution to this given here, aside from some sort of nebulous “example” of what this particular person is doing. IMO, if you want to present two different ways of doing something, show me some real code. Show me the perceived “wrong way” in OOP, then show me your “right way” in whatever you want to write. Otherwise it reads like a complaint with no solution.

    1. 1

      Posted elsewhere in this thread, but here’s a CppCon talk from an Insomniac Games tech lead (Mike Acton) that discusses some solutions.

    2. 20

      My sense now is that Alan Kay’s insight, that we can use the lessons of biology (objects are like cells that pass messages to each other), was on target but it was just applied incompletely. To fully embrace his model you need asynchrony. You can get that by seeing the processes of Erlang as objects or by doing what we now call micro-services. That’s where OO ideas best apply. The insides can be functional.
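
      A minimal sketch of that reading, with Python threads standing in for Erlang processes (the Actor/Counter names here are hypothetical, purely for illustration): each object owns a mailbox, all interaction is asynchronous message passing, and the insides reduce to a pure (state, message) → state function.

      ```python
      import queue
      import threading

      class Actor:
          """An 'object' whose only interface is its mailbox, Erlang-style."""
          def __init__(self):
              self.mailbox = queue.Queue()
              threading.Thread(target=self._run, daemon=True).start()

          def send(self, message):
              self.mailbox.put(message)  # asynchronous: the sender never blocks

          def _run(self):
              state = self.initial_state()
              while True:
                  # The insides stay functional: state is replaced, never mutated.
                  state = self.handle(state, self.mailbox.get())

      class Counter(Actor):
          def initial_state(self):
              return 0

          def handle(self, state, message):
              if message[0] == "incr":
                  return state + 1
              if message[0] == "get":
                  message[1].send(("value", state))  # replies are messages too
              return state
      ```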

      1. 17

        “If you want to deeply understand OOP, you need to spend significant time with Smalltalk” is something I’ve heard over and over throughout my career.

        1. 5

          It’s also a relatively simple language with educational variants like Squeak to help learners.

          1. 7

            I have literally taken to carrying around a Squeak environment on USB to show to people. Even experienced engineers tend to get lost in it for a few hours and come out the other side looking at software in a different way, given a quick spiel about message passing.

          2. 4

            If you don’t have any Smalltalk handy, Erlang will do in a pinch.

            1. 2

              And if you don’t have Erlang handy, you can try Amber in your browser!

            2. 1

              I went through the Amber intro that /u/apg shared. I’d love to dive deeper. If anyone has any resources for exploring Smalltalk/Squeak/etc. further, I’d love to see them. Especially resources that explore what sets the OO system apart.

              1. 2

                I’m told that this is “required” reading. It’s pretty short, and good.

            3. 16

              I even wrote a book on that statement. My impression is that “the insides can be functional” could even be “the insides should be functional”; many objects should end up converting incoming messages into outgoing messages. Very few objects need to be edge nodes that turn incoming messages into storage.

              But most OOP code that I’ve seen has been designed as procedural code where the modules are called “class”. Storage and behaviour are intertwingled, complexity is not reduced, and people say “don’t do OOP because it intertwingles behaviour and storage”. It doesn’t.

              1. 2

                This.

                Whether the implementation is “functional” or not, the internals of any opaque object boundary should at least be modellable as a collection of [newState, worldActions] = f(oldState, message) behaviours.

                We also need a unified and clearer method for namespacing and module separation, so that people aren’t forced to make classes (or closures-via-invocation) simply to split the universe into public and private realms.

                To say that the concept of objects should be abandoned simply because existing successful languages have forced users to mis-apply classes for namespacing is as silly as the idea that we should throw out lexical closures because people have been misusing them to implement objects (I’m looking at you, React team).
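
                For concreteness, here is a minimal Python sketch of that shape (the step function and message kinds are hypothetical, not from any particular codebase): the core is a pure function from (state, message) to (new state, world actions), and an impure shell is left to perform the actions.

                ```python
                def step(state, message):
                    """Pure core: old state + message -> new state + actions on the world."""
                    kind, amount = message
                    if kind == "deposit":
                        return state + amount, [("log", f"deposited {amount}")]
                    if kind == "withdraw":
                        if amount > state:
                            return state, [("reject", "insufficient funds")]
                        return state - amount, [("log", f"withdrew {amount}")]
                    return state, []

                # The impure shell applies the actions; the core stays trivially testable.
                state, actions = step(100, ("withdraw", 30))
                assert state == 70 and actions == [("log", "withdrew 30")]
                ```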

              2. 5

                If there’s one lesson I’ve learned from software verification, it’s that concurrency is bad and we should avoid it as much as possible.

                1. 8

                  I’m not entirely sure this is correct. I’ve been using Haskell/Idris/Rust/TLA+ for a while now, and I’m of the opinion that concurrency is just being tackled at the wrong conceptual level, in that most OOP/imperative strategies mix state and action when they shouldn’t.

                  Also, can you qualify what you mean by concurrency? I’m not sure if you’re conflating concurrency with parallelism here.

                  I’m using the definitions offered by Simon Marlow of Haskell fame, from Parallel and Concurrent Programming in Haskell:

                  In many fields, the words parallel and concurrent are synonyms; not so in programming, where they are used to describe fundamentally different concepts.

                  A parallel program is one that uses a multiplicity of computational hardware (e.g., several processor cores) to perform a computation more quickly. The aim is to arrive at the answer earlier, by delegating different parts of the computation to different processors that execute at the same time.

                  By contrast, concurrency is a program-structuring technique in which there are multiple threads of control. Conceptually, the threads of control execute “at the same time”; that is, the user sees their effects interleaved. Whether they actually execute at the same time or not is an implementation detail; a concurrent program can execute on a single processor through interleaved execution or on multiple physical processors.

                  While parallel programming is concerned only with efficiency, concurrent programming is concerned with structuring a program that needs to interact with multiple independent external agents (for example, the user, a database server, and some external clients). Concurrency allows such programs to be modular; the thread that interacts with the user is distinct from the thread that talks to the database. In the absence of concurrency, such programs have to be written with event loops and callbacks, which are typically more cumbersome and lack the modularity that threads offer.
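
                  A toy contrast of the two definitions, in Python rather than Haskell (all names here are hypothetical):

                  ```python
                  import threading
                  from multiprocessing import Pool

                  def square(n):
                      return n * n

                  def talk_to_user():
                      pass  # one thread of control for the user...

                  def talk_to_database():
                      pass  # ...another for the database; the runtime interleaves them

                  if __name__ == "__main__":
                      # Parallelism: several cores on one computation, same answer sooner.
                      with Pool() as pool:
                          squares = pool.map(square, range(10_000))

                      # Concurrency: a program-structuring technique with multiple threads
                      # of control; it works even on a single core via interleaving.
                      for task in (talk_to_user, talk_to_database):
                          threading.Thread(target=task).start()
                  ```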

                  1. 5

                    Also, can you qualify what you mean by concurrency?

                    Concurrency is the property that your system cannot be described by a single global clock: there exist multiple independent agents, and the behavior of the system depends on their order of execution. Concurrency is bad because it means you have multiple possible behaviors for any starting state, which complicates analysis.

                    Using Haskell/Rust/Eiffel here helps but doesn’t eliminate the core problem, as your system may be larger than an individual program.
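
                    A minimal Python illustration of “multiple possible behaviors for one starting state” (a classic lost-update race; hypothetical code):

                    ```python
                    import threading

                    counter = 0

                    def bump():
                        global counter
                        for _ in range(100_000):
                            tmp = counter      # read...
                            counter = tmp + 1  # ...write; the other thread may run in between

                    threads = [threading.Thread(target=bump) for _ in range(2)]
                    for t in threads:
                        t.start()
                    for t in threads:
                        t.join()

                    # A sequential version always prints 200000; this version's output can
                    # change from run to run, depending on the interleaving the scheduler picks.
                    print(counter)
                    ```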

                    1. 10

                      All programs run in systems bigger than the program.

                      1. 1

                        But that’s not an issue if the interaction between the program and the system is effectively sequential (not concurrent), which I think is the point being made. A multi-threaded program, even if you can guarantee it is free of data races etc., may still have multiple possible behaviors, with no guarantee that all of them are correct within the context of the system in which it operates. Analysis is more complex because of the concurrency. A non-internally-concurrent program, on the other hand, can be tested against a given input sequence and produce a deterministic output, so we can know it is always correct for that input sequence. Reducing the overall level of concurrency in the system eases analysis.

                        1. 2

                          You can, and probably should, think of OS scheduling decisions as a form of input. I agree that concurrency can make the state space larger, but I don’t believe it is correct to treat concurrency/parallelism as mysterious or qualitatively different.

                      2. 3

                        Using Haskell/Rust/Eiffel here helps but doesn’t eliminate the core problem, as your system may be larger than an individual program.

                        They help by reducing the scope of the problem to the I/O layers interacting with each other. I think an example would be helpful here, as there isn’t anything concrete to argue against in your stated position so far.

                        But let’s ignore language for the moment and take an example from my work. We have a network filesystem that has to behave generally like a POSIX filesystem across systems. This is all C and in-kernel, so mutexes and semaphores are the abstractions in use, for good or ill.

                        I’ve been using TLA+ both as a learning aid in validating my understanding of the existing code, and to find logic bugs in general for things like flock() needing to behave across systems.

                        Generally what I find is that these primitives are insufficient for handling the interactions in I/O across system boundaries. Take a call to flock() or even fsync(): you need to ensure all client systems behave in a certain way when one (or more) of them makes the call. What I find is that the behavior as programmed works in the general case, but when you set up TLA+ to mimic the mutexes/semaphores in use and their calling behavior, they are riddled with logic holes.
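
                        The flavour of hole is easy to show with even a toy exhaustive exploration. A minimal Python sketch in that spirit (a hypothetical check-then-act “lock”, not the kernel code or actual TLA+), which enumerates every interleaving that respects each client’s program order and counts the ones where mutual exclusion breaks:

                        ```python
                        from itertools import permutations

                        def run(schedule):
                            """Execute one interleaving of two clients using a check-then-act lock."""
                            owner, saw_free, holders = None, {0: False, 1: False}, set()
                            for client, step in schedule:
                                if step == "check":
                                    saw_free[client] = owner is None
                                elif step == "acquire" and saw_free[client]:
                                    owner = client  # acts on a stale check: the logic hole
                                    holders.add(client)
                            return holders

                        steps = [(0, "check"), (0, "acquire"), (1, "check"), (1, "acquire")]
                        bad = 0
                        for schedule in permutations(steps):
                            # Keep only interleavings that respect each client's own program order.
                            if all(schedule.index((c, "check")) < schedule.index((c, "acquire"))
                                   for c in (0, 1)):
                                if run(schedule) == {0, 1}:  # both clients "held" the lock
                                    bad += 1
                        print(bad, "of 6 valid interleavings violate mutual exclusion")
                        ```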

                        This is where I’m trying to argue that the abstraction layers in use are insufficient. If we presume we used Rust in this case (primarily as it’s about the only candidate that fits the kernel-module use case), a number of in-node concurrent races across kernel worker threads could just “go away”, freeing us to validate our inter-node concurrency logic via TLA+ and then ensure our written code conforms to that specification.

                        As such, I do not agree that concurrent programming should be avoided whenever possible. I only argue that OOP encourages, by default, practices that work badly in a concurrent style (mixing state and code in an abstraction that is ill suited for it). It doesn’t mean OOP is inherently bad, just a poor fit for the domain.

                        1. 1

                          I feel that each public/private boundary should have its own singular clock, and use this to sequence interactions within its encapsulated parts, but there can never really be a single global clock to a useful system, and most of our problems come from taking the illusion of said clock further than we should have.

                      3. 4

                        I would go exactly tangential and say that the best software treats concurrency as the basis of all computation. In particular, agnostic concurrency. If objects are modeled to have the right scope of visibility and influence, they should be able to handle messages in a perfectly concurrent and idempotent manner, regardless of cardinality.

                        1. 2

                          Take Clojure, for example: concurrency there is not that bad, and there is no reason to avoid it. Mutability and the intertwining of abstractions are what lead to problematic situations. Functional programming solves that by its nature.

                          1. 4

                            Even if the program is immutable, you ultimately want it to have some effect on the outside world, and functional programming doesn’t magically fix the race conditions there. Consider a bunch of immutable, unentwined workers all making HTTP requests to the same server. Even if there are no data races, you can still exceed the rate limit due to concurrency.
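
                            A sketch of that failure mode (the FakeServer stands in for an external rate-limited HTTP API; all names are hypothetical): the workers share no mutable state among themselves, yet the server still sees a race on its limit.

                            ```python
                            import threading

                            class FakeServer:
                                """Stand-in for an external server allowing at most 5 requests."""
                                def __init__(self, limit=5):
                                    self.limit, self.seen = limit, 0
                                    self.lock = threading.Lock()

                                def request(self):
                                    with self.lock:
                                        self.seen += 1
                                        return self.seen <= self.limit  # else: 429 Too Many Requests

                            server = FakeServer()

                            def worker():
                                # Pure and immutable from its own point of view...
                                if not server.request():
                                    print("rate limited!")  # ...but the outside world still raced

                            threads = [threading.Thread(target=worker) for _ in range(10)]
                            for t in threads:
                                t.start()
                            for t in threads:
                                t.join()
                            ```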

                      4. 38

                        I can understand not liking how certain (many? most?) OOP projects are structured, but the article seems to put all OOP into a single category, with no possible way for it to be properly structured (developers saying that making a good architecture requires skill is even used as an argument against OOP).

                        But then it goes ahead and provides a possible solution that sounds quite contrived to begin with, without acknowledging that even such a solution can be abused and turned into a nightmare (seriously, who uses the humorous FizzBuzz Enterprise Edition as an actual argument against OOP?).

                        And, in addition, how can someone say something about X affecting performance and support it just with “Nuff said”?

                        I could continue but this article is a total mess.

                        1. 4

                          The article might not be very well written, yet the points in “Cross-cutting concerns”, “Object encapsulation is schizophrenic”, and “There are multiple ways to look at the same data”, as well as speed, are well-known problems in OOP.

                          1. 0

                            Can you show me some actual benchmark on speed being a problem due to OOP in 2018?

                            I honestly couldn’t find anything that was not negligible.

                            I don’t agree with the feeling about object encapsulation, it’s like shitting all over pointers just because lots of people don’t understand them.

                            1. 2

                              It’s not a benchmark, but high-performance game engines have moved away from “classical” OOP.

                              Example (one of my favourite talks of all time) https://www.youtube.com/watch?v=rX0ItVEVjHc

                              Nice blog post by the Handmade Hero guy (and a projection of Jonathan Blow’s ideas on programming too) https://caseymuratori.com/blog_0015

                              It’s not only because of speed but also because of clarity; in my experience, if you think in data instead of “beautiful abstractions”, code tends to become simpler and nicer. That can be implemented using a mix of OOP and functional, but the core idea is “think in data, not in beautiful architectures”.
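
                              A tiny Python sketch of the “think in data” shift (illustrative only; in C/C++/Rust the flat arrays would be contiguous buffers, which is where the layout actually pays off in cache behaviour):

                              ```python
                              # "Classical" OOP shape: state scattered across many small objects.
                              class Particle:
                                  def __init__(self, x, vx):
                                      self.x, self.vx = x, vx

                              particles = [Particle(float(i), 1.0) for i in range(1000)]
                              for p in particles:
                                  p.x += p.vx  # one pointer chase per particle

                              # Data-oriented shape: flat parallel arrays and one tight loop over them.
                              xs = [float(i) for i in range(1000)]
                              vxs = [1.0] * 1000
                              xs = [x + vx for x, vx in zip(xs, vxs)]
                              ```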

                        2. 9

                          Is this mostly against Java-style OOP, Smalltalk style, Eiffel style, Argus style, Beta style, or JS style? It seems to be only the first. Remember that OOP has just as many interpretations and lineages as any other coding paradigm.

                          As it stands the article seems to be exclusively against Java with UML/RUP technique, versus say Eiffel with Syntropy.

                          1. 9

                            Hmm. No.

                            The point of classes is to protect the class invariant.

                            If there is no class invariant, you have a Plain Old Struct and don’t need data hiding.

                            If you have a class invariant, there should be no way of breaking it using a public method.

                            If you can implement a function using existing public methods… it shouldn’t be a method, just a Plain Old Function.
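
                            A small Python sketch of those rules, with a hypothetical Fraction class whose invariant is “the denominator is nonzero and the fraction is in lowest terms”:

                            ```python
                            from math import gcd

                            class Fraction:
                                def __init__(self, num, den):
                                    if den == 0:
                                        raise ValueError("denominator must be nonzero")
                                    g = gcd(num, den)
                                    self._num, self._den = num // g, den // g  # invariant established

                                # Every public method re-establishes the invariant before returning.
                                def mul(self, other):
                                    return Fraction(self._num * other._num, self._den * other._den)

                                def as_tuple(self):
                                    return (self._num, self._den)

                            # Expressible via existing public methods, so: a Plain Old Function.
                            def square(f):
                                return f.mul(f)
                            ```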

                            1. 7

                              OOP is not that bad; it’s just that devs usually think they don’t need to think about the design. The problem is that humans are lazy and just want to do whatever is easier: if it compiles, it’s “right”, and you can move on to the next problem. No: you need to do better, independent of the paradigm you are using.

                              Another idea I would like to add: we need to stop waiting for the perfect thing. Any system will get harder to maintain as it gets big; what we can do is build a robust infrastructure that supports healthy growth. Thinking like this, I found that Lisp’s macros are perfect for the matter: one can build one’s own DSL with them, a language on top of the language, which becomes that infrastructure. This is why I started to use Clojure.

                              1. 9

                                I can agree with the frustrations of bad OOP code, but reading articles like this one is intensely frustrating. It would be much better if the author provided some actual code samples to illustrate problems instead of forcing us to parse this from his arguments. I would hope that he sees feedback on his posts over time and develops his style to match his obvious intellect and experience.

                                1. 3

                                  When using the datastore concept, one often ends up with a module for each of the “row kinds”. This can be a good pattern – often more performant and more convenient – but it is still OOP: the module encapsulates state and behavior.

                                  One advantage of the module-per-row-kind strategy is that it allows for a narrower API, and in particular lets one define just a few bulk operations that are implemented in a performant way. Having an object per row lets you perform any bulk operation – for item in arbitrary_selection: do something – but the affordance it provides is guaranteed to offer low performance, be unreliable, and hold locks for long periods of time, blocking out other users of the database.
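
                                  A sketch of the contrast (hypothetical schema and function names, using SQLite for brevity): the module owning the row kind exposes one narrow bulk operation, instead of tempting callers into the slow object-per-row loop.

                                  ```python
                                  import sqlite3

                                  conn = sqlite3.connect(":memory:")
                                  conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, status TEXT)")
                                  conn.executemany("INSERT INTO orders (status) VALUES (?)",
                                                   [("pending",)] * 1000)

                                  # Object-per-row style: one round trip per row, locks held the whole time.
                                  rows = conn.execute("SELECT id FROM orders WHERE status = 'pending'").fetchall()
                                  for (order_id,) in rows:
                                      conn.execute("UPDATE orders SET status = 'shipped' WHERE id = ?",
                                                   (order_id,))

                                  # Module-per-row-kind style: one narrow, bulk operation.
                                  def ship_all_pending(conn):
                                      conn.execute("UPDATE orders SET status = 'shipped' "
                                                   "WHERE status = 'pending'")
                                  ```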

                                  Many will say that good OOP is not necessarily creating a Car class just because you are building a CRM for a car dealership – which is to say, naively mapping business objects to classes is not the only way to do OOP. Bad OOP is still OOP; but good OOP is also still OOP. Is the datastore-oriented style something more than just a better OOP design?

                                  1. 3

                                    I don’t completely disagree with the main thought, but there are a lot of completely incorrect assumptions in this article. For example, that an OO design means you take a performance hit. First of all, performance is relative. But I have also never really been in a situation where an object reference or an indirect call through several objects was actually the bottleneck of a program. OO code can be extremely performant.

                                    Also, a lot of the pain points the author talks about are not per se about OOP but about bad style or bad implementations. I don’t know if OOP encourages those, but I have seen similar bad patterns and style in pretty much every kind of program, whether functional, procedural, or OO.

                                    1. 3

                                      The author doesn’t touch on patterns found in GTK or Elixir, where you tend to have data structures and every function takes a primary data structure as its first argument. This is essentially what OOP does, except that in OOP the data and methods are kept together.

                                      One thing a non-OOP system gives you, which the author doesn’t touch on at all, is that it makes testing a lot easier. You can unit test so much more because you pass the state in, rather than it being hidden inside a class! That is pretty powerful.
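
                                      A quick Python sketch of that pattern (the Cart example is hypothetical): plain data, plus functions that take the data as their first argument, which makes a unit test one construction and one call.

                                      ```python
                                      from dataclasses import dataclass, replace

                                      @dataclass(frozen=True)
                                      class Cart:
                                          items: tuple = ()

                                      def add_item(cart, item):
                                          return replace(cart, items=cart.items + (item,))

                                      def total(cart, prices):
                                          return sum(prices[item] for item in cart.items)

                                      # Testing is direct: build the exact state, call the function, assert.
                                      cart = add_item(add_item(Cart(), "apple"), "pear")
                                      assert total(cart, {"apple": 2, "pear": 3}) == 5
                                      ```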

                                      Beyond that though, I don’t think OOP is inherently bad. There are a lot of over-engineered frameworks for sure, but that shouldn’t be a total rejection of OOP by any means.

                                      1. 1

                                        OOP also builds inheritance on top of that simple and unarguably reasonable idea. That’s where the pain points come in.

                                      2. 3

                                        Concepts like encapsulation and inheritance are actually very useful, but they are not universally useful. OOP forces you to create certain abstractions regardless of how well they fit the actual problem. That is the biggest disservice to problem solving, because now you are creating a system (of classes, inheritance, polymorphic methods, etc.) in such a way that one of its side effects is to solve the problem, instead of directly solving the problem. IMO, OOP is more about how your solution looks than about how faithfully it addresses the problem. The following quote from the article sums this up nicely:

                                        “In my experience, the biggest problem with OOP is that encourages ignoring the data model architecture and applying a mindless pattern of storing everything in objects, promising some vague benefits.”

                                        1. 3

                                          Crossposting what I wrote on HN:

                                          I read this quickly to see if this was the piece that should convince me.

                                          It was not. It is, IMO, a collection of strawmen.

                                          People have abused OOP? Yes.

                                          But citing FizzBuzz Enterprise Edition (which is really funny even for us Java/.Net developers because it is so horribly wrong)

                                          or writing this

                                          Because OOP requires scattering everything across many, many tiny encapsulated objects, the number of references to these objects explodes as well. OOP requires passing long lists of arguments everywhere or holding references to related objects directly to shortcut it.

                                          again, IMO, demonstrates that the author never really understood OOP.

                                          What probably is true however is that a lot of people should unlearn the OOP they learned in school.

                                          1. 1

                                            I am usually against articles that are too bold and assertive. You can rarely pick a concept and say it’s just outright useless or bad; there are always circumstances and conditions to consider. This is a classic example of such an article, especially because of its lack of suitable alternatives. I don’t disagree with the points it makes; however, they shouldn’t be generalized, and a grain of salt is needed. For those interested, I found this Quora question helpful. The part about parallelism in OOP being a problem definitely resonated with me.

                                            1. 1

                                              Here’s a talk Mike Acton gave about this previously that is very high quality and involves fewer strawmen. I would encourage anyone interested in data-oriented design to watch it.