1. 26

  2. 14

    I believe that OO affords building applications of anthropomorphic, polymorphic, loosely-coupled, role-playing, factory-created objects which communicate by sending messages.

    It seems to me that we should just stop trying to model data structures and algorithms as real-world things. It’s like hammering a square peg into a round hole.

    1. 3

      Why does it seem that way to you?

      1. 5

        Most professional code bases I’ve come across are objects all the way down. I blame universities for teaching OO as the one true way. C# and Java code bases are naturally the worst offenders.

        1. 5

          I mostly agree, but feel part of the trouble is that we have to work against language, to fight past the baggage inherent in the word “object”. Even Alan Kay regrets having chosen “object” and wishes he could have emphasized “messaging” instead. The phrase “object-oriented” leads people, as you point out, to first model physical things, since that is the natural linguistic analog of “object”.

          In my undergraduate days, I encountered a required class with a project specifically intended to disabuse students of that notion. The project deliberately tempted you to model the world and go overboard with a needlessly deep inheritance hierarchy, whereas the problem was easily modeled with objects representing more intangible concepts or just directly naming classes after interactions.

          I suppose I have taken that “Aha!” moment for granted and can see how, in the absence of such an explicit lesson, it might be hard to discover the notion on your own. It is definitely a problem if OO concepts are presented as universally good or without pitfalls.

          1. 4

            I encountered a required class with a project specifically intended to disabuse students of that notion. The project deliberately tempted you to model the world and go overboard with a needlessly deep inheritance hierarchy, whereas the problem was easily modeled with objects representing more intangible concepts or just directly naming classes after interactions.

            Can you remember some of the specifics of this? Sounds fascinating.

            1. 3

              My memory is a bit fuzzy on it, but the project was about simulating a bank. Your bank program would be initialized with N walk-in windows, M drive-through windows and T tellers working that day. There might’ve been a second type of employee? The bank would be subjected to a stream of customers wanting to do a heterogeneous variety of transactions, taking differing amounts of time.

              There did not need to be a teller at the drive-through window at all times if there was not a customer there, and there were some precedence rules: if a customer was at the drive-through and no teller was at the window, the next available teller had to go there.

              The goal was to produce a correct order of customers served, and of transactions made, across a day.

              The neat part (pedagogically speaking) was the project description/spec. It went through so much effort to slowly describe and model the situation for you, full of distracting details (though very real-world ones), that it all but asked you to subclass things needlessly, much to your detriment. Are the multiple types of employees completely separate classes, or both subclasses of an Employee? Should Customer and Employee both be subclasses of a Person class? After all, they share the property of having a name to output later. What about DriveThroughWindow vs WalkInWindow? They share some behaviors, but aren’t quite the same.

              Most people here would realize those are the wrong questions to ask. Even for a new programmer, the true challenge was gaining your first understanding of concurrency and following a spec’s rules for resource allocation. But said new programmer had just gone through a week or two on interfaces, inheritance and composition, and oh look, now there’s this project spec begging you to use them!
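
              A minimal sketch of the shape the better solution took (all names hypothetical, not the actual assignment): the useful abstractions turn out to be a transaction and a pool of free tellers, not a Person/Employee/Customer tree.

                  import java.util.ArrayDeque;
                  import java.util.Queue;

                  // One kind of work request, however it arrived.
                  final class Transaction {
                      final String customer;
                      final int minutes; // transactions take differing amounts of time

                      Transaction(String customer, int minutes) {
                          this.customer = customer;
                          this.minutes = minutes;
                      }
                  }

                  // The bank is a scheduler over a pool of free tellers.
                  final class Bank {
                      private final Queue<String> freeTellers = new ArrayDeque<>();

                      Bank(int tellerCount) {
                          for (int i = 1; i <= tellerCount; i++) freeTellers.add("teller-" + i);
                      }

                      String serve(Transaction t) {
                          String teller = freeTellers.poll();
                          if (teller == null) return t.customer + " waits";
                          // ...simulate t.minutes passing, then free the teller again
                          freeTellers.add(teller);
                          return t.customer + " served by " + teller;
                      }
                  }

              The drive-through precedence rule then becomes a line or two of scheduling logic inside serve, rather than another branch of a class hierarchy.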

          2. 2

            Java and C# are the worst offenders and, for the most part, are not object-oriented in the way you would infer that concept from, for example, the Xerox or ParcPlace use of the term. They are C in which you can call your C functions “methods”.

            1. 4

              At some point you have to just let go and accept the fact that the term has evolved into something different from the way it was originally intended. Language changes with time, and even Kay himself has said “message-oriented” is a better word for what he meant.

              1. 2

                Yeah, I’ve seen that argument used over the years. I might as well call it the no true Scotsman argument. Yes, they are multi-paradigm languages and I think that’s what made them more useful (my whole argument was that OOP isn’t for everything). Funnily enough, I’ve seen a lot of modern C# and Java that decided message passing is the only way to do things and that multi-thread/process/service is the way to go for even simple problems.

                1. 4

                  The opposite of No True Scotsman is Humpty-Dumptyism; you can always find a logical fallacy to discount an argument you want to ignore :)

          3. 2
            Square peg;  
            Round hole;  
            Hammer hammer;  
            hammer.Hit(peg, hole);
            
            1. 4

              A common mistake.

              In object-orientation, an object knows how to do things itself. A peg knows how to be hit, i.e. peg.hit(…). In your example, you’re setting up your hammer to be constantly changed and modified as it needs to be extended to handle different ways to hit new and different things. In other words, you’re breaking encapsulation by requiring your hammer to know about other objects’ internals.
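
              A minimal sketch of the difference (hypothetical types):

                  // Each thing knows how to respond to being hit, so Hammer
                  // never needs to learn about pegs' or nails' internals.
                  interface Hittable {
                      void hit(double force);
                  }

                  class Peg implements Hittable {
                      private double depth = 0; // internal state stays private

                      public void hit(double force) { depth += force * 0.1; }
                  }

                  class Nail implements Hittable {
                      private boolean bent = false;

                      public void hit(double force) { if (force > 50) bent = true; }
                  }

                  class Hammer {
                      // Works with any Hittable; hitting new kinds of things
                      // requires no change to Hammer at all.
                      void strike(Hittable target) { target.hit(42.0); }
                  }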

            2. 2

              Your use of a real world simile is hopefully intentionally funny. :)

              1. 2

                That sounds great, as an AbstractSingletonProxyFactoryBean is not a real-world thing, though if I can come up with a powerful and useful metaphor, like the “button” metaphor in UIs, then it may still be valuable to model the code-only abstraction on its metaphorical partner.

                We need to be cautious that we don’t throw away the baby of modelling real world things as real world things at the same time that we throw away the bathwater.

                1. 2

                  Factory

                  A factory is a real world thing. The rest of that nonsense is just abstraction disease, either used to work around language expressiveness problems or born of people adding abstractions for the sake of making patterns.

                  We need to be cautious that we don’t throw away the baby of modelling real world things as real world things at the same time that we throw away the bathwater.

                  I think OOP has its place in the world, but it is not for every problem (maybe not even the majority of them).

                  1. 3

                    A factory in this context is a metaphor, not a real world thing. I haven’t actually represented a real factory in my code.

                    1. 2

                      I know of one computer in a museum that if you boot it up, it complains about “Critical Error: Factory missing”.

                      (It’s a control computer for a factory, it’s still working, and I found it utterly charming that someone modeled that case and shows an appropriate error.)

                      1. 2

                        But they didn’t handle the “I’m in a museum” case. Amateurs.

                2. 1

                  You need to write, say, a new air traffic control system, or a complex hotel reservation system, using just the concepts of data structures and algorithms? Are you serious?

                3. 8

                  The biggest issue with OO in my experience is the fact that objects are opaque state machines. The more objects you have in your program, the more independent states you have to keep in your head when reasoning about it. Since objects can be referenced in many places and their states can be interdependent, the complexity of the system grows really fast.

                  I find that it’s pretty much impossible to do any local reasoning in OO projects. This becomes problematic when you’re working on large systems where you’re not familiar with all the parts of the code, and you never know if a particular change you’re making might affect, via side effects, some other code you don’t know about in an unexpected way.
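
                  A tiny illustration of the aliasing problem (hypothetical names): two components hold references to the same mutable object, so a “local” change in one silently changes what the other observes.

                      import java.util.ArrayList;
                      import java.util.List;

                      class Cart {
                          final List<String> items = new ArrayList<>();
                      }

                      class CheckoutView {
                          private final Cart cart;
                          CheckoutView(Cart cart) { this.cart = cart; }
                          int lineCount() { return cart.items.size(); }
                      }

                      class PromotionEngine {
                          private final Cart cart;
                          PromotionEngine(Cart cart) { this.cart = cart; }
                          void apply() { cart.items.add("free gift"); } // mutates shared state
                      }

                      // Cart cart = new Cart();
                      // CheckoutView view = new CheckoutView(cart);
                      // new PromotionEngine(cart).apply();
                      // view.lineCount() has changed, though nothing touched "view".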

                  1. 6

                    As a former graphic/UX designer I like the affordance angle to programming language and library design. Personally I’m leaning more and more towards ML-style languages or even dependently typed languages rather than OO languages these days, because I find these languages afford me a better ability to design my own domain-specific affordances in the type systems themselves. Alas, they’re not a silver bullet at the moment - there is still work to be done in making those affordances clearer to the user. Sometimes an intricate type signature or datatype can say so much that it is overwhelming, or some incorrectly placed term can make the type checker explode with a weird error. So at the moment library designers are forced to strike a balance - settling on a middle ground, trading simpler API surface areas for the chance of encountering some runtime errors. We’re making progress though, and that makes me excited!

                    1. 6

                      I suspect a lot of the problem is the way it was hyped and taught.

                      The notion that real world objects map to program objects is rubbished by the Liskov Substitution Principle.
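
                      The standard square/rectangle illustration of this (a sketch): in the real world a square is a rectangle, but as mutable program objects the substitution breaks.

                          class Rectangle {
                              protected int w, h;
                              void setWidth(int w)  { this.w = w; }
                              void setHeight(int h) { this.h = h; }
                              int area() { return w * h; }
                          }

                          // Real-world intuition says Square IS-A Rectangle...
                          class Square extends Rectangle {
                              @Override void setWidth(int w)  { this.w = w; this.h = w; }
                              @Override void setHeight(int h) { this.w = h; this.h = h; }
                          }

                          // ...but code written against Rectangle's contract now breaks:
                          // Rectangle r = new Square();
                          // r.setWidth(2); r.setHeight(3);
                          // r.area() is 9, not the 6 Rectangle's contract promises.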

                      A far better approach is to say program “Objects” are utterly unrelated to our real world intuition of Objects.

                      An Object instance is merely a binding of a set of names to a set of values, for which a boolean expression (which we call the class invariant) always holds true.

                      This rids us of the urge to do “too much work in the constructor”; after all, it’s just binding names to values.

                      So why would we even want such a concept as an Object?

                      Because we want our functions to be correct.

                      What does correct mean? It means if a given precondition holds, we promise to fulfill a postcondition.

                      So if you have an object of a certain type, you’re guaranteed the invariant holds. If that invariant implies the precondition is met, then static type checking guarantees you will only invoke this function with values for which the precondition holds.
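
                      A minimal sketch of that machinery (hypothetical names): the constructor establishes the invariant once, and the type then discharges the precondition at every call site.

                          // Invariant: value > 0, established in the constructor and
                          // never violated afterwards (the field is final).
                          final class PositiveInt {
                              final int value;

                              PositiveInt(int value) {
                                  if (value <= 0) throw new IllegalArgumentException("must be > 0");
                                  this.value = value;
                              }
                          }

                          final class Arithmetic {
                              // Precondition "divisor != 0" is implied by PositiveInt's
                              // invariant, so static type checking guarantees every call
                              // meets it; postcondition: returns n / divisor exactly.
                              static int divide(int n, PositiveInt divisor) {
                                  return n / divisor.value;
                              }
                          }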

                      This submission should be upvoted a lot more…

                      …not because I think we should all go off and do program proving.

                      But because if you don’t understand it, you don’t understand what you are doing when you program.

                      1. 1

                        You can have objects that model real-world things and design by contract or formal specification (there’s a book called Object Orientation in Z that goes through this process for a collection of different OO extensions to Z, and of course Object-Oriented Software Construction shows how to do it and explicitly builds on the work of Liskov). With that in mind, I don’t believe that the LSP says that you can’t model real world things as objects.

                        1. 2

                          The notion that you can take what a non-programmer calls an object, and inheritance, and map them naively onto programming objects is hopeless and misleading.

                          Certainly you can model real world objects… and many other real world things common people wouldn’t call objects, and an infinity of non-real-world things not remotely objectlike, all modeled by programming objects.

                          The naming is confusing and misleading and, from a teaching point of view, was a mistake. It produced a generation of badly structured code with just plain wrong inheritance trees.

                          Your code suddenly improves dramatically when you throw away the mental crutch of programming objects being anything like real world objects.

                          Programming is a profoundly mathematical activity, but sadly the harsh mathematical realities of it tend to be blunted by “testing it into the shape of the product required”.

                          Those harsh mathematical realities re-emerge sharply when we attempt to re-use code.

                          Then suddenly we whinge “Code re-use is hard” instead of seeing the mathematics that governs everything we do.

                      2. 5

                        This is so good. An object-oriented approach breaks down when you can’t see how to design the objects you need, so you just put procedural code into the objects you have.

                        I predict similar “failures” in functional programming codebases as that paradigm continues its journey to the mainstream. In my UIKonf talk I called this the “imperative trapdoor”: you can always hide imperative code behind a veneer of objects or functions, but then you lose the paradigm benefits of objects or functions.

                        1. 2

                          Thanks, this is a good observation!

                          1. 2

                            Pure FP with explicit effects can help push you towards a better way, but it’s always possible to end up with an equally enterprisey mess of monad transformers and lenses… I’m hoping effect systems can alleviate some of the former problems. But yes, poor program design will interact poorly with any paradigm.

                          2. 3

                            Great stuff. Re the conditionals, I look at it this way: An object is a choice.

                            When you have sub-type polymorphism, you can make a choice in one part of a program and create an object that embodies it. When the object is bound, the choice is made and the conditional can disappear from the contexts where the object is used.
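
                            A sketch of that idea (hypothetical names): the conditional runs once, at construction; every later use site is conditional-free.

                                interface Compression {
                                    byte[] compress(byte[] data);
                                }

                                class GzipCompression implements Compression {
                                    public byte[] compress(byte[] data) { /* real gzip elided */ return data; }
                                }

                                class NoCompression implements Compression {
                                    public byte[] compress(byte[] data) { return data; }
                                }

                                class Archiver {
                                    private final Compression compression; // the choice, embodied

                                    Archiver(Compression compression) { this.compression = compression; }

                                    // No if/else here: the decision was made when the object was bound.
                                    byte[] pack(byte[] data) { return compression.compress(data); }
                                }

                                // The only conditional lives at the edge of the program:
                                // Compression c = useGzip ? new GzipCompression() : new NoCompression();
                                // Archiver a = new Archiver(c);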

                            1. 1

                              I agree and use that model, and it makes me think “I answered this question back when I constructed this object”. That’s something I can certainly see as a source of confusion.

                            2. 2

                              Abstraction and modularity.
                              Admittedly this can come at a cost: too much code/too many layers, intertwined dependencies, diamond dependencies, obfuscation.
                              As with all methodologies, use it in moderation, or at least know the alternatives. Even within OO there are variations, such as “data-only structs/classes with logic in other functions”, template functions, the strategy pattern, etc., which may offer cleaner or at least less complicated solutions; a sketch of the first style follows below. If the alternatives aren’t popular in your language, try out some other languages with different paradigms first; they may inspire you when you come back to solve your problem.
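
                              For instance, a minimal sketch of the “data-only classes, logic elsewhere” style (hypothetical names):

                                  // Data-only class: named values, no behavior.
                                  final class Invoice {
                                      final double amount;
                                      final String currency;

                                      Invoice(double amount, String currency) {
                                          this.amount = amount;
                                          this.currency = currency;
                                      }
                                  }

                                  // Logic lives in plain functions that take the data as input.
                                  final class Billing {
                                      static double withTax(Invoice inv, double rate) {
                                          return inv.amount * (1 + rate);
                                      }
                                  }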