1. 39
  1.  

  2. 12

    He didn’t really answer the question though :(

    I think they’re CONSIDERED in opposition as a historical thing. While objects entered heavy use in the ’80s, the paradigm of “everything is an object” started with Java in the mid-’90s. Java rapidly became the most popular language, and functional languages started representing themselves as “not OOP”. Note that before the Java era we had CLOS and OCaml, both of which are functional languages with objects.

    1. 5

      You are right, he didn’t answer it! He answered “Is FP in opposition to OO”. I think your answer is pretty accurate. People confused C++ and Java as OOP (instead of recognizing them for what they were, Class Based Programming). And because these languages mutated state, FP is in opposition to them, and therefore OOP.

      I think more importantly, the pop culture has no idea what OOP is and therefore people are confused when they think FP is in opposition to OOP.

      1. 5

        I think more importantly, the pop culture has no idea what OOP is and therefore people are confused when they think FP is in opposition to OOP.

        I don’t think it’s fair to say that the “pop culture” doesn’t know what “OOP is”, because there really isn’t a definition of OOP. A lot of people equate it with Smalltalk, but you could also say OOP is Eiffel, or Ada, or Simula…

        1. 3

          People confused C++ and Java as OOP (instead of recognizing them for what they were, Class Based Programming).

          I don’t really think that classes are the problem*. They were not just Class Based Programming, but imperative Class Based Programming inspired by C. If you look at Smalltalk (which is also Class Based), the missing component is late binding, which allows you to do all kinds of neat stuff and enables a cleaner style of programming (imho).

          *Although I really like Self, which is basically a prototype-based Smalltalk-like system.

          1. 3

            Unfortunately, to most people class-based programming and OOP are the same.

            1. 5

              I don’t know if “most” people do, but there is certainly a decent collection of people out there who think this. Consider this document (“Object-Oriented Programming in C”, revised December 2017), which starts out with this:

              Object-oriented programming (OOP) is not the use of a particular language or a tool. It is rather a way of design based on the three fundamental design meta-patterns:

              • Encapsulation – the ability to package data and functions together into classes
              • Inheritance – the ability to define new classes based on existing classes in order to obtain reuse and code organization
              • Polymorphism – the ability to substitute objects of matching interfaces for one another at run-time
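
              The three meta-patterns in that definition can be sketched in a few lines; here’s an illustration in TypeScript (the Shape/Circle/Square names are mine, not from the quoted document):

              ```typescript
              // A minimal sketch of the three meta-patterns from the quoted definition.
              // Names (Shape, Circle, Square) are illustrative, not from the document.

              abstract class Shape {
                // Encapsulation: data and the functions that use it live together.
                constructor(private name: string) {}
                abstract area(): number;
                describe(): string {
                  return `${this.name}: area ${this.area().toFixed(2)}`;
                }
              }

              // Inheritance: new classes defined from Shape, reusing describe().
              class Circle extends Shape {
                constructor(private r: number) { super("circle"); }
                area(): number { return Math.PI * this.r ** 2; }
              }

              class Square extends Shape {
                constructor(private side: number) { super("square"); }
                area(): number { return this.side ** 2; }
              }

              // Polymorphism: objects of a matching interface substituted at run time.
              const shapes: Shape[] = [new Circle(1), new Square(2)];
              for (const s of shapes) {
                console.log(s.describe());
              }
              ```
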
              1. 1

                most people I have met while programming professionally in New Zealand.

                1. 1

                  Inheritance – the ability to define new classes based on existing classes in order to obtain reuse and code organization

                  I think this is universally accepted as an anti-pattern, by both OO and FP programmers.

                2. 2

                  I think how most C++, Java, and .NET programmers code supports your position. At least, that’s how most of the code I’ve seen works, looking at code from… everywhere. Whatever my sample bias is, it’s not dependent on any one location. The bad thinking clearly spread along with the languages and tools themselves.

            2. 8

              Functional Programming and Object-Oriented Programming are dual, in a very precise way.

              Data types and constructing inhabitant terms is dual to interface types and observation methods. Functions are interfaces with a single method that takes an argument and returns a result. Isomorphic data types can be understood as the existence of particular functions that map A to B and back from B to A, preserving identities. Dual to functions are so-called differences: these are data that witness an output argument belonging to an input argument. A basic argument could show that two classes are behaviorally equivalent whenever there is no difference between them (the differences between A and B, and the differences between B and A, are empty).
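
              To make “functions are interfaces with a single method” concrete, here is a small sketch in TypeScript; the names (Succ, apply) are invented for illustration:

              ```typescript
              // Sketch of "a function is an interface with a single method".
              // The names (Succ, apply) are invented for illustration.

              interface Succ {                      // interface type: one observation method
                apply(x: number): number;
              }

              const succObject: Succ = {            // an "object" implementing the interface
                apply(x: number): number { return x + 1; },
              };

              const succFn = (x: number): number => x + 1;  // the same behavior as a plain function

              // No observable "difference" between the two: every observation agrees.
              for (let n = 0; n < 10; n++) {
                console.assert(succObject.apply(n) === succFn(n));
              }
              ```
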

              In Functional Programming, one is interested in the termination of a program. In Object-Oriented Programming, one is interested in the dual of termination: the productivity of a process. Consider that the difficulty of preventing deadlocks is similar to the difficulty of ensuring termination.

              In terms of logic, many advancements in recent years have brought us: constructive and computational interpretations of classical logics; languages that allow the expression of focusing on both the output of a program and the input of a process; polarization of types to model within one system both functional aspects and objective aspects; and a better understanding of paradoxical mathematical foundations, which led to the theory of non-wellfounded relations and brings us the dual of recursion/induction, called corecursion/coinduction.

              In terms of abstract algebra, we now know that notions of quotient types of languages by certain equivalence/congruence relations is dual to notions of subtypes of abstract objects by certain bisimulations/bisimilarities of behaviors.

              In terms of philosophy, it is understood that rationality and irrationality can also be applied to software systems. “The network is unpredictable,” is just another way of saying that the computer system consists of an a priori irrational component. Elementary problems in Computing Science, such as the Halting Problem, witness the theoretical existence of irrational components. Those who assume every system is completely rational, or can always be considered rational, suffer from a “closed world syndrome.”

              1. 3

                From a layman’s perspective, I think it is dual in another way too: in how mutability is handled. The functional way of organizing programs often strives to push the state out, with a set of pure functions inside, stitched together on some skeleton that handles state or IO. On the other hand, OO programming encapsulates the state, striving to provide as simple an interface to the outside world as possible, i.e. the mutability is hidden inside rather than pushed out.
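
                A small contrast sketch of the two styles, with invented names (the same counter written both ways):

                ```typescript
                // Contrast sketch (illustrative names): the same counter written both ways.

                // Functional style: state pushed out; the core is a pure function.
                const increment = (count: number): number => count + 1;  // no hidden state

                let state = 0;                        // state lives in the outer "skeleton"
                for (let i = 0; i < 3; i++) state = increment(state);
                console.log(state);  // 3

                // OO style: state hidden inside, a small interface exposed.
                class Counter {
                  private count = 0;                  // mutability hidden behind the interface
                  increment(): void { this.count += 1; }
                  value(): number { return this.count; }
                }

                const c = new Counter();
                for (let i = 0; i < 3; i++) c.increment();
                console.log(c.value());  // 3
                ```
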

              2. 4

                If you look at the success of the internet (beyond just the web), I think it’s safe to say OO, not FP, is the most scalable system-building methodology. An important realization that Alan Kay emphasizes here is that OO and FP are not incompatible at all. A formal merging of FP and OO can be seen with the Actor Model by Carl Hewitt.

                In other words, I think FP can supercharge OO, and it seems the rock-stable and fast systems built with Erlang and friends prove this out.

                1. 7

                  I think servers have scaled now based on solid messaging protocols that are not OOP in nature. And databases are still relational, last I checked.

                  1. 6

                    Alan Kay would say that OOP’s foundation is messaging protocols.

                    1. 2

                      Precisely! The whole internet is an object-oriented system. The smallest model of an object is a computer. So what is an object? It’s a computer that can receive and send messages. Systems like Erlang run millions of little computers on one physical computer, for instance.
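
                      A rough sketch of that model of an object, a little computer whose only interface is messages, could look like this (all names invented for illustration):

                      ```typescript
                      // Rough sketch of "an object is a little computer that sends and
                      // receives messages": a mailbox-driven counter. Names are invented.

                      type Message =
                        | { kind: "incr" }
                        | { kind: "get"; reply: (n: number) => void };

                      class CounterProcess {
                        private count = 0;                 // the little computer's private memory
                        private mailbox: Message[] = [];

                        send(msg: Message): void {         // the only way in: a message
                          this.mailbox.push(msg);
                        }

                        run(): void {                      // drain the mailbox, one message at a time
                          for (const msg of this.mailbox.splice(0)) {
                            if (msg.kind === "incr") this.count += 1;
                            else msg.reply(this.count);
                          }
                        }
                      }

                      const proc = new CounterProcess();
                      proc.send({ kind: "incr" });
                      proc.send({ kind: "incr" });
                      proc.send({ kind: "get", reply: (n) => console.log(n) });  // logs 2
                      proc.run();
                      ```
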

                      1. 4

                        That’s a real stretch. I might as well claim that REST’s success is entirely because it is really just functional programming, as it passes the state along with the function, and that it is pretty much just a monad.

                        Also, SQL is still king and no object-oriented database approach has supplanted it.

                    2. 4

                      They use the FSM model. Hardware investigations taught me they fit the Mealy and Moore models, depending on what subset of the protocol is being implemented or how one defines terms. Even most software implementations used FSMs. Maybe all of them, for legacy implementations, given what I’ve seen, but there could be exceptions.

                      And, addressing zaphar’s claim, their foundation, or at least abstracted form, may best be done with Abstract State Machines, described here. Papers on them argue they’re more powerful than the Turing model since they operate on mathematical structures instead of strings. Spielmann claims Turing Machines are a subset of ASMs. So, the Internet was built on the FSM model which, if we must pick a foundation, matches the ASM model best, even though the protocols and FSMs themselves predate the model. If a tie-breaker is needed for foundations, ASMs are also one of the most successful ways for non-mathematicians to specify software, in terms of ease of use and applicability.
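
                      For readers unfamiliar with the terms: a Moore machine’s output depends on the current state alone (a Mealy machine’s depends on state and input). A minimal sketch, with states and alphabet invented for illustration:

                      ```typescript
                      // Minimal Moore machine sketch (output is a function of the state
                      // alone); the states and alphabet are invented for illustration.

                      // Detect whether a bit string ends in "1".
                      const transitions: Record<string, string> = {
                        "s0,0": "s0", "s0,1": "s1",
                        "s1,0": "s0", "s1,1": "s1",
                      };
                      const output: Record<string, string> = { s0: "no", s1: "yes" };  // per state

                      function run(bits: string): string {
                        let state = "s0";
                        for (const b of bits) state = transitions[`${state},${b}`];
                        return output[state];
                      }

                      console.log(run("1011"));  // yes
                      console.log(run("10"));    // no
                      ```
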

                      1. 3

                        You just made the engineer inside me happy :) FSMs are the first thing we learned in engineering school, but too often software is just hacked together based on code and not design. FSMs form the basis of any protocol/service, e.g. TCP, FTP, TLS, SSH, DNS, HTTP, etc.
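
                        As one illustration, a toy subset of TCP’s client-side connection states expressed as an FSM (heavily simplified from the RFC 793 state diagram):

                        ```typescript
                        // A toy subset of the TCP connection state machine (client side),
                        // showing a protocol expressed as an FSM; heavily simplified from
                        // RFC 793. Event names are invented shorthand.

                        const tcp: Record<string, string> = {
                          "CLOSED|active_open": "SYN_SENT",        // send SYN
                          "SYN_SENT|recv_syn_ack": "ESTABLISHED",  // send ACK
                          "ESTABLISHED|close": "FIN_WAIT_1",       // send FIN
                          "FIN_WAIT_1|recv_ack": "FIN_WAIT_2",
                          "FIN_WAIT_2|recv_fin": "TIME_WAIT",      // send ACK
                        };

                        function drive(events: string[]): string {
                          let state = "CLOSED";
                          for (const e of events) state = tcp[`${state}|${e}`];
                          return state;
                        }

                        console.log(drive(["active_open", "recv_syn_ack"]));  // ESTABLISHED
                        ```
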

                        1. 3

                          The cool thing is those can be implemented and verified at the type level in dependently typed functional languages. See Idris’ ST type. Session types are another example. Thankfully I can see movement in the FSM direction on the front end with stuff like Redux and Elm, but alas, it will be a while before these can be checked in the type system.

                    3. 4

                      I don’t think the internet is a good reference model. IMO the internet is largely a collection of “whatever we had at the time” with a sprinkle of “this should work” and huge amounts of duct tape on top. The internet succeeded despite being built on OO, not because of it. Though I think with FP the internet would likewise have succeeded in spite of it, not because of it.

                      There is no one true methodology, I think it’s best if you mix the two approaches where it makes sense to get the best of both worlds.

                      1. 1

                        Let me be more specific: by internet I mean TCP/IP and friends, not HTTP and friends.

                        1. 2

                          Even TCP/IP and friends are a lot of hacks and “//TODO this is a horrible hack but we’ll fix it later”. HTTP is just the brown-colored cream on top of the pie that is the modern internet.

                          It’s why DNSSEC and IPv6 have seen so little adoption: all the middleboxes someone hacked together once are still up and running with terrible code, and they would have to be fully replaced to not break either protocol.

                          I’ve seen enough routers that silently malform TCP packets or (more fun) recalculate the checksum without checking it, making data corruption a daily occurrence. Specs aren’t followed, they’re abused.

                          1. 2

                            And yet the internet has never shut down since it started running, with all its atoms replaced many times over. Billions of devices are connected and the whole system manages to span the entire planet. It just works.

                            It’s an obviously brilliant and successful design that created tens of trillions of dollars in value. I think you will be hard pressed to find another technology that was this successful and that changed the world to the degree the internet has.

                            Does it have flaws like the ones outlined? Yes of course. Does it work despite them? Yes!

                            The brilliance of the internet is that even when specs are not followed, the system keeps on working.

                            1. 2

                              I think it’s more in spite of how it was built and not because of it.

                              And the internet has shut down several times by now, or at least large parts of it (just google “BGP outage” or “global internet outage”).

                              It’s not a brilliant design but successful, yes. It’s probably just good enough to succeed.

                              Not brilliant; it merely works by accident, and the accumulated duct tape keeps it going despite some hiccups along the way.

                              If the internet were truly brilliant, it would use a fully meshed overlay network and not rely on protocols like BGP for routing. It would also not have to package everything in Ethernet frames (which are largely useless and could be replaced with more efficient protocols).