1. 54
    1. 6

      I mostly agree, but I would say that OO is evolving toward FP and they aren’t that far apart (at least in certain circles, maybe not in 20-year-old code).

      Good OO is also principled about state and I/O, just like FP.

      Related comment I wrote on “How to Avoid the Assignment Statement”

      https://news.ycombinator.com/item?id=22835750

      Older comment on “Disadvantages of Purely Functional Programming”

      https://news.ycombinator.com/item?id=11841893

      So while I agree with the general gist, I would add the caveat that it’s easier to do FP in OO languages than OO in a functional language. And the former is pretty natural.


      Another way I think about it is “dependency inversion of state and I/O”, which I wrote about here:

      http://www.oilshell.org/blog/2020/04/release-0.8.pre4.html#dependency-inversion-leads-to-pure-interpreters

      And I also think in many programs it’s useful to “cheat” a little to get it working, and then refactor to a more principled architecture.

      1. 9

        There’s a fundamental difference between OO and FP. With FP the core idea is to keep data and code separate. Programs are written as pipelines of functions that you pass data through, and get another piece of data at the other end. In modern FP, data is also immutable, so the functions are referentially transparent. The big advantage here is that data is inert by nature because it doesn’t have any behaviors associated with it, and it’s transparent. What you see is what you get because it’s just data.

        And you operate on this data by combining a common set of functions from the standard library. Once you learn how these functions work, it becomes trivial to transform any kind of data using them. It’s akin to having a bunch of Lego pieces that you can put together in many different ways to accomplish different tasks.
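        As a rough sketch of that “pipeline of standard functions over plain data” style in Python (the data and numbers are made up for illustration):

```python
# Each step is a generic standard-library function applied to inert,
# immutable data -- no methods, no hidden state, just transformation.
from functools import reduce

orders = (
    {"item": "book", "qty": 2, "price": 12.0},
    {"item": "pen", "qty": 10, "price": 1.5},
    {"item": "desk", "qty": 1, "price": 200.0},
)

totals = map(lambda o: o["qty"] * o["price"], orders)   # transform
big = filter(lambda t: t > 10, totals)                  # select
grand_total = reduce(lambda a, b: a + b, big, 0.0)      # combine

print(grand_total)  # 239.0
```

Once you know map/filter/reduce, the same three pieces compose over any shape of data.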

        On the other hand, objects are state machines that have some internal state with the methods as the API for interacting with that state. An application written in OO style ends up being a graph of opaque interdependent state machines, and this tends to be quite difficult to reason about. This is why debugging is such a big part of OO development. It’s impossible to reason about a large OO application because there’s just too many things that you have to keep in your head. So, the only thing you can do is put a break point, get the application in the desired state, and try to figure out how you got there.

        Meanwhile, classes are really ad hoc DSLs, each class defines its own custom API and behaviors in form of its methods, and knowing how one class works tells you absolutely nothing about the next class. The more classes you have in your program, the more stuff you have to keep in your head to work with it effectively.

        This also makes it much more difficult to learn APIs for libraries. When you call a method, and you get an object graph back, now you have to learn about each of these objects, and how they behave. When your API is data driven, this problem doesn’t exist. You call a function, get some data back, and that’s the end of the story.

        Rich Hickey describes this problem here very well, and that matches my experience very closely.

        1. 6

          No, that is sort of an old way of thinking about OO. There are a lot of people who don’t write it that way, including me. You can do something much closer to FP in an OO language.

          That is the point of the linked comments. Excerpt:

          I use “functions and data” (a la Rich Hickey).

          Except my functions and data are both CLASSES. Data objects are basically structs, except they can do things like print themselves and answer simple queries based on their values, which helps readability (e.g. one-liners like word.AsFuncName()).

          Function objects are simply classes with configuration passed to constructors. They usually have a single method, but multiple methods are often useful too. Calling this method is basically equivalent to calling a curried function. But this is supremely useful for both compilers and servers, because often you have config/params that are constant once you reach main(), and then you have params that vary per request or per file processed. Many functions depend on both, and it’s cleaner to separate the two kinds of params.

          So both “functions and data” are effectively and usefully implemented as classes.
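          A minimal Python sketch of both halves (Word and Formatter are invented names for illustration, not from the Oil codebase):

```python
from dataclasses import dataclass

# "Data" as a class: basically a struct, plus tiny read-only helpers.
@dataclass(frozen=True)
class Word:
    text: str
    def as_func_name(self) -> str:
        return self.text.replace("-", "_")

# A "function" as a class: constant config goes to the constructor,
# per-call params go to the method -- like a curried function.
class Formatter:
    def __init__(self, prefix: str):       # fixed once, in main()
        self.prefix = prefix
    def format(self, word: Word) -> str:   # varies per call
        return self.prefix + word.as_func_name()

fmt = Formatter("do_")                     # "partial application"
print(fmt.format(Word("read-line")))       # do_read_line
```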

          The Oil interpreter started out maybe 60% in this style, and is approaching 90%. It’s tens of thousands of lines of code, so it’s not a small change.

          There are a few classes that are state machines, but they are explicitly limited to about 10% of the interpreter, just as you would do in a functional language. Most of it is parsing, which has “local” mutable state inside and an interface that’s basically a function.

          Again, from the linked comments: the thing I found funny is that for lexing and parsing, languages like OCaml just borrow the exact same mutable algorithms from C (LALR parsing and DFAs for regexes). The mutable escape hatch of OCaml is essential.

          Lexing and parsing are inherently stateful. As long as it’s localized, it’s no problem. FP and OO both agree on that.

          1. 1

            The thing is that, in practice, you rarely pass functions around in Clojure. The vast majority of the time you’re passing around plain data; you pass that data through different functions to transform it, and get a new piece of data back. There is very little reason to pass functions around in my experience. So, yes, you can do similar stuff to OO with functions and closures, but that’s not generally how you end up structuring your applications.

            And yes, you can write FP style code in OO languages, but then you’re really not making the most out of the paradigm the language was designed for. You’re much better off doing FP in an actual FP language.

        2. 3

          This also makes it much more difficult to learn APIs for libraries. When you call a method, and you get an object graph back, now you have to learn about each of these objects, and how they behave. When your API is data driven, this problem doesn’t exist. You call a function, get some data back, and that’s the end of the story.

          It creates another problem though: exposing implementation details. Your clients may start assuming things about the data structure that need to be changed. This article tells the story of an API that exposed a list [1]. It turned out the list was too slow, and they wanted to change it. Unfortunately the API’s clients assumed it was a list. The solution given in the article? Hide the data behind a constructor and selectors. That’s basically a class definition.

          1: https://itnext.io/information-hiding-for-the-functional-programmer-b56937bdb789
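          A sketch of the article’s fix in Python terms (make_queue/push/peek are made-up names): clients only go through a constructor and selectors, so the backing structure can change (e.g. list to deque) without breaking them:

```python
from collections import deque

def make_queue(items=()):   # constructor
    return deque(items)     # the backing structure is now swappable

def push(q, x):             # operation
    q.append(x)
    return q

def peek(q):                # selector
    return q[0]

q = push(make_queue([1, 2]), 3)
print(peek(q))  # 1
```

As the parent says, this constructor-plus-selectors bundle is basically a class definition by another name.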

          1. 2

            Not really, because I choose what data I return from the API functions in a library. Meanwhile, your anecdote could’ve happened just as easily with an OO API. In fact, I would argue that it’s a far more common problem in OO, since you return a graph of objects, and if you ever need to change any of them down the line you’ll be breaking the API for all your users.

            Having been working with Clojure for around a decade now, I can tell you that this problem has yet to come up for me in practice.

            1. 1

              In fact, I would argue that it’s a far more common problem in OO since you return a graph of objects, and if you ever need to change any of them down the line you’ll be breaking the API for all your users.

              One of the core tenets of OOP is to program to the interface, not the implementation. If you change the implementation but keep the interface unchanged, you are guaranteed to not break the downstream consumers.

              1. 1

                Interfaces only address the problem partially, because if the interface ever changes that will still create a breaking change. My experience is that changes to interfaces in OO happen far more regularly than changes to the shape of the data in functional APIs.

                1. 1

                  As per the Java Language Spec,[1]

                  …here is a list of some important binary compatible changes that the Java programming language supports: … Adding new fields, methods, or constructors to an existing class or interface.

                  (Among others)

                  This is similar to making an additive change to the shape of data, e.g. adding a new field to a map which is consumed by a function that doesn’t use the field.

                  [1] https://docs.oracle.com/javase/specs/jls/se7/html/jls-13.html
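                  For instance, a small Python sketch of such an additive change (field names invented): a consumer that picks out only the fields it uses is unaffected when a new field appears.

```python
# The consumer depends only on the fields it reads, not the whole shape.
def full_name(user):
    return user["first"] + " " + user["last"]

v1 = {"first": "Ada", "last": "Lovelace"}
v2 = {"first": "Ada", "last": "Lovelace", "email": "ada@example.com"}  # field added

print(full_name(v1))  # Ada Lovelace
print(full_name(v2))  # Ada Lovelace -- unaffected by the addition
```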

                  1. 1

                    Having worked with Java for around a decade, I’m well aware of how interfaces work. Yet, my experience is that breaking changes in a language like Java happen far more frequently than they do in a language like Clojure. So, while there are mitigating factors in theory, I don’t find they translate to having stable APIs in practice. That’s just my experience though.

            2. 1

              How does you choosing what data you return from the API prevent you from exposing implementation details? Or are you saying you just don’t care because you can change the API interface whenever you feel like it?

              1. 1

                I don’t really follow your question. When I write a function, I explicitly state what data it returns, and the shape of that data. The shape of the data is explicitly part of the API contract.

                1. 1

                  The shape of the data is explicitly part of the API contract.

                  Because the approach forces you to do that even when the shape of the data is an implementation detail that you don’t want to expose.

                  1. 1

                    That statement doesn’t make sense. The shape of the data is explicitly part of the contract provided by the API. It is not an implementation detail.

                    1. 1

                      Every implementation choice that is not relevant to the API’s core functionality is an implementation detail. For example, it’s not relevant for an iterator to know the data structure of a collection. All the iterator cares about is that it can iterate over the elements. The concrete shape of the collection (list, set, map, queue, stack, whatever) is an implementation detail from the point of view of the iterator.

                      Just because you decide to make an implementation detail a part of the API’s contract, that doesn’t mean it’s not an implementation detail anymore. That’s essentially the problem in the article that I linked.

                      1. 1

                        In Clojure, you have a seq interface, so any function can iterate over any collection, the same way as it works with Java interfaces. So, no, you’re not leaking any more implementation details than you would with an OO API. You’re just arguing a straw man here.

                        1. 1

                          Just because iteration in Clojure does not depend on the implementation details of the data structure, that doesn’t mean the API’s clients can’t write code that does depend on such details. So, that does not contradict my point at all.

                          I don’t know Clojure that well, so I can’t give you an example in Clojure. However, I’m pretty confident that even Clojure does not magically solve the problem of leaking implementation details (otherwise, why would it have interfaces in the first place).

                          1. 1

                            Again, there’s no difference between FP and OO here. Both Clojure and Java have collection interfaces. If you have an OO API that says it returns a collection, that collection is backed by a concrete implementation, such as a list, internally. That’s an implementation detail. OO doesn’t magically do away with collections. And this is why I find your whole line of argument completely absurd.

                             Also, having actually used Clojure for around a decade now professionally, I can tell you that what you’re describing has never ever come up in practice. So, forgive me if I’m not convinced by your armchair analysis here.

                            1. 1

                              Again, there’s no difference between FP and OO here. If you have an OO API that says it returns a collection, and that collection is backed by a list internally, that’s an implementation detail. OO doesn’t magically do away with collections.

                              I don’t follow this argument at all. Yeah, in the OO case there is an implementation detail. However, as you pointed out with the word internally: that implementation detail is not exposed to the client. Meaning, the API owner can change that list to a map or a tree or any other data structure at any time without having to worry about the client’s code at all.

                              It’s also a little bit of a straw man because this discussion was not about OO vs FP. It was about your statement that APIs should produce/consume raw data rather than objects.

                               Having actually used Clojure for around a decade now professionally, I can tell you that what you’re describing has never ever come up in practice. So, forgive me if I’m not convinced by your armchair analysis here.

                              So, you’re telling me that, in a whole decade, you’ve never had to make changes to an API because a data structure turned out to be wrong? Given that I could pretty easily find an article that showed people having exactly that problem, forgive me if I conclude that either you’re not remembering correctly, or you’re an extraordinary programmer.

                              1. 1

                                The implementation detail is exposed to the client in exactly the same way. In both cases you have an interface that abstracts the type of collection used from the client. The API owner can change the implementation in EXACTLY the same way. If my API returned a list and then I change it to return a vector, the client does not need to make any changes because both of them support the seq interface.

                                The discussion is about OO vs FP because my whole point is that in OO you end up interacting with object graphs in your API, while in FP you end up interacting with plain data. I’m simply telling you that your argument is incorrect in the context of Clojure because collections conform to interfaces. I’m starting to get the feeling that you don’t understand how interfaces work.

                                So, you’re telling me that, in a whole decade, you’ve never had to make changes to an API because a data structure turned out to be wrong?

                                Correct, I’ve never had to make a change to an API because I changed the implementation of the backing data structure that conformed to the same interface.

                                And please do link me an article of this happening in Clojure, since you claim you can easily find one. Perhaps what you failed to consider is that the article you found deals with problems in a specific language, as opposed to a general FP problem that you extrapolated from it.

                                It frankly amazes me that somebody who openly admits to knowing nothing about the subject can have such strong opinions on it based on an article they find.

                                1. 1

                                  The implementation detail is exposed to the client in exactly the same way. In both cases you have an interface that abstracts the type of collection used from the client. The API owner can change the implementation in EXACTLY the same way. If my API returned a list and then I change it to return a vector, the client does not need to make any changes because both of them support the seq interface.

                                  As I said, only because Clojure happens to have a seq interface. This reply tells me that you didn’t really get the point I made, and I don’t know how to explain it to you at this point. It seems you just don’t want to see it.

                                  Correct, I’ve never had to make a change to an API because I changed the implementation of the backing data structure that conformed to the same interface.

                                  Then your API is not producing/consuming raw data, it’s producing/consuming interfaces. It seems you’re contradicting your own position.

                                  And please do link me an article of this happening in Clojure since you claim you claim that can easily find that. Perhaps what you failed to consider is that the article you found deals with the problems in a specific language as opposed to a general FP problem that you extrapolated from it.

                                  So, I searched “Clojure information hiding”, and the first hit literally repeats my whole point. Maybe you understand this explanation then:

                                  One of the goals of information-hiding is to hide implementation details, so that programmers cannot write programs that depend on those details. (If they do so and those implementation details change, then those programs must also be changed, driving up software maintenance costs.) Clojure thus does not give us any good way to hide implementation details.

                                  https://cs.calvin.edu/courses/cs/214/adams/labs/08/clojure/

                                  1. 1

                                    Clojure having a seq interface clearly disproves your point. I’m not sure what else there is to say here.

                                    Then your API is not producing/consuming raw data, it’s producing/consuming interfaces. It seems you’re contradicting your own position.

                                    If you think that then you didn’t understand my position. My original point was that data is immutable, transparent, and it doesn’t have behaviors associated with it. Having interfaces doesn’t invalidate any of that. Your whole argument is just a straw man.

                                    So, I searched “Clojure information hiding”, and the first hit literally repeats my whole point. Maybe you understand this explanation then:

                                    There is no information hiding in Clojure because data is part of the API. That’s my whole point of the difference between OO and FP. The claim that this is at odds with hiding implementation details is incorrect however, and I’ve already explained repeatedly why it’s incorrect.

                                    Seeing how this discussion just keeps going in circles I’m going to leave it here. You can feel free to continue believing what it is that you want to believe, I’ve explained all I can.

                                    Have a good day.

                                    1. 1

                                      My original point was that data is immutable, transparent, and it doesn’t have behaviors associated with it. Having interfaces doesn’t invalidate any of that.

                                      Okay, that changes things…

                                      I guess I didn’t understand that at all from reading your comments. There is a paragraph in your initial post where you talk about “classes being ad hoc DSLs” and a bunch of other stuff that applies equally to interfaces. You know, in C++ you make interfaces as pure virtual classes. Then, in the next paragraph you make a contrast between that situation and just data. So, I thought you were talking about concrete data structures not hidden behind any interface.

                                      Also, in your first counterargument, you’re countering with this graph of objects that’s hard to change. However, if that was a graph of interfaces, that problem would still be there (as you pointed out in another comment). So, this reinforced the understanding in me (and also others, it seems) that you were talking about concrete “interface-less” data structures.

                                      Anyway, if what you’re arguing for includes the ability to return only interfaces from APIs, then I agree the problem that I brought up doesn’t really apply.

                                      1. 1

                                        The problem with graphs of objects is primarily with objects being opaque and having behaviors. This is completely tangential to the discussion around interfaces.

                                        When you have a data driven API backed by immutable data, then what you see is what you get. You can see the data returned by the API, and it doesn’t have any behaviors associated with it. All the interfaces achieve here is abstracting over the concrete implementation, so you don’t have to worry about the specific backing data structure affecting the API semantics.

                                        1. 1

                                          Alright, that’s great! I actually share the view that state can turn into a nasty thing. It’s just that you seemed to be arguing against the idea of data abstraction. This kind of triggered me because my experience tells me the code I work with professionally would never survive without it.

                                          I guess it’s words like transparent and opaque that cause some confusion for me. They are very generic words so they can be misinterpreted easily. For example, an interface is also opaque in the sense that you can’t see the implementation details.

                                          1. 2

                                            Glad we’re on the same page. Unfortunately, this kind of thing happens quite often. We all use the same words, but we attach subtly different meanings to them in our heads based on our existing knowledge and experience. So, it’s really easy to end up talking past each other and not even realize it. :)

        3. 2

          I basically agree with everything you said about what makes a program good or bad, but I disagree with your conclusion: that functional programming leads to good programs and OOP leads to bad programs (in some general sense; let’s not nit-pick or talk about absolutes).

          In almost all languages, you are perfectly allowed to mutate the inputs to functions. This includes basically all (quasi-popular) functional languages not named Haskell or Clojure. You are also allowed to cause side effects in your functions in all functional languages not named Haskell. This means that I can write input-mutating, side-effecting functional code in OCaml, Lisp, Scheme, Rust, etc. Some languages discourage it more than others.

          My point is that I agree that making 100 opaque state machines to interact with is a bad idea for a program. But that ad-hoc, crappy, DSL is also perfectly easy to write in a functional way.

          I have little doubt that a very strict OOP language that does not allow input arg mutation and side-effects in methods is possible to create and would probably work just as well as good functional languages. The only difference would be a coupling of the data to the functions. That is the only actual difference between FP and OOP in any strict sense, IMO.

          1. 5

            Both major ML dialects (Standard ML and OCaml) keep a typed distinction between immutable and mutable data. I find this to be good enough to tell which operations can mutate data that I care about at any given moment. Moreover, modules allow you to easily implement ostensibly immutable abstract data types that internally use mutation. (The operating word here is “easily”. It is possible in Haskell too, but it is a lot more painful.)

            I would not call Rust a “functional language”, but, for similar reasons, its ability to track what data can be safely mutated at any given moment is good enough to get most of the advantages of functional programming. And then, some of the advantages of non-functional programming.

      2. 3

        Hopefully on-topic: My experience in functional programming has led to heavy use of composition.

        One thing that’s always frustrated me about Python is that instance methods do not return “self” by default, but instead return “None”. I once hacked up a metaclass to make that happen, and suddenly Python felt much more functional! Smalltalk and some of the heavy OO languages do return “self” or “this” by default; I find that fits the Haskell part of my brain better.
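        Something like this class decorator gets you most of the way there (a sketch, not the metaclass version; Bag is a made-up example):

```python
import functools

# Wrap each public method so a bare `return` (i.e. None) yields self,
# which makes method chaining work.
def fluent(cls):
    for name, fn in list(vars(cls).items()):
        if callable(fn) and not name.startswith("__"):
            def wrap(f):
                @functools.wraps(f)
                def method(self, *args, **kwargs):
                    result = f(self, *args, **kwargs)
                    return self if result is None else result
                return method
            setattr(cls, name, wrap(fn))
    return cls

@fluent
class Bag:
    def __init__(self):
        self.items = []
    def add(self, x):
        self.items.append(x)   # implicitly returns None -> becomes self

b = Bag().add(1).add(2)        # chaining now works
print(b.items)  # [1, 2]
```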

        What’s the zen koan? Objects are a poor man’s closures, and closures are a poor man’s objects? In Haskell I use partial application to give me roughly what object instances give me. It’s neat, try it out!

        1. 4

          One thing that’s always frustrated me about Python is that instance methods do not return “self” by default, but instead return “None”.

          Yes! It makes it much harder to do simple (list|dict|generator) comprehensions when mutations return None.

          In Haskell I use partial application to give me roughly what object instances give me. It’s neat, try it out!

          I (ab)use functools.partial for this in python. Very helpful when you have a defined interface and you want to stick extra parameters in as well.
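          For example (made-up names), baking a config parameter into a callback that a fixed interface will only call with one argument:

```python
from functools import partial

# The dispatch interface only passes the event...
def dispatch(callback):
    return callback("started")

# ...but our handler also wants config.
def handle(logfile, event):
    return f"{logfile}: {event}"

cb = partial(handle, "app.log")  # extra parameter stuck in up front
print(dispatch(cb))  # app.log: started
```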

      3. 3

        Even principled object oriented code hides race conditions. If you connect two systems in an OO language, the combination may produce an incorrect result even though each system on its own would run just fine.

        Another problem is that partial evaluation for an OO language is not obvious. If you intend to write abstract code the way people do with FP, it introduces structure that may need to be contracted away.

        1. 2

          You don’t get a guarantee from the language, but you can absolutely structure your OO code so composition is thread-safe, and unsafe combinations are obvious.

          When I say dependency inversion of I/O and state, I mean that they are all instantiated in main(). And all other code “modules” are also instantiated in main (as classes), and receive state and I/O as parameters.

          If you pass the same state to two different modules, then you have to be careful that they are not run concurrently.

          If they don’t accept the same state as parameters, then they are safe to run concurrently.

          There are zero mutable globals. That is how the Oil interpreter is written.
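          A condensed Python sketch of that structure (Counter and Reporter are invented names, not Oil code): all state and I/O are created in main() and passed down, so modules that don’t share state are trivially safe to run concurrently.

```python
import io

class Counter:                       # state, instantiated only in main()
    def __init__(self):
        self.n = 0

class Reporter:                      # a "module": receives state and I/O
    def __init__(self, state, out):
        self.state = state
        self.out = out
    def bump(self):
        self.state.n += 1
        self.out.write(f"count={self.state.n}\n")

def main(out):
    state = Counter()
    reporter = Reporter(state, out)  # all wiring happens here; no globals
    reporter.bump()
    reporter.bump()
    return state.n

buf = io.StringIO()
print(main(buf))  # 2
```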

          It helps as I mentioned to have some classes that are like functions, and some classes that are like data. Data and functions are both usefully and easily expressed as classes. (In Oil I use ASDL to make data-like classes, for example)


          tl;dr You can make state and I/O parameters in an OO language, and then you get a lot of the reasoning benefits of functional programs, along with some other flexibility (like using a mutable style inside referentially transparent functions, mentioned in my comments and in the article)

        2. 1

          Could you expand on your first point? What kind of systems and connection between them do you have in mind?

          1. 3

            An example, a simple one:

            class A {
              int x;
              void mutate_x(int v) { x = v; }
              void sendto(B receiver) {
                int y = x + 10;
                while (x < y) { receiver.receive(x); x += 1; }
              }
            }
            

            Depending on whether a receiver here has separate access to A and gets to call mutate_x while sendto is running, this code is either fine, or it isn’t.

            1. 1

              That makes sense. Thanks for elaborating!

      4. -1

        Can you please paste your replies here, so I don’t have to make another click?

    2. 8

      Isn’t there a difference between functional code and side-effect-free code? I feel like, by trying to set up all of the definitions just right, this article actually misses the point somewhat. I am not even sure which language the author is thinking of; Scheme doesn’t have any of the three mentioned properties of immutability, referential transparency, or static type systems, and neither Python nor Haskell qualifies. Scouring the author’s websites, I found some fragments of Java; neither Java nor Clojure has all three properties. Ironically, Java comes closest, since Java is statically typed in a useful practical way which has implications for soundness.

      These sorts of attempts to define “functional programming” or “functional code” always fall flat because they are trying to reverse-engineer a particular reverence for some specific language, usually an ML or a Lisp, onto some sort of universal principles for high-quality code. The idea is that, surely, nobody can write bad code in such a great language. Of course, though, bad code is possible in every language. Indeed, almost all programs are bad, for almost any definition of badness which follows Sturgeon’s Law.

      There is an important idea lurking here, though. Readability is connected to the ability to audit code and determine what it cannot do. We might desire a sort of honesty in our code, where the code cannot easily hide effects but must declare them explicitly. Since one cannot have a decidable, sound, and complete type system for Turing-complete languages, one cannot actually put every interesting property into the type system. (This is yet another version of Rice’s theorem.) Putting these two ideas together, we might conclude that while types are helpful to readability, they cannot be the entire answer of how to determine which effects a particular segment of code might have.

      Edit: Inserted the single word “qualify” to the first paragraph. On rereading, it was unacceptably ambiguous before, and led to at least two comments in clarification.

      1. 7

        Just confirming what you said: Did you say that Haskell doesn’t have immutability, referential transparency, or a static type system?

        1. 3

          I will clarify the point, since it might not be obvious to folks who don’t know Haskell well. The original author claims that two of the three properties of immutability, referential transparency, and “typing” are required to experience the “good stuff” of functional programming. On that third property, the author hints that they are thinking of inferred static type systems equipped with some sort of proof of soundness and correctness.

          Haskell is referentially transparent, but has mutable values and an unsound type system. That is only one of three, and so Haskell is disqualified.

          Mutable values are provided in not just IO, but also in ST and STM. On one hand, I will readily admit that the Haskell Report does not mandate Data.IORef.IORef, and that only GHC has ST and STM; but on the other hand, mutable references are provided by GHC, JHC, and UHC alike, with UHC reusing some of GHC’s code. Even if one were restricted to the Report, one could use basic filesystem tools to create a mutable reference store using the filesystem’s innate mutability. In either case, we will get true in-place mutation of values.

          Similarly, Haskell is well-known to be unsound. The Report itself has a section describing how to do this. To demonstrate two of my favorite examples:

          GHCi, version 8.6.3: http://www.haskell.org/ghc/  :? for help
          Prelude> let safeCoerce = undefined :: a -> b
          Prelude> :t safeCoerce
          safeCoerce :: a -> b
          Prelude> data Void
          Prelude> let safeVoid = undefined :: Void
          Prelude> :t safeVoid
          safeVoid :: Void
          

          Even if undefined were not in the Report, we can still build a witness:

          Prelude> let saferCoerce x = saferCoerce x
          Prelude> :t saferCoerce
          saferCoerce :: t1 -> t2
          

          I believe that this interpretation of the author’s point is in line with your cousin comment about type signatures describing the behavior of functions.

          1. 4

            I don’t really like Haskell, but it is unfair to compare the ability to write a non-terminating function with the ability to reinterpret an existing object as if it had a completely different type. A general-purpose programming language is not a logic, and the ability to express general recursion is not a downside.

          2. 3

            A “mutable value” would mean that a referenced value would change. That’s not the case for a value in IO. While names can be shadowed, if some other part of the code has a reference to the previous name, that value does not change.

            1. 1

              Consider the following snippet:

              GHCi, version 8.6.3: http://www.haskell.org/ghc/  :? for help
              Prelude> :m + Data.IORef
              Prelude Data.IORef> do { r <- newIORef "test"; t1 <- readIORef r; writeIORef r "another string"; t2 <- readIORef r; return (t1, t2) }
              ("test","another string")
              

              The fragment readIORef r produces two different values within this scope. Either this fragment is not referentially transparent, or r is genuinely mutable. My interpretation is that the fragment is referentially transparent, and that r refers to a single mutable storage location; the same readIORef action applied to the same r results in the same IO action on the same location, but the stored value can be mutated.

              1. 1

                The value has been replaced with another. It is not quite the same thing as mutating the value itself.

          3. 2

            From your link:

            When evaluated, errors cause immediate program termination and cannot be caught by the user.

            That means that soundness is preserved–a program can’t continue running if its runtime types are different from its compile-time types.

            1. 1

              If we have to run the program in order to discover the property, then we run afoul of Rice’s theorem. There will be cases when GHC does not print out <loop> when it enters an infinite loop.

              1. 1

                Rice’s theorem is basically a fancier way of saying ‘Halting problem’, right?

                In any case, it still doesn’t apply. You don’t need to run a program which contains undefined to guarantee that it won’t violate soundness. It’s a static guarantee.

      2. 5

        Thank you for bringing up this point. Unfortunately, “functional programming” is almost always conflated, today, with lack of side-effects, immutability, and/or strong, static typing. None of those are intrinsic to FP. Scheme, as you mentioned, is functional, and has none of those. In fact, the ONLY language seeing any actual use today that has all three (enforced) is Haskell. Not even Ocaml does anything to prevent side-effects.

        And you absolutely can write Haskell-ish OOP in e.g., Scala, where your object methods return ReaderT-style types. It has nothing at all to do with functional vs. OOP. As long as you do inversion of control and return “monads” or closures from class methods, you can do all three of: immutable data, lack of side-effects, and strong types in an OOP language. It’s kind of ugly, but I can do that in Kotlin, Swift, Rust, probably even C++.

        1. 2

          Scheme, as you mentioned, is functional, and has none of those.

          Why is Scheme functional? It’s clearly not made of functions:

          Lisp Is Not Functional

          A functional language is a programming language made up of functions

          What defun and lambda forms actually create are procedures or, more accurately still, funcallable instances

          I would say Haskell and Clojure are functional, or at least closer to it, but Scheme isn’t. This isn’t a small distinction…

          1. 2

            That’s a good point and I actually do agree completely. The issue, I think, is that most programmers today will have a hard time telling you the difference between a procedure and a function when it comes to programming. And it’s totally fair: almost every mainstream programming language calls them both “function”.

            So, Scheme is “functional” in that it’s made up of things-that-almost-everyone-calls-functions. But you’re right. Most languages are made of functions and procedures, and some also have objects.

            But with that definition, I don’t think Clojure counts as functional either. It’s been a couple of years, but am I not allowed to write a “function” in Clojure that takes data as input and inside the function spawns an HTTP client and orders a pizza, while returning nothing?

            It would appear that only Haskell is actually a functional language if we use the more proper definition of “function”.

            1. 1

              But with that definition, I don’t think Clojure counts as functional either. It’s been a couple of years, but am I not allowed to write a “function” in Clojure that takes data as input and inside the function spawns an HTTP client and orders a pizza, while returning nothing?

              Hey, the type for main in Haskell is usually IO (), an I/O action whose result is the uninteresting unit value; using the unit type there isn’t mandatory, but the IO monad is. Useful programs alter the state of the world, and so do things which can’t be represented in the type system or reasoned about using types. Haskell isn’t Metamath, after all. It’s general-purpose.

              The advantage of Haskell isn’t that it’s all functions. It’s that functions are possible, and the language knows when you have written a function, and can take advantage of that knowledge. Functions are possible in Scheme and Python and C, but compilers for those languages fundamentally don’t know the difference between a function and a procedure, or a subroutine, if you’re old enough. (Optimizers for those languages might, but dancing with optimizers is harder to reason about.)
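              A sketch of that distinction as the types express it: the signature alone tells the compiler which definition is a function it may freely inline, share, or reorder, and which is a procedure whose execution must be sequenced.

              ```haskell
              -- A function: same input, same output, no effects.
              double :: Int -> Int
              double x = 2 * x

              -- A procedure: its type advertises that it performs I/O.
              greet :: String -> IO ()
              greet name = putStrLn ("hello, " ++ name)

              main :: IO ()
              main = do
                print (double 21)  -- 42
                greet "world"
              ```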

          2. 1

            That article is about Common Lisp, not Scheme. Scheme was explicitly intended to be a computational representation of lambda calculus since day 1. It’s not purely functional, yes, but still functional.

            1. 2

              If anything that underscores the point, because lambda calculus doesn’t have side effects, while Scheme does. The argument applies to Scheme just as much as Common Lisp AFAICT.

              Scheme doesn’t do anything to control side effects in the way mentioned in the original article. So actually certain styles of code in OO languages are more functional than Scheme code, because they allow you to express the presence of state and I/O in type signatures, like you would in Haskell.

              That’s probably the most concise statement of the point I’ve been making in the thread …

              1. 2

                I take it we’re going by the definition of ‘purely functional programming’ then. In that case, I don’t understand why Clojure, a similarly impure language, gets a pass. Side-effects are plentiful in Clojure.

                1. 2

                  Well I said “at least closer to it”… I would have thought Haskell is very close to pure but it appears there is some argument about that too elsewhere in the thread.

                  But I think those are details that distract from the main point. The main point isn’t about a specific language. It’s more about how to reason about code, regardless of language. And my response was that you can reap those same benefits of reasoning in code written in “conventional” OO languages as well as in functional languages.

                  1. 1

                    That’s fair. It’s not that I disagree with the approach (I’m a big fan of referential transparency!) but I feel like this is further muddying the (already increasingly divergent) terminology surrounding ‘functional programming’. Hence why I was especially confused by the OO remarks. It doesn’t help that the article itself also begs the question of static typing.

      3. 2

        Isn’t there a difference between functional code and side-effect-free code?

        It depends on who you ask. :)

        You may be interested in the famous Van Roy’s organization of programming paradigms: https://www.info.ucl.ac.be/~pvr/VanRoyChapter.pdf. Original graphical summary: https://continuousdevelopment.files.wordpress.com/2010/02/paradigms.jpg, revised summary: https://upload.wikimedia.org/wikipedia/commons/f/f7/Programming_paradigms.svg.

        (reposted from https://lobste.rs/s/aw4fem/unreasonable_effectiveness#c_ir0mnq)

      4. 1

        I like your point about the amount of information in type signatures.

        I agree that the type can’t contain everything interesting to know about the function.

        I do think you can choose to put important information in the type. In Haskell it’s normal to produce a more limited effect system, maybe one for database effects only, and another for network effects only, and then connect those at the very top level.

        So, you can put more in the type signature if you wish, and it can be directly useful to prevent mixing effects.
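        A hedged sketch of that pattern (the class and method names here, MonadDatabase, MonadNetwork, and queryUser, are hypothetical, not from a real library): a handler’s constraints name exactly the effects it may use, and interpretations are wired up only at the top level.

        ```haskell
        -- Hypothetical capability classes: a computation's constraints say
        -- which effects it may use, and nothing else.
        class Monad m => MonadDatabase m where
          queryUser :: Int -> m String

        class Monad m => MonadNetwork m where
          httpGet :: String -> m String

        -- This handler can touch the database but cannot do network I/O:
        -- its type signature simply doesn't grant that capability.
        showUser :: MonadDatabase m => Int -> m String
        showUser uid = do
          name <- queryUser uid
          return ("user: " ++ name)

        -- Only at the top level do we interpret the capability, here with IO.
        instance MonadDatabase IO where
          queryUser uid = return ("user#" ++ show uid)

        main :: IO ()
        main = showUser 42 >>= putStrLn
        ```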

    3. 4

      Code that relies too much on indirect procedure calls (virtual methods, first-class functions) can be just as opaque as (if not more than) code that heavily relies on mutation. Functional programming fans are often guilty of playing awkward games with the control flow for the sake of the abstractions they know and love. I’m looking at you, lenses and transducers!

      As sources of complexity, mutation is “less bad” than indirect procedure calls. At least in some languages, it is easy to use mutation locally in order to reap global efficiency benefits. With excessively higher-order functions, the tradeoff is the other way around: the control flow has to jump all over your program (including its dependencies), just so that you can write shorter and cuter code for this one small loop.

    4. 3

      Very well written article. Focused, clear, and memorable.

    5. 2

      I have been meaning to write about how (middle or high) school mathematics relates to programming for a while.

      If one thinks in terms of (middle or high) school set theory and functional analysis, most of the problems I face in my day-to-day job would be avoided.

      When one learns about functions in the 7th grade (that’s when I learned about them, I guess), one learns that all the outputs (the image of the function) must depend on the inputs to (the domain of) the function. And when exposed to function composition and other functional (mathematically speaking) operations, it makes even more sense.

      The article explains it nicely, but I think this intuition can be developed more simply with a reminder of our 7th-grade math classes. In my point of view, as a side effect, it also teaches the power of types and (custom) data structures (as in Rob Pike’s rule 5 and programming with data) because of the set-theoretical talk (domain, co-domain, image).