1. 18
  1. 42

    This is a weird, disjoint, buzzword-slinging article, and I’m not convinced that the author had a coherent sense of who specifically he was criticizing when he took swipes at “San Francisco” or “Wall Street” or “FP drama” over programming-language design. Seriously, this article throws around a lot of buzzwords and uses them, or criticizes them, in ways that don’t make a whole lot of sense to me.

    I think the gist of this is an argument that the basic units of computation that human programmers work with ought to be the function and the monad, but instead, for legacy/vendor lock-in/institutional inertia reasons, they are the program and the operating system. The “monad” here is an abstraction over anything that takes data out of one function and transforms it to be passed into another function - including, possibly, things like network infrastructure for functions that run on different machines in different parts of the world. And that’s actually an intriguing idea - I’ve definitely seen the criticism, on lobsters as well as elsewhere, that the UNIX-based notion of an operating system that runs individual programs that share data via serialization/deserialization is outdated, poorly suited to the modern hardware world, and in need of a conceptual update.
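
    To make that concrete (this is just my own toy Haskell sketch of how I read the idea, not anything from the article): Kleisli composition chains functions whose results travel through some context, and that context is exactly the part you could swap out for serialization plus a network hop.

    ```haskell
    -- Toy sketch of "the monad as the glue between functions".
    -- Kleisli composition (>=>) chains functions whose results travel
    -- through some context m (Maybe here, but m could just as well
    -- wrap serialization and a network hop).
    import Control.Monad ((>=>))

    parse :: String -> Maybe Int
    parse s = case reads s of
      [(n, "")] -> Just n
      _         -> Nothing

    validate :: Int -> Maybe Int
    validate n = if n >= 0 then Just n else Nothing

    -- The pipeline never says how data moves between the steps;
    -- that part is the monad's job.
    pipeline :: String -> Maybe Int
    pipeline = parse >=> validate

    main :: IO ()
    main = print (pipeline "42")  -- Just 42
    ```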

    That screenshot of what looks like a web browser window with “Multix tioga::runtime” is intriguing, especially if what it’s doing is somehow allowing code in Rust, JavaScript and Scala to freely call each other’s methods. I can’t tell from the article or some cursory googling how to get that and play around with it - anyone know?

    1. 27

      The biggest difference between FP (functional programming e.g. function-oriented) and OO (object-oriented) is that Silicon Valley jumped into OO with almost no mathematical foundation… and got clobbered.

      Unlike NYC, the west coast is historically not a fan of mathematics for good reason — vendors are not about to see their beloved software packages reduced to Kleisli shell scripts, nor let their profitable DevOps cloud platforms collapse into Kan agents.

      Oh my. There are a lot of good reasons why FP did not really get off the ground, and it is not a big conspiracy:

      • The underlying architecture of machines is imperative.
      • The underlying architecture of machines is impure.
      • There is a large body of imperative programs that need to be maintained.
      • Education programs have been focused on imperative languages.
      • For many people it is more difficult to write pure functions.
      • For many people it is difficult to work with laziness.
      • Category theory is too abstract for a lot of people.
      • Languages that mix OO and FP can reach C++-like complexity.
      • And yes, functional programming is sometimes discouraged by management because it is hard to find programmers.

      We had a reading group at work, where we read some Haskell books. Every week people dropped out, until just two people were left. Most people didn’t care, just wanted to get work done, or found FP intellectual air guitar. Some were probably convinced that in theory FP was better, but if you have to ship that research paper by next week’s conference deadline and Python + numpy gets the job done, why fight with data streams, doing your linear algebra in state monads, etc.? Half a year or so later, we did a Go reading group; most people pulled through and some are still writing Go today. Why? Go is a gentle transition from Python, and the benefits of learning Go were tangible from the first week.

      FP has a lot of benefits. The problem is that FP (outside more practical traditions such as OCaml, F#, etc.) is often advocated with a ‘holier-than-thou’ attitude (side effects are evil, X just uses an inferior monad abstraction) and that FP languages are simply too different for many. Most people are busy enough solving their daily problems in languages that they are familiar with. To entice them to use something else, the benefits need to be large and crossing over should be easy.

      1. 13

        intellectual air guitar

        Brilliant expression.

        1. 4

          Oh my. There are a lot of good reasons why FP did not really get off the ground, and it is not a big conspiracy:

          I think there’s one more that’s pretty subtle: time is imperative. By that I mean that time naturally flows into next-states: every input and output to your system defines a new state in the behaviour. You can easily emulate that in FP, of course, but a lot of what we see as “change” maps more intuitively to this notion. Unfortunately most imperative languages don’t seem to capitalize on this, but it’s still implicit in them.
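
          To be concrete, here’s roughly how I’d emulate it (a toy Haskell sketch, the names are mine): the “behaviour” is just the running fold of inputs into successive states.

          ```haskell
          -- Toy sketch: a "behaviour" as the sequence of states produced
          -- by folding inputs into a transition function. Each input
          -- yields a new state, which is the "time is imperative"
          -- intuition expressed purely.
          type State = Int
          type Input = Int

          step :: State -> Input -> State
          step s i = s + i

          behaviour :: State -> [Input] -> [State]
          behaviour s0 inputs = scanl step s0 inputs

          main :: IO ()
          main = print (behaviour 0 [1, 2, 3])  -- [0,1,3,6]
          ```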

          Then again I’ve been neck-deep in temporal logic for a while now so maybe my brain is just warped by that.

          1. 3

            every input and output to your system defines a new state in the behaviour

            Exactly… like… a… function??

            1. 2

              I had a snarky retort lined up, but I figure it’s better to just be up front: I think at this point it’s clear that I don’t like you, and I’m pretty sure you don’t like me either. So I don’t think that either of us would actually enjoy a conversation on this.

              I think that seeing the world through the lens of temporal logic provides insights you can’t easily capture with just calling it a function. If you’re interested in TL, I’d recommend this source (requires a Safari subscription, which I think you have).

              1. 4

                I think at this point it’s clear that I don’t like you, and I’m pretty sure you don’t like me either

                I think you’ve said silly things but that doesn’t mean I don’t like you. My family say sillier things all the time. I like them!!

                temporal logic provides insights you can’t easily capture with just calling it a function

                I think Heraclitus provided enough insight 2500 years ago: “You could not step twice into the same river.”

            2. 1

              This feels like a stretch to me. One major property of time is that it seems to be continuous. Statements in an imperative language (and their corresponding effects) are discrete, though they are ordered in time. You can overlay an imperative sequence on a timeline, but that doesn’t make time itself an imperative sequence.

              1. 1

                I mostly think about time from a temporal logic perspective these days. While time is continuous when, say, doing calculus or mathematical analysis, when it comes to representing and discussing time as a mathematical abstraction in and of itself, discrete logics seem to be much more widely used and useful than continuous logics are. That’s why I’ve started thinking about time itself as discrete.

            3. 1

              You’re not wrong, but it’s also true that rewarding endeavors are often difficult. I’m skeptical that FP can ever be made both hugely advantageous and easy.

            4. 6

              This article is either insane or brilliant. I can’t tell which, but it’s very interesting.

              1. 1

                A good description. Maybe it is both brilliant and crazy at the same time.

                I’m not into functional programming but I still feel I might have learned something from it.

              2. 5

                “Function calls are the hardware implementation of “lambda calculus”, which dates back to the 1930s.”

                It’s distressing that so many people believe this kind of stuff is mathematically literate or historically accurate. http://www.laputan.org/pub/papers/wheeler.pdf

                1. 2

                  Short of me having to go and read this paper, can you explain why that short summary is inaccurate?

                  1. 2

                    The paper is a 1952 description of “subroutines”, introduced on pure engineering grounds by people who understood what mathematical functions are, but who didn’t have any interest in lambda calculus. The concept of “functions” in mathematics predates the lambda calculus by centuries. You’d learn a lot more about function calls from a computer architecture class than from the lambda calculus.

                    https://aiplaybook.a16z.com/reference-material/mccarthy-1960.pdf

                  2. 1

                    A lot of people believe that hardware could be infinitely better but we build for C. They understand the software, understand why it’s bad, and don’t understand the hardware; thus they can explain why the hardware is artificially restricted.

                  3. 3

                    I’m not entirely sure who this article is written for or what it’s trying to achieve. It heavily uses FP jargon but then goes in and insults the very people who are most likely to understand it.

                    Language interop is a noble goal to strive for, but I’m not sure whether a new operating system is how you would fix that. Different languages have very different calling conventions - C and Python could not be more different, and I think that even C++ and Rust use CPU registers in different ways whenever you call a function.

                    1. 9

                      I’m not entirely sure who this article is written for or what it’s trying to achieve. It heavily uses FP jargon but then goes in and insults the very people who are most likely to understand it.

                      I felt the same way scanning it for a time, but by the time I got to the end it had won me over. So maybe it’s written for me?

                      I think a lot of people in the FP crowd tend to say very simple things in a very complex way – but I admit even though I think these things are simple, I don’t know how to explain them in a less complex way that preserves the accuracy of the FP explanation!

                      Obviously analogies can be made into “simple” explanations, but there’s always some subtle gap, and if you nitpick it too hard you get shit on.

                      I keep in the back of my mind that FP has that going for it, and as we struggle to understand how best to tell computers what we want them to do, FP offers a much better jumping-off point than starting from register models or NAND gates (or the universal Turing machine). But can we do better?

                      I think so, and since I don’t see many research efforts in this space, this type of article provided a certain amount of reassurance on that front.

                      I’m not sure whether a new operating system is how you would fix that. Different languages have very different calling conventions

                      Sure, generally this kind of glue is called FFI, and it’s either implemented with some kind of trampoline code assembled at runtime, or with IPC. Most of what you’re thinking about is the former, but SQL and databases typically fall into the latter (and yet aren’t required to! see sqlite).
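
                      For the first flavour, GHC’s C FFI is about the smallest example of that glue I can think of (a sketch; it assumes a system where libm is available, which GHC links by default):

                      ```haskell
                      -- Minimal sketch of in-process FFI glue: bind a C symbol
                      -- directly, with the compiler emitting the calling-convention
                      -- adaptation for the ccall.
                      {-# LANGUAGE ForeignFunctionInterface #-}
                      import Foreign.C.Types (CDouble)

                      foreign import ccall unsafe "math.h cos"
                        c_cos :: CDouble -> CDouble

                      main :: IO ()
                      main = print (c_cos 0)  -- 1.0
                      ```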

                      However, in Multics and VMS the typical strategy for a module wasn’t to have a main() function, but to have several main()-like functions exposed. It’s a little like having every function marked int f(int argc, char *argv[], char **envp) and simply being able to call any of them, except that none of these main() functions parse command-line switches (or roll an IPC, except for interop with other systems), because your data is also tagged (or boxed) whenever you cross one of these boundaries.
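
                      Not Multics, obviously, but GHC’s foreign export gives a rough feel for the shape I mean (a sketch; the module and function names are made up): several callable entry points from one module rather than a single main(), with the data boxed into C-friendly types at the boundary.

                      ```haskell
                      -- Sketch: a module exposing several directly callable entry
                      -- points instead of one main(); arguments and results cross
                      -- the boundary as plain C-representable (boxed) values.
                      {-# LANGUAGE ForeignFunctionInterface #-}
                      module Entries where

                      import Foreign.C.Types (CInt)

                      increment :: CInt -> CInt
                      increment = (+ 1)

                      double :: CInt -> CInt
                      double = (* 2)

                      foreign export ccall increment :: CInt -> CInt
                      foreign export ccall double    :: CInt -> CInt
                      ```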

                      For languages with concrete data types (C, q, maybe Java?) this is easy, but algebraic types throw a massive wrench into it, because now the algebra needs to have equivalence across the boundary, and it doesn’t: consider a json document that you’ve “parsed” with Haskell - there’s no way to hand over that (parsed) document type to OCaml without the same boxing required to speak to C. It’s bonkers! Languages with introspection, like JavaScript and Python and Erlang, feel like they get you further because the middleware can poke around at the types (does it have a .then method? it’s a promise!), but this too is a lie: how can you pass that closure over the network, and what happens if, once it’s there, it tries to edit the scope in the original process? (q avoids this by simply not having closures, but it’s a bit unsatisfying…)
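
                      Here’s the Haskell side of that json example, assuming the aeson package (a toy sketch to make the point, nothing more): the parsed Value is a rich structure in this runtime’s heap, and the only way to hand it to another language is to flatten it back into bytes.

                      ```haskell
                      -- Sketch of the boxing problem: aeson's parsed Value lives in
                      -- this runtime's heap, so crossing a language boundary means
                      -- flattening it back to bytes and re-parsing on the other side.
                      {-# LANGUAGE OverloadedStrings #-}
                      import Data.Aeson (Value, decode, encode)
                      import qualified Data.ByteString.Lazy.Char8 as BL

                      main :: IO ()
                      main = do
                        let raw    = "{\"greeting\":\"hello\"}" :: BL.ByteString
                            parsed = decode raw :: Maybe Value  -- rich, typed, in-heap
                        case parsed of
                          Just v  -> BL.putStrLn (encode v)  -- back to bytes to cross any boundary
                          Nothing -> putStrLn "parse failed"
                      ```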

                      Dot-Net and PowerShell look like a good move in this direction, but “everything needs to be dot-net”, and as great as F# is, talking to the rest of the dot-net world brings back the boxes…

                      … maybe a new operating system is needed?

                      This isn’t so strange: Dot-net is basically its own operating system; Erlang’s VM is basically its own operating system. With a broad-enough definition of “operating system”, this line of thinking definitely makes sense, so maybe for the traditional definition of an “operating system” it can make sense too.

                      I’m not yet convinced of this, but I’m definitely going to keep my eyes open.

                      1. 3

                        As someone who understands FP jargon, I’m confident the author does not.

                      2. 2

                        What is Functional Programming all about anyway? The initial attraction is that writing functions is easy…

                        I’m not an FP expert, but that statement seems a little off. A few similarly suspicious statements and I pretty much tuned out. The “Pulling Rabbits out of Hats” and “Topology” sections both seem confused at best and uninformed at worst. (“Rabbit” queue? Processor allocation? Hrm.)

                        The “Serverless vs Functional Programming” section is another good example.

                        The act of “unbundling” functions (lambdas) from their traditional containers is really what Serverless and the Functional Programming movements are trying to do. The main difference is that Serverless sells you on Kubernetes / DevOps / AWS while the FP community prefers to stay away from vendor buzz.

                        Huh.