
  2. 12

    Perhaps I’m just slow, but this never seemed to answer the question posed in the title.

    It kinda reminded me of everything negative I’ve come to associate with Haskell posts: bombast, confusing code with no obvious practical application, and a purposeful aloofness from the realities of software engineering.

    Software doesn’t have to be so complex if we challenge our assumptions that got us into this mess

    Assumptions like “my program should be able to have side-effects outside itself, since that’s literally the only reason I would want to run it.” I’m all for challenging assumptions, but to do that you need to lay them out more clearly and then show how changing them gets you somewhere new and valuable, which I didn’t pick up on here.

    I believe explicitly reading input and writing output will eventually become low-level implementation details of higher-level languages, analogous to allocating stack registers or managing memory.

    Unfortunately, decades of experience have taught us that managing memory is entirely too important to be left to the runtime without forethought.

    1. 1

      This was my reaction too when reading this post.

      I also found the following quote interesting, from the post linked at the bottom of the submitted post:

      [Backus] told us how the sequential/imperative model enslaves us in weak thoughts, inhibiting manageable scaling

      [Quote from talk by Backus]

      As well, Backus showed one way to throw off the shackles of this mental and expressive slavery — how to liberate our notions of programming by leaving the von Neumann mental model behind.

      It’s unclear whether the author (Conal Elliott) is writing tongue-in-cheek here, or whether he is serious. To me, using the metaphor of slavery indicates an uncomfortable world view, where programmers not embracing functional programming are deemed as “weak”.

      1. 4

        Metaphor/hyperbole aside, Backus was mostly advocating against the “von Neumann mental model”, which I (perhaps mistakenly?) think of as meaning ‘first-order’ programming, i.e. programming with “values” (machine words, pointers, etc.) rather than ‘higher-order’ programming, where we program in terms of programs. He suggested an alternative, dubbed “tacit programming”, but wasn’t so much advocating for tacit programming as against “von Neumann programming”; something “better” may come along which lets us forget about tacit programming.

        Side note: from my understanding (although I have no source), the von Neumann architecture was proposed as a simple way to get something working, under the assumption that better approaches would be found once we had more experience with building and programming computing devices. It’s only historical and financial inertia that’s kept it around this long. In which case, Backus’s proposal of tacit programming is basically a software equivalent of von Neumann’s hardware proposal: here’s a suggestion, which can be replaced once we understand more.

        Hence I think it’s unfair to infer the sentiment ‘programmers not embracing functional programming are deemed as “weak”’; I think it’s fairer to infer ‘programmers embracing von-Neumann programming are deemed as “weak”’.

        Note that I’m not claiming whether or not that’s true; only that it seems (to me) to be less of a straw man. The latter version at least has one redeeming feature over the former: it’s not saying “functional programming is a silver bullet”, instead it’s saying “von Neumann programming is restrictive and problematic”. To make this criticism more constructive, an alternative is proposed (not necessarily as “better” or “best”; but certainly as “a concrete example of something different”). In Backus’s case it was “tacit programming”; in Elliott’s it’s (a restricted form of) “functional programming”.

        Note that “functional programming” isn’t actually a solution to Backus’s problem, since we can easily write functional programs which are ‘first-order’: I’d say most functional programs are written this way! Yet functional programming does allow higher-order programming (in the same way that C allows OOP, but in no way guarantees it).

        The most common form of higher-order/tacit programming is “point-free” (also known as “pointless”!) programming.
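
        For example (plain Haskell, not taken from the article), here’s the same function written first in a first-order, “pointed” style, and then point-free, built purely by composing other functions:

        ```haskell
        -- Pointed (first-order) style: the input string is named explicitly.
        countLongWords :: Int -> String -> Int
        countLongWords n s = length (filter (\w -> length w > n) (words s))

        -- Point-free (tacit) style: the same function, expressed as a pipeline
        -- of `words`, `filter` and `length`, without ever naming the input.
        countLongWords' :: Int -> String -> Int
        countLongWords' n = length . filter ((> n) . length) . words
        ```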

        There are more radical alternatives, which fully embrace the “high-level” approach, like concatenative programming languages. Some of these are described as “functional” (e.g. Joy), but they’re actually very different from most functional languages; they’re based on composition of functions, rather than function application (there’s nothing to “apply” the functions to, since the language doesn’t deal with “first-order” values!). Concatenative languages also don’t map so straightforwardly to the usual mathematical models underlying other functional languages, like lambda calculus or SK combinatory logic. There are alternative models like “concatenative combinators” which more closely match their semantics.
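
        A toy sketch of that idea (my own illustration, not how Joy or any real concatenative language works internally): model each program as a function from stack to stack, and “concatenating” programs is just composing them.

        ```haskell
        -- A program is a function from stack to stack; juxtaposing programs
        -- is plain function composition. (Note Haskell's (.) reads right-to-left,
        -- whereas concatenative languages usually read left-to-right.)
        type Stack = [Int]

        push :: Int -> Stack -> Stack
        push n st = n : st

        dup :: Stack -> Stack
        dup (x:st) = x : x : st
        dup _      = error "dup: stack underflow"

        add :: Stack -> Stack
        add (x:y:st) = x + y : st
        add _        = error "add: stack underflow"

        -- The concatenative program "3 dup add" (push 3, duplicate it, add):
        double3 :: Stack -> Stack
        double3 = add . dup . push 3
        -- double3 [] == [6]
        ```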

        Even then, some concatenative languages aren’t “functional” at all. Forth is the most obvious example!

    2. 4

      Our programs don’t necessarily need input. Our programs don’t necessarily need to do any processing. But programs MUST do output, otherwise there’s no point to the program (even an infinite loop doing as much as possible except for output can still be useful if you use the waste heat from the CPU to heat the room, but then I would contend that there is still output).

      1. 3

        I fail to see, with what was presented, how importing data as a module obviates the need for program-level I/O. By my analysis, doing this just offloads onto the user the job of converting the data into something the program can work with, and only within the scope of a configuration language that is not Turing-complete.

        It’s a decent use case, but it’s a massive leap in logic to say you don’t need program-level I/O based on that.
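
        For anyone who hasn’t read the post: as I understand it, “importing data as a module” looks roughly like the following in Haskell terms (module and value names made up). Whether this genuinely avoids I/O, or merely shifts the conversion work onto the user as described above, is exactly the point of contention.

        ```haskell
        -- Config.hs: the "input" is shipped as a module rather than read at runtime.
        module Config (serverName, portNumber) where

        serverName :: String
        serverName = "example.org"

        portNumber :: Int
        portNumber = 8080
        ```

        ```haskell
        -- Main.hs: the program "imports its input" instead of performing I/O to get it.
        module Main where

        import Config (serverName, portNumber)

        main :: IO ()
        main = putStrLn (serverName ++ ":" ++ show portNumber)
        ```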

        1. 3

          I think these ideas actually map very closely to the Nix language. Nix is a pure functional programming language, with no I/O, where a program is an “expression”, which we can “evaluate”. Some expressions evaluate to, for example, an integer. Others evaluate to a boolean. The most interesting are those which evaluate to a “derivation”.

          A derivation is basically a data structure containing an “environment” (a map from strings to strings), a “builder” (a string containing a filesystem path), “arguments” (a list of strings) and a set of “dependencies” (other derivations). As far as the Nix language is concerned, these derivations are just values like any other.
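
          Roughly, in Haskell terms (my own paraphrase of the description above; the field names are invented, and this is not Nix’s actual internal representation):

          ```haskell
          import Data.Map (Map)

          -- Sketch of a derivation as described above.
          data Derivation = Derivation
            { environment  :: Map String String  -- env vars handed to the builder
            , builder      :: FilePath           -- path of the program that builds it
            , arguments    :: [String]           -- command-line arguments for the builder
            , dependencies :: [Derivation]       -- derivations that must be built first
            }
          ```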

          By itself this is pretty useless, but there is a tool (nix-build) which will take the result of evaluating a Nix program and, if it’s a derivation, will “build” that derivation. To build a derivation, we first build all of its dependencies (recursively), then we run the “builder” as a program, with the “arguments” passed as commandline arguments, and the “environment” as its env vars. We also add an “out” variable to the environment, with a path as its value, and the result of the build is whatever the builder wrote to that path.
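
          Continuing the sketch above (hypothetical code, reusing the Derivation type; the output path is a throw-away placeholder, whereas real Nix derives it from a hash of the derivation):

          ```haskell
          import qualified Data.Map as Map
          import System.Process (createProcess, proc, env, waitForProcess)

          -- Build a derivation: build its dependencies first, then run the builder
          -- with the given arguments and environment, plus an extra "out" variable
          -- pointing at where the result should be written.
          build :: Derivation -> IO FilePath
          build drv = do
            mapM_ build (dependencies drv)
            let outPath = "/tmp/fake-store/output"   -- placeholder, not a real store path
                vars    = ("out", outPath) : Map.toList (environment drv)
            (_, _, _, ph) <- createProcess
                ((proc (builder drv) (arguments drv)) { env = Just vars })
            _ <- waitForProcess ph
            return outPath   -- the result is whatever the builder wrote to $out
          ```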

          So why is this relevant? Firstly, the Nix language is useful for describing what to do without actually doing anything. This is actually the same idea as Haskell’s I/O system: Haskell is a pure language, with no I/O, which calculates a single value called main. That value has type IO () which basically means “a program which can perform arbitrary effects”. A ‘separate tool’ (the Haskell runtime system) takes the resulting value of main and runs it as a program. Hence Haskell is like a pure meta-language, used to construct impure programs.
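
          Concretely (standard Haskell, nothing exotic):

          ```haskell
          -- `main` is just a value of type IO (). The code below only *constructs*
          -- that value; it's the runtime system that actually executes it.
          main :: IO ()
          main = putStrLn "hello" >> putStrLn "world"

          -- IO values can be manipulated like any other data before being run,
          -- e.g. assembled from a list of smaller programs:
          greetEveryone :: [String] -> IO ()
          greetEveryone names = mapM_ (\n -> putStrLn ("hi " ++ n)) names
          ```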

          Elliott actually makes this analogy by comparing IO () values to C programs, and the Haskell language to the C preprocessor!

          So what practical effect does this way of thinking have, if any? I can’t speak for Dhall (the author’s language), but in the case of Nix there is a clear distinction between “eval time” and “build time”. I think this is the key to figuring out how the author can (provocatively) claim to ‘perform all I/O at compile time’: the I/O happens during evaluation, which is basically like an interpreter (if you’re wondering why a compiler would implement an interpreter, consider that “inlining a function” is basically an elaborate way to “call a function” at compile time; and “constant folding” is an elaborate way to “perform calculations” at compile time; the logical conclusion to doing this is a “supercompiler”, which can run arbitrary code at compile time).
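
          One mundane but concrete Haskell example of I/O happening at compile/eval time rather than run time (the file name is made up): a Template Haskell splice can run arbitrary IO while the module is being compiled, and the program that comes out the other side contains only the result.

          ```haskell
          {-# LANGUAGE TemplateHaskell #-}
          module Embedded where

          import Language.Haskell.TH (runIO, stringE)

          -- The splice runs during compilation: the compiler reads "config.txt"
          -- (a hypothetical file) and bakes its contents into the binary as an
          -- ordinary string literal. The compiled program does no I/O to get it.
          configContents :: String
          configContents = $(runIO (readFile "config.txt") >>= stringE)
          ```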

          So really, the author is saying we should embrace metaprogramming, to push as many failure cases as possible into a run-at-compile-time language, so that the resulting program (if compilation succeeds) is guaranteed to avoid those problems. This is similar in spirit to static typing (catching errors before running the program), although we’re actually shifting the emphasis: “the program” is actually rather trivial, since most of our code is part of the compile-time language (basically, really extensive macros).

          I have sympathy for this; although there’s a need to more precisely distinguish between the “resulting program” and “the output of the resulting program”. If we perform a bunch of elaborate compilation which results in the number 42, then that is “a program which performs no I/O”, but it’s also not a particularly interesting case. If the result of our compilation is a function, which e.g. counts the number of words in a given string, then that itself might perform no I/O, but it’s not actually useful until it’s invoked by something which does: either a ‘separate tool’ (the equivalent of nix-build or Haskell’s RTS) or a subsequent compilation which “imports” that function.
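
          To make that distinction concrete with a trivial sketch:

          ```haskell
          -- The interesting "result of compilation" here is a pure function,
          -- which by itself performs no I/O...
          wordCount :: String -> Int
          wordCount = length . words

          -- ...and it only becomes useful once something impure invokes it:
          -- a separate tool, a later compilation that imports it, or a thin
          -- I/O wrapper like this one.
          main :: IO ()
          main = interact (show . wordCount)
          ```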