1. 26
    1. 4

      Unix shell is pretty concatenative in two senses:

      I do find the left-to-right dataflow is more natural for interactively building up programs

      1. 3

        I do find the left-to-right dataflow is more natural for interactively building up programs

        Me too. In Haskell I’ve had a fashion for using >>> for left-to-right composition, and sometimes & (the value on the left is applied to the function on its right, so the rest reads like composition, but with a point on the left). But I often feel I’m swimming against the tide.
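
        For example, a quick sketch of the two styles (using & from Data.Function and >>> from Control.Category, both in base):

        import Control.Category ((>>>))  -- left-to-right composition
        import Data.Function ((&))       -- left-to-right application

        -- conventional right-to-left composition
        classic :: [Int] -> Int
        classic = sum . map (* 2) . filter even

        -- the same pipeline reading left to right
        forward :: [Int] -> Int
        forward = filter even >>> map (* 2) >>> sum

        -- (&) puts the point on the left; the rest then reads like composition
        example :: Int
        example = [1 .. 10] & filter even & map (* 2) & sum   -- 60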

        Recently and unrelatedly I started writing some algorithms “backwards” because my data structure is a tree, but with the root being a sink rather than the source. It reminded me a little of having to read composition pipelines backwards. I started to notice how many different times you have to flip between right/left and left/right even in the same program (do notation, infix applicative operators, type signatures…)

        1. 1

          I started to notice how many different times you have to flip between right/left and left/right even in the same program (do notation, infix applicative operators, type signatures…)

          I’m not really a Haskell programmer so I might be out of my depth, but I found some of the discussion about forward pipelining a bit funny. I saw people say that & was a bad idea and that people need to get used to the dot operator. (Some went as far as saying that the people who want to do pipelining need to learn math.) However, the bind operator is a bit like a pipeline operator, right? Data on the left, function on the right. Somehow that’s acceptable, but the & operator isn’t?

          Culture is culture, of course. And it is good to fit in with the community’s style. It just struck me as an odd justification.
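
          For instance, here is the parallel as a quick sketch (nothing beyond base):

          import Data.Function ((&))

          -- plain values flow left to right through (&)
          viaAmp :: Int
          viaAmp = 3 & (+ 1) & (* 2)                              -- 8

          -- monadic values flow the same way through (>>=)
          viaBind :: Maybe Int
          viaBind = Just 3 >>= (pure . (+ 1)) >>= (pure . (* 2))  -- Just 8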

          1. 3

            To be honest I almost never use >>= or >=> and prefer =<< or <=< in most cases, to get a more natural right-to-left pointfree data flow. I find this more natural in Haskell because the arguments you drop when eta-reducing would have been on the right; that’s where you would put them back to eta-expand, so that’s where the data naturally enters, and it makes switching easy when needed.

            Of course there is no right way, and if someone prefers >>> that seems fine; it’s just not what I would do.
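
            A quick sketch of what I mean (readInt and positive are just made-up helpers):

            import Control.Monad ((<=<), (>=>))

            readInt :: String -> Maybe Int
            readInt s = case reads s of
              [(n, "")] -> Just n
              _         -> Nothing

            positive :: Int -> Maybe Int
            positive n = if n > 0 then Just n else Nothing

            -- right-to-left Kleisli composition, mirroring (.): the data enters on
            -- the right, which is also where an eta-expanded argument would go
            parsePositive :: String -> Maybe Int
            parsePositive = positive <=< readInt

            -- the same pipeline left to right with (>=>), for comparison
            parsePositive' :: String -> Maybe Int
            parsePositive' = readInt >=> positive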

      2. 2

        I do find the left-to-right dataflow is more natural for interactively building up programs

        Me too. I use left-to-right pipeline and function composition operators when I do functional programming, the reverse of the $ and . operators in Haskell. It feels natural for interactive and live programming, since I type left to right, and I’m adding transformations to the end of the pipeline as I write the program. I guess it’s important for your language and libraries to use consistent conventions in order for this to feel natural. Almost all of the infix operators in my language are left associative, so infix operators naturally form pipelines with data flowing left to right.
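
        A rough sketch of what that looks like transplanted into Haskell (the operator names |> and |>> are made up for illustration; base already provides & and >>> for the same purpose):

        infixl 1 |>
        (|>) :: a -> (a -> b) -> b
        x |> f = f x                      -- flip ($): apply the left value to the right function

        infixl 9 |>>
        (|>>) :: (a -> b) -> (b -> c) -> a -> c
        f |>> g = g . f                   -- flip (.): compose left to right

        pipeline :: [Int] -> Int
        pipeline = filter even |>> map (* 2) |>> sum

        result :: Int
        result = [1 .. 10] |> pipeline    -- 60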

    2. 2

      Hm, I don’t see any good argument for why it matters?

      1. 1

        One reasonable line of argument is that if categorical presentations are interesting, then concatenative languages give their family of possible grammars. This isn’t just theoretically interesting; it has actually been used, and Compiling to Categories is a commonly cited recent paper.
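
        As a toy illustration of the flavour (a sketch of the general idea only, not the paper’s actual interface): a program becomes a composition of categorical combinators with no named variables, which is exactly the shape a concatenative grammar produces.

        {-# LANGUAGE GADTs #-}

        -- a tiny vocabulary of combinators over products
        data Cat a b where
          Id    :: Cat a a
          (:.:) :: Cat b c -> Cat a b -> Cat a c
          Exl   :: Cat (a, b) a
          Exr   :: Cat (a, b) b
          Fork  :: Cat a b -> Cat a c -> Cat a (b, c)
          Add   :: Num a => Cat (a, a) a

        eval :: Cat a b -> a -> b
        eval Id         = id
        eval (g :.: f)  = eval g . eval f
        eval Exl        = fst
        eval Exr        = snd
        eval (Fork f g) = \a -> (eval f a, eval g a)
        eval Add        = uncurry (+)

        -- \(x, y) -> x + y + y, written with no variables at all
        example :: Cat (Int, Int) Int
        example = Add :.: Fork (Add :.: Fork Exl Exr) Exr   -- eval example (1, 2) == 5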

    3. 2

      But if you tilt your head a little, you can see them as functions too: values take no arguments and return themselves.

      This seems a bit circular to me. The author says 2 :: ∀A. (A) → (A, int), but doesn’t actually define int, and defining things this way implies there’s some ‘actual’ 2 that the 2 ‘function’ is returning.

      1. 3

        Yeah, I think that’s misleading. I would say:

        literals take no arguments and return a value

        There is an actual ‘2’ on the stack, and it’s distinct from “a function that pushes the value 2 to the stack”. The notation 2 can mean different things, depending on context:

        • the value 2
        • a one-element stack containing the value 2
        • a function that takes a stack and pushes the value 2
        • a program that evaluates to a function that pushes…

        The function doesn’t push itself: it pushes a value.
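
        A small sketch of the distinction, modelling stacks as nested pairs (the names here are made up for illustration):

        -- the value 2
        two :: Int
        two = 2

        -- a one-element stack containing the value 2
        stackOfTwo :: (Int, ())
        stackOfTwo = (2, ())

        -- a function that takes a stack and pushes the value 2
        pushTwo :: s -> (Int, s)
        pushTwo s = (2, s)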

        1. 1

          This seems like a much better way to think about it.

    4. 2

      Excellent. I bailed out a little ways in, once I realized that we were talking about point-free programming.

      In my mind, point-free is where you want to end up. Until my code has both the appropriate strong types in the places it needs them and runs point-free, I’ve got more coding or cleanup that needs doing.

      I will come back and bookmark, maybe save this for my personal notes. It’s something we functional coder types must do a bad job of explaining, since there’s so much functional code that could read this way but doesn’t.

    5. 1

      I remember when this or some similar discussion came up years ago, with lots of claims that there were things only possible in concatenative/stack languages, and that languages like Haskell in particular could never do the same. The comments on r/Haskell had a well-typed implementation of the whole post in only a few dozen lines, and because of the types it was much more difficult to write incorrect code. IIRC it even had nearly identical syntax, with the addition of begin and end keywords to inject a () into the start of the process. Looks like one of the posts was https://www.reddit.com/r/haskell/comments/ptji8/concatenative_rowpolymorphic_programming_in/ which has this quote from #haskell:

      <roconnor> @quote stack-calculator
      <lambdabot> stack-calculator says: let start f = f (); push s a f = f (a,s); add (a,(b,s)) f = f (a+b,s); end (a,_) = a in start push 2 push 3 add end
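
      The same one-liner, lightly reformatted with type signatures added (the signatures are a reconstruction, not part of the original quote):

      start :: (() -> r) -> r
      start f = f ()

      push :: s -> a -> ((a, s) -> r) -> r
      push s a f = f (a, s)

      add :: Num a => (a, (a, s)) -> ((a, s) -> r) -> r
      add (a, (b, s)) f = f (a + b, s)

      end :: (a, s) -> a
      end (a, _) = a

      five :: Int
      five = start push 2 push 3 add end   -- 5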
      
    6. 1

      The diagrams remind me of Blockly: https://developers.google.com/blockly/
