1. 61
  1.  

  2. 16

    (Reposted from reddit.)

    This is not quite right.

    Parentheses describe a tree syntactically. Evaluation is something you can do to that tree, but it’s not the only thing.

    The linked post describes a mechanism for deriving a tree from a sequence of words—from the semantics of those words. But what’s interesting about s-expressions isn’t that they remind you of the arity or semantics of some symbol (so you don’t have to memorize it), but rather that they create a tree structure that exists wholly apart from any semantics.

    And this, I think, speaks volumes about the ideological differences between lisp and forth.
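
    To make that concrete, here is a minimal sketch in Haskell (the type and names are just my own illustration, nothing from the linked post): an s-expression is nothing but a tree of atoms, and you can build and traverse that tree without assigning any meaning to the symbols in it.

    -- A minimal s-expression type: a tree, nothing more.
    data SExpr = Atom String | List [SExpr]
      deriving Show

    -- "(f (g x) y)" as a tree. Nothing here says what f or g mean;
    -- the parentheses only pin down the shape.
    example :: SExpr
    example = List [Atom "f", List [Atom "g", Atom "x"], Atom "y"]

    -- One non-evaluation thing you can do with the tree: count its leaves.
    leaves :: SExpr -> Int
    leaves (Atom _)  = 1
    leaves (List xs) = sum (map leaves xs)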

    1. 11

      This article made me go “Heeeeey, that’s neat!”, and put a smile on my face. Neat!

      1. 10

        Why have I never seen… […] …a Lisp that can drop nested parentheses when the meaning is clear from context?

        Well, doesn’t Scheme already have “I-Expressions”?

        This SRFI describes a new syntax for Scheme, called I-expressions, with equal descriptive power as S-expressions. The syntax uses indentation to group expressions, and has no special cases for semantic constructs of the language. It can be used both for program and data input.

        Not the same thing as what he details, but it does show that parens aren’t necessary, for anyone curious.

        Oh, and as an alternative, there’s also “Sweet Expressions”, which are a funky hybrid.

        1. 2

          Indentation syntax is close but not quite the same thing. It has a lot of weird edge cases. I do think Sweet-Expressions are cool and underused, though. I’m definitely planning to include them in Tangled Scheme (unpublished WIP Scheme dialect), whenever I get around to finishing it.

        2. 3

          One of the best feelings in the world is when you realize what makes lisp “LISP” rather than just a programming language is code as data, data as code. Because then you’ll see it everywhere, and suddenly you realize everything is a lisp, and it’s like no one learned the lesson of the Turing machine.

          1. 3

            Concatenative languages aren’t LISPs. They’re based on function concatenation, not function application, which is slightly but importantly different from lambda calculus. There’s a good explanation here: http://evincarofautumn.blogspot.com/2012/02/why-concatenative-programming-matters.html

            “Everything is Turing complete” is not the same as “everything is a LISP”.
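
            To make the difference concrete, here is a rough Haskell sketch (my own illustration, not the formalism from that post): each word in a concatenative language denotes a function from stacks to stacks, and putting two words next to each other just composes those functions.

            type Stack = [Int]

            -- Each word is a function from stacks to stacks.
            push :: Int -> Stack -> Stack
            push n s = n : s

            add :: Stack -> Stack
            add (x : y : s) = (x + y) : s
            add _           = error "stack underflow"

            -- The program "1 2 +" is the composition of its words,
            -- not an application of "+" to the arguments 1 and 2.
            program :: Stack -> Stack
            program = add . push 2 . push 1   -- program [] == [3]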

            1. 2

              (…) what makes lisp “LISP” rather than just a programming language is code as data, data as code. (…) it’s like no one learned the lesson of the Turing machine.

              “Everything is Turing complete” is not the same as “everything is a LISP”.

              The reference was to a single tape, holding data and instructions - not to the idea of Turing completeness.

            2. 1

              If everything is a lisp, then lisp is just another programming language. You can’t have both uniqueness and ubiquity.

              1. 1

                This is the exact conflict that I enjoy experiencing :)

            3. 3

              …a stack language that allows parentheses as a sanity check?

              I think the WebAssembly text format has this (folded instructions): https://webassembly.github.io/spec/core/text/instructions.html#text-foldedinstr

              1. 2

                I don’t think left-to-right really matters in stack languages, because it’s just a series of operations happening in order? Reading 1 2 + makes more sense than + 1 2 when you’re thinking about the stack as a data structure.

                1. 2

                  Reverse Polish is the native syntax for stack languages because it reflects the order of evaluation. A left-to-right stack language would need a layer that reorders the operations, since in your example the 1 and 2 have to be evaluated (pushed) first, then the “+”.

                  (Unless you tried running the parser backwards over the input, which I guess is possible but weird. And it raises the question of how you’d deal with interactive multi-line input.)

                  Even an infix language that compiles to a stack machine ends up generating byte code in RPN order.
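
                  A tiny Haskell sketch of that, purely as an illustration: evaluating RPN is a single left-to-right pass over the tokens with a stack, with no reordering layer at all.

                  -- Operands are pushed; an operator pops its arguments and pushes the result.
                  evalRPN :: [String] -> [Int]
                  evalRPN = foldl step []
                    where
                      step (x : y : s) "+" = (y + x) : s
                      step (x : y : s) "*" = (y * x) : s
                      step s tok           = read tok : s   -- anything else is an operand

                  -- evalRPN (words "1 2 +") == [3]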

                  1. 1

                    (So-called “normal”) Polish notation is used in math and logic quite a bit; Wikipedia even has an article on evaluating it. For fixed-arity functions it’s basically shunting yard, really, nothing too complex.
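
                    For comparison, a small sketch of my own (not Wikipedia’s algorithm verbatim): with fixed-arity operators, Polish notation evaluates in a single right-to-left pass with a stack, the mirror image of the RPN sketch above.

                    -- Scan right to left: operands are pushed; an operator pops its two arguments.
                    evalPolish :: [String] -> [Int]
                    evalPolish = foldr step []
                      where
                        step "+" (x : y : s) = (x + y) : s
                        step "*" (x : y : s) = (x * y) : s
                        step tok s           = read tok : s   -- anything else is an operand

                    -- evalPolish (words "+ 1 2") == [3]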

                2. 1

                  “Syntactic sugar causes cancer of the semicolon.”

                  Or, in Python, the cancer of indentation, colons, and “pass”.

                  1. 1

                    Hmm… I wonder if you’ve had a look at Haskell? Function composition is very elegant in this language:

                    filter even . map (+1) $ [1..10]

                    It even has a function, flip, to swap the order of arguments…
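
                    For anyone who wants to try it, here is a small example of my own that compiles as written: composition with (.), and flip from the Prelude to swap argument order.

                    main :: IO ()
                    main = do
                      -- Compose right to left: keep the evens, then double them.
                      print (map (* 2) . filter even $ [1 .. 10])   -- [4,8,12,16,20]
                      -- flip swaps a function's first two arguments.
                      print (flip (-) 1 10)                         -- 10 - 1 == 9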