1. 34
    1. 10

      A thing I think about a lot is that this Python:

      def f(arg, kwarg=1):

      Is very similar to this JavaScript:

      function f(arg, { kwarg = 1 } = {}) { }

      And weirdly, this shell:

      $ f --kwarg 1 arg

      And this URL: /f/arg?kwarg=1

      And this XML/HTML/JSX: <f kwarg="1">arg</f>
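      One way to make the rhyme concrete: a small Python sketch (the helper name is invented here) that parses the URL form back into the Python call. Note the kwarg arrives as a string, so the correspondence is about shape, not types:

```python
# Parse "/f/arg?kwarg=1" into the equivalent positional-plus-keyword
# call f("arg", kwarg="1"). call_from_url is a made-up helper for
# illustration, not a real library function.
from urllib.parse import urlparse, parse_qs

def f(arg, kwarg=1):
    return (arg, kwarg)

def call_from_url(url):
    parsed = urlparse(url)
    _, name, positional = parsed.path.split("/")  # "/f/arg" -> "f", "arg"
    assert name == "f"
    kwargs = {k: v[0] for k, v in parse_qs(parsed.query).items()}
    return f(positional, **kwargs)

print(call_from_url("/f/arg?kwarg=1"))  # ('arg', '1')
```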

      1. 4

        PLs optimize for human editing and comprehension while remaining parseable by machines (most PLs use some sort of grammar that requires a decent amount of machinery!). OTOH, XML-ish markup’s homogeneous structure optimizes for machine comprehension while remaining editable and understandable by humans.

        S-expressions are extremely interesting because they do very well on machine comprehension while not requiring too much of users.

        1. 2

          I think shell is interesting because it was designed to optimize time physically typing on the keyboard, but it’s still decently readable. Given a choice between seq 10 | xargs -I _ printf '#%s.\n' _ and print('\n'.join(f"#{i+1}." for i in range(10))), they’re both kind of awkward, but the shell version is probably somewhat less so.

          1. 3

            I guess I’ve come to see the typing-optimization focus as a balancing act, if not a footgun?

            It’s fine for the CLI itself, I guess, but one of the things that can make shell suck from a maintenance perspective is the amount of memory/experience/reference-work necessary to read and understand a script that leans on opaque, hard-to-search sequences of characters: lots of short flags, combined short flags, command-specific languages like awk/sed/jq/find, complex shell redirections and parameter substitutions, etc. Better velocity today at the expense of extra time whenever you have to understand history/logs/scrollback/scripts later.

            But when I read this, I also feel like it rhymes with something that I find intriguing about shell: if I close my eyes to the language’s warts, it feels fairly close to being a nice toolkit for building humane DSLs. I braindumped a little about what I mean in a gist: some sort of hierarchical shell-esque language?

          2. 1

            I think both of those are pretty confusing. And they don’t even do the same thing (the python one doesn’t end with a newline).

            ‘\n’.join(x) always seems weird to me, and then it’s not very obvious what range, seq or xargs do.

            Julia is quite a lot clearer, imo:

            for i in 1:10 println("#$i.") end

            Everything is pretty guessable. I think the main confusions would be about the string interpolation (and even that would be obvious if the example didn’t happen to include a # immediately before).

            Joining the string before printing is not nearly as clear, but still better than the xargs or python examples, imo.

            print(join("#$i.\n" for i in 1:10))

    2. 4

      An interesting property of syntax similar to lisp or XML is that because most of the nodes in the syntax are wrapped in a container tag you can more easily expand the syntax to accommodate for extra attributes and metadata (e.g. you can have extra attributes on XML tags, or child elements within an element to encapsulate metadata related to that specific part of the program).

      The super lean and clean syntax that often lacks wrapper blocks reads nicely and looks pleasant, but the downside is that it leaves no room for associating any additional information or metadata with that specific part.

      This comes down to somewhat of a fundamental trade-off in syntax design. A verbose and noisy syntax is more difficult to read and looks intimidating but it can capture information more easily which can lead to better programs (i.e. more information and metadata for the compiler to work with and warn you about problems, etc…).

      A lean and clean syntax looks delightful and reads easily, but it becomes somewhat hostile to capturing or representing additional information, because there’s no way in the syntax to include any kind of child element or extra key-value attributes.

      You can see people try to break out of this limitation by inventing things like special tags in code comments so documentation can be auto-generated from the code. They have to resort to hacks such as leaving comments above a particular function with special “@return” and “@param” keywords.

      Whereas with an XML-like syntax you could more naturally capture that information with dedicated tags within the code. Even though it doesn’t look particularly nice to read.
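      For what it’s worth, Python’s function annotations are one mainstream attempt at exactly this kind of extension point: per-parameter and per-return metadata lives in the syntax itself, where tooling can introspect it instead of parsing “@param”-style comment tags. A minimal sketch:

```python
# Annotations attach metadata to each parameter and to the return value
# directly in the syntax; tooling reads it via introspection rather than
# scraping specially formatted comments.
import inspect

def area(width: float, height: float) -> float:
    """Rectangle area."""
    return width * height

sig = inspect.signature(area)
for name, param in sig.parameters.items():
    print(name, param.annotation)   # width <class 'float'> / height <class 'float'>
print("returns", sig.return_annotation)
```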

      1. 3

        Perhaps the middle ground is to systematically reserve extension points in the syntax. For example, in build2 every entity (target, prerequisite, variable, value, etc) can be preceded with attributes enclosed in [ ]. Compare:

        [string, visibility=project] var = [null]

          <variable type="string" visibility="project">var</variable>
          <value null="true"/>
        1. 6

          Yes that also seems great.

          I think a huge portion of problems in programming languages eventually boil down to this limit of expressing extra information in the syntax.

          This results in a limited number of attributes getting “first class support” in the syntax and then everything else gets neglected.

          For example when it comes to name bindings or variable declarations most languages end up with something like “const” for constants, or “mut” to describe a mutable binding, and/or other keywords such as “private”, “public”, “protected”.

          Then you are somewhat locked out of extending that syntax or expressing anything more elaborate than those basic ideas.

          If the syntax is “metadata-friendly” it opens the door to expressing lots of extra useful information that can be used by the compiler or other tooling such as for automatic documentation generation.

          Just to name a few examples let me mention some of my wishlist items.

          • I’d like a way to tag the classes/functions in the code and then be able to search for them using those tags. Very useful in a large codebase.

          • Comments and annotations as a first-class feature in the syntax

          • Pre-conditions or post-conditions before/after blocks/functions to act as guards/assertions

          • Ability to explicitly link a test to the thing that it is supposed to be testing, and then being able to query that list. For example the ability to view “all functions tagged with security that have no linked tests associated to them”.

          Lots of possibilities!

          1. 3

            Maybe join me on matrix? Datalisp.is is work towards this end, but it’s boring to work alone.

            1. 2

              Thanks I’ll check it out

        2. 1

          PowerShell does this too! It’s really nice.

    3. 3

      Traditionally the way to denote symbols isn’t :, it’s ’ (a single quote to the left).

      After a while, that became tedious and we came up with shortcuts. For example, (set 'foo 23) can be shorthanded as (setq foo 23) and (fset 'foo (lambda () 23)) can be shorthanded as (defun foo () 23). And '(foo bar baz) denotes an entire list of quoted symbols.

      (Of course Scheme then came along with fewer namespaces and its own set of abstractions.)

      In other words, I disagree with the colon-prefix idea.

      1. 3

        Which tradition is this? In lisp, ' doesn’t denote symbols; it quotes. a is the symbol whose name is "A"; 'a is syntactic sugar for the list whose second element is the aforementioned symbol, and whose first element is the symbol named "QUOTE"; the latter can therefore also be written (quote a). Notably, 'a evaluates to a.

        In common lisp, moreover, a : prefix is used as sugar for symbols in the KEYWORD package. Hence, :a is sugar for keyword:a.

        All this aside, it seems strange to ‘disagree’ with a syntactic decision only because it does not square with tradition.

        In lesser languages, the character ' is often used to quote strings rather than special forms; something else is therefore needed to denote symbols. The tradition of using : originates I think in ruby (I don’t know squat about ruby; CMIIW), but it is rooted in tradition, as : was used to denote symbols in lisp.

        1. 3

          Which tradition is this? In lisp, ’ doesn’t denote symbols; it quotes. a is the symbol whose name is “A”; ’a is syntactic sugar for the list whose second element is the aforementioned symbol, and whose first element is the symbol named “QUOTE”; the latter can therefore also be written (quote a). Notably, ’a evaluates to a.

          I know how it works.

          The context was “non-evaluated data”, not “interned strings”.

          Obviously language is insufficient which is why I went on to also include examples such as set and fset.

          In common lisp, moreover, a : prefix is used as sugar for symbols in the KEYWORD package. Hence, :a is sugar for keyword:a.

          I know that.

          All this aside, it seems strange to ‘disagree’ with a syntactic decision only because it does not square with tradition.

            That’s not the reason. The two reasons I tried to convey were that

          • we do have a way to denote non-evaluated data but we’ve still found reason to use sugar like defun, define, let, lambda where you do need to rely on context to separate names from data. It’s also less confusing to many people—don’t shoot the messenger on that, I’ve just seen many people be less confused about (setq foo 23) than about (set 'foo 23).
          • : isn’t what’s been used, : means something else in sexps (keywords).

            I have another reason too but I didn’t put it in my post. I’ll mention it here. I really hate how language designers litter grawlixes all over the syntax. It breaks with these glyphs’ historical meaning, it’s difficult to remember for those with more aural memory, they’re annoying to type, and they’re difficult to see for some categories of vision impairment. I wish designers would stick to parens and alphanumerics.

      2. 2

        I dunno, I dislike that you have to know for every Lisp function whether it unquotes or not. Using colon as the universal sign for quoted makes it much more clear what’s going on.

        1. 3

          It’s just something that becomes unworkable in practice as you’re requoting and unquoting data constantly.

    4. 3

      In those kinds of posts, every time APL and APL-likes are shown, it’s for terseness, not really for their syntax rules. The syntax of APL/J/K/BQN and Q also goes further than mere terseness, because every function is a one-character symbol.

      Mandatory reading: https://www.jsoftware.com/papers/tot.htm

      • The limited number of arguments (0, 1, or 2) per function/operator comes from the goal of being usable as notation first, coupled with an absolute right-to-left precedence order. At the end of the day, any interaction is a form of (x F y).
      • This allows building the concept of trains (hooks and forks), but also makes the language ultra-regular (as much as Lisp, imho, in the basic forms), which also enables the tacit programming aspect of it (for APL and J at least).
      • Maybe I am wrong here, but I wish I could see more array-programming-language-like syntax coming to other kinds of PL concepts. For example, what would Tcl look like if cast in the mold of APL/J syntax?
    5. 3

      Interesting that the author dislikes the Manatee example to get prime numbers - it’s amazingly clear, certainly clearer than the equivalent code in mainstream languages like Java, Python, Ruby, or JS. The only unclear bit was for each d in 3 to n - 1 by 2: minus seems to have higher precedence than division, which is surprising.

      The appeal to authority with the dictionary entry wasn’t really helpful either. SQL is a clear example of where an English-like language (although much less so than Manatee, based on the example) has been basically impossible to de-throne.

      1. 3

        I agree citing New Hacker’s Dictionary is worse than useless. It focuses on writability (by unskilled people and hackers), but as you pointed out, the main advantage of natural language syntax is readability.

        Inform 7 is a great example of advantages of natural language syntax.

    6. 2

      An important nit on the second image of S-expressions: (= p (cos 5)) should be (= :p (cos 5)). It is a hard concept for some that assignment into a symbol just doesn’t care about its previous value.

    7. 2

      A cleaner method I like is Rebol’s. For example…

      f: function [x y] [
          b: 3
          if x < a * (b + 7) [
              ;; ...
          ]
          a / 5
      ]

      Everything within a [block] is unevaluated. I could have created the above function like this…

      params: [x y]
      body: [
          b: 3
          ;; ... rest of code
      ]
      f: function params body

      or perhaps a clearer example is…

      if-true: [print "TRUE"]
      if-false: [print "FALSE"]
      either 1 = 1 if-true if-false      ;; => prints "TRUE"
      either 1 = 0 if-true if-false      ;; => prints "FALSE"

      see: fexpr - https://en.wikipedia.org/wiki/Fexpr
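      The nearest mainstream analogue to Rebol’s unevaluated blocks is passing thunks explicitly; a Python sketch of the either example above, where the “blocks” must be wrapped in lambdas to defer evaluation (either here is a hand-rolled function, not a builtin):

```python
# Rebol's either takes two unevaluated blocks; in Python we wrap the
# "blocks" in lambdas so evaluation is deferred until one is chosen.
def either(cond, if_true, if_false):
    return if_true() if cond else if_false()

if_true = lambda: "TRUE"
if_false = lambda: "FALSE"

print(either(1 == 1, if_true, if_false))  # TRUE
print(either(1 == 0, if_true, if_false))  # FALSE
```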

    8. 2

      The off-side rule is, I would argue, the most popular form of visual programming. I keep meaning to try Wisp or other indentation-first S-expression schemes, since I find indentation essential to keep the structure straight, even with a good paredit mode.