1. 9

  2. 10

    You don’t want to require programmers to have a degree, so C++ is out.

    As a Haskell programmer, this made me laugh.

    1. 8

      Leaving questionable but ultimately superficial syntactic choices aside, it worries me that language designers still repeat the mistake of making all composite types reference types. This seems to come from conflating two distinct distinctions (with apologies to Conor McBride):

      • Values vs. references, a semantic concept.
      • Absence vs. presence of indirection, which in any high-level language is an implementation detail.

      The choice between values and references should be dictated by the needs of the problem domain. If the physical identity of an object in memory is irrelevant to what you’re trying to compute, chances are you want to use values, not references.

      On the other hand, the decision to use indirection should be dictated by efficiency considerations. If a large data structure needs to be frequently passed around between multiple places, it is more efficient to just pass around a pointer internally. Even if you don’t really care about the data structure’s physical identity.
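A rough sketch of this point in Python (not one of the languages under discussion, purely an illustration): with immutable data, whether two names share one underlying object or hold separate copies cannot be observed through ordinary computation, which is exactly what makes indirection an implementation detail.

```python
point = (3, 4)                 # an immutable composite value
shared = point                 # implementation shares one object (indirection)
copied = tuple(list(point))    # a distinct but structurally equal object

# No ordinary computation can tell these apart; only identity checks can.
assert shared == copied
assert shared is point
assert copied is not point
```

An implementation is therefore free to pass a pointer behind the scenes whenever the value is immutable, without changing the program's meaning.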

      1. 3

        This sounds like an interesting point but I feel skeptical. Two things:

• The value of making all composite types reference types is that the distinction between value and reference disappears. This is a simplifying act of language design. There is, sure, a performance cost (the “efficiency considerations” you mention), but efficient functional languages have proven that it can be fairly reasonable – although they are all moving toward giving more opt-in control of data representation for advanced use cases, in particular removing indirections.

• I don’t think that the two distinctions are independent. For example, if the values have mutable components, then the distinction between copying them and passing a reference/alias to them can be observed. All the semantics of mutation that I know of use indirection in some form or another. How would you do mutation without indirection?

        You say that value vs. references is a semantic distinction. But can you actually define the semantics of those notions?

(Interestingly, you made a comment on the importance, or not, of identity, but the languages I am most familiar with that take the all-indirection route, namely the ML languages (Haskell included in this context), on the contrary emphasize immutability, which makes identity irrelevant. Is there a conflation here?)
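The observability point above can be made concrete with a small Python sketch (again just an illustration, not ML): once mutation is involved, copying versus aliasing is no longer a hidden implementation choice.

```python
import copy

cell = [0]                 # a mutable composite
alias = cell               # bound by reference: the same object
clone = copy.copy(cell)    # copied: a distinct object

cell[0] = 42
assert alias[0] == 42      # the alias sees the mutation...
assert clone[0] == 0       # ...the copy does not: the distinction is observable
```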

        1. 2

          I’m not exactly sure we’re using the terms “value” and “reference” in the same sense. To avoid further confusion, and since I know you’re an important member of the OCaml community, in this reply I’ll use the terms “value” and “reference” as in ML.

• I agree with ML’s stance on this: Values are the primordial notion, and references are just one particular type of value. (Moreover, Standard ML’s treatment of references is cleaner than OCaml’s, since, in the former, references aren’t identified with particular mutable records, and mutable records themselves don’t even exist.) However, I tried to word my original post in terms of a Java/.NET-like distinction between value and reference types.

          • I can think of one scenario (possibly the only one) where references don’t require indirection: When a reference is created inside a function’s body and doesn’t escape from it, you can embed the mutable cell directly in the function’s activation record.

          As for how I would define “value” and “reference”:

          • A value is anything that a variable can be substituted with.

          • A reference is a label associated to a value during the current evaluation step. There is an infinite supply of labels that can be used to create new references. Labels are abstract, but label equality is decidable. (I’m basically restating TAPL p. 166.)
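That TAPL-style definition can be sketched as a tiny evaluator state: a store maps freshly allocated labels to values, and labels support nothing but decidable equality. (All names in this Python sketch are made up for illustration.)

```python
import itertools

class Label:
    """An abstract label: no structure, only decidable equality (identity)."""
    _fresh = itertools.count()          # infinite supply of fresh labels
    def __init__(self):
        self._id = next(Label._fresh)
    def __repr__(self):
        return f"<label {self._id}>"

store = {}          # the current store: labels -> values

def ref(value):     # allocation: pick a fresh label, associate it with value
    l = Label()
    store[l] = value
    return l

def deref(l):       # read the value currently associated with a label
    return store[l]

def assign(l, value):   # update the association
    store[l] = value

r1, r2 = ref(5), ref(5)
assert deref(r1) == deref(r2) == 5
assert r1 is not r2              # equal contents, distinct labels
assign(r1, 6)
assert deref(r1) == 6 and deref(r2) == 5
```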

          1. 2

I had the “mainstream” (e.g. PHP) usage of the term in mind, the one that opposes call-by-value and call-by-reference, but thanks for the clarification – it is good to avoid misunderstanding, and I am sure others will be interested in the parenthetical.

            I have the impression that your definition of reference indeed is rather close to what one could call an indirection; you could say “address” instead of “label”.

Is your example of a non-escaping reference really an example of “no indirection”, or a distinction between stack vs. heap allocation? Suppose I extend the model by allowing my local reference to be passed to functions that I know do not capture it (reasonable example below); the optimization still applies, but those functions presumably expect my local reference (or, in call-by-reference terms, “my variable/array/struct passed as an alias rather than a copy”) to look like any normal reference, so I would (in implementation terms) actually pass a pointer to the stack – so there is again an indirection.

Array.iteri (fun i v -> if ... then local.(i) <- ...) local
            
            1. 1

              Of course, you’re right that translating functions taking reference arguments to RAM machine code pretty much requires using indirection.

      2. 10

        Suppose you want to write a new program, something like a text editor. What language would you write it in? It has to be as fast as possible, so interpreted languages are out.

        I don’t understand this. Text editors run interactively. Interpreted languages have been sufficient for decades.

Emacs and elisp are the paradigm case. The most efficient code can benefit from being compiled (buffer operations, etc.), but the majority of the code can be interpreted from source or from an intermediate representation (to avoid processing the source more than once).

        1. 2

          Decided to put it to the test using an interpreted language not known for highest efficiency. The Padre app is an IDE and text editor with all kinds of features:

          http://padre.perlide.org/site2/features.html

          It’s written in Perl. The app took a few seconds to start up on the first go. Past that, it handled the files I opened and the edits well on a machine with a Celeron processor. Refutes the claim on interpreted languages & text editors.

          1. 4

            The app took a few seconds to start up on the first go. Refutes the claim on interpreted languages & text editors.

            Pick one.

            1. 5

              “Past that, it handled the files I opened and the edits well on a machine with a Celeron processor. Refutes the claim on interpreted languages & text editors.”

Pick both. The part you cherry-picked happens with many native programs the first time they’re loaded into memory. Even more so if it’s the first load after installation, as mine was. Interpreted programs with huge interpreters (e.g. Perl programs) are also often slower on initial load than a tiny, native editor. Leaving that aside, his claim that “interpreted languages aren’t suitable for text editors” contradicts the experimental result: a huge interpreter may have caused a slower load on first use, followed by acceptable speed when editing, opening files, etc. from there.

              Original claim is still refuted.

              1. 3

If we’re nitpicking, the original claim is simply that “it has to be as fast as possible, so interpreted languages are out.” That interpreted programs might require huge interpreters is irrelevant – if his language makes it possible for him to write a text editor faster than yours (and Vim, for example, doesn’t take a couple of seconds to start), then his claim is very much unrefuted.

                1. 2

                  I’ll accept that counter. Unless the interpreter is a hardware coprocessor like Vega3 for Java apps. ;)

          2. 1

            I don’t understand this. Text editors run interactively. Interpreted languages have been sufficient for decades.

            Umm.. Atom vs Sublime Text 3?

            1. 2

              As a counterpoint:

              I’ve used vim, textmate2, sublime text 2/3, and visual studio code.

              vim is, of course, the fastest (in a terminal - the windowed version can take a second to start). ST3 is the next fastest.

VS Code takes a few seconds to start – unsurprising, as it’s an Electron app (written in the exact same tech stack as Atom). It never slows down while I’m typing, even with many plugins installed. It’s usually more responsive than Sublime Text (ST3 runs extension code on the main thread; VS Code backgrounds it).

              1. 2

                I haven’t used either, could you explain your comment?

                1. 2

                  They’re both very popular text editors. Atom is written in Javascript & other web technologies. Sublime Text is a mix of C++ and Python. So, I don’t know if it counts given core functionality (aka performance-sensitive) might be C++. Another consideration is Atom has benefit of a JIT. That might not disqualify claims about interpreted languages, though, if we just consider that an implementation decision for interpreters. Such things are open to debate.

            2. 1

I don’t think emphasising the syntax of a language makes it easier to read. I think it does quite the opposite. For example, idiomatic Python doesn’t really read like code. It uses the language structures of Python as part of an expression of the meaning of the code. For instance, if you had:

              for account in database(outstanding_payments=True):
                  print(account.name)
              

              reads like “for every account in our database that has outstanding payments print the name of the account” which is perfectly readable. On the other hand:

FOR account IN database {
    IF account.hasOutstandingPayments() {
        IO.print(account.getName())
    }
}
              

I don’t see this example as being more readable or maintainable. I don’t know exactly why, but I think the expressiveness of named parameters and the way the keywords flow with the rest of the code are an important part of why the Python code looks nice.