1. 13
  1.  

  2. 6

    Most languages with the standard complement of if/for etc. that are Turing incomplete do not gain any benefit from this restriction. One exception is in domains where you are proving properties or doing analysis, as two examples: […] Such languages tend to be a rarity in industry. In all the Turing incomplete programming languages I’ve experienced, recursion was later added, programs were simplified, and programming in the language became easier.

    This is the key point: Turing incompleteness is only valuable if you have a plan to use it, not if you’re just assuming “incomplete = better”.

    1. 2

      Hm very interesting… I would summarize this as:

      Turing incompleteness doesn’t have useful engineering properties

      (or at least it’s yet to be shown that it does)

      I “complained” about that here with respect to Dhall: https://lobste.rs/s/gcfdnn/why_dhall_advertises_absence_turing#c_ubbne4


      This is similar to my claim that context-free grammars don’t have useful engineering properties. LALR(1) grammars do, and you give up a few things in exchange for those properties, as you would expect.

      (And this is in response to people hanging on to the word “context-free”, sort of as a shibboleth like Turing incompleteness).

      Likewise, regular languages do have useful engineering properties (“free lookahead”, soundness and completeness, linear time, constant space). Perl-style “regexes” less so.

      http://www.oilshell.org/blog/2020/07/eggex-theory.html
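
      (A concrete example of mine, not from the linked post, to make the distinction vivid: backreferences are a Perl-style extension that no finite automaton can implement, because the language they match isn’t regular.)

      ```python
      import re

      # A Perl-style feature that is NOT a regular language: \1 must match
      # exactly the same text as group 1, so this pattern recognizes strings
      # of the form "w w" for an arbitrary word w. No DFA or NFA can do that.
      repeated_word = re.compile(r"^(\w+) \1$")

      print(bool(repeated_word.match("hello hello")))  # True
      print(bool(repeated_word.match("hello world")))  # False
      ```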


      edit: Although I should be careful to say that this is about code and computation. If you have data like HTML, then I very much believe in the principle of least power:

      https://www.w3.org/2001/tag/doc/leastPower-2006-01-23.html

      https://blog.codinghorror.com/the-principle-of-least-power/

      1. 1

        Can you elaborate on the Perl bit? I have never used it.

        1. 1

          Sure, the short answer is that regexes and regular languages are different things, but most people don’t think of them as different. And it matters when writing code.

          Scroll down to the table here.

          http://www.oilshell.org/share/05-31-pres.html

          I need to put this on the blog… been meaning to do that for a couple years :-/

          1. 1

            Oh, I was asking more about why you prefixed it with Perl. I thought maybe Perl added certain modifications? Or did it actually popularize the notion of using regexes in the current technical context in the first place?

            1. 1

              Yes, Perl is responsible for the enhancements which I would call “regex”.

              Historically, libc, awk, and grep/egrep used “regular languages”. Perl added constructs that make them “regexes”. The two look similar in terms of syntax, but regexes are executed with a backtracking algorithm that can take exponential time, rather than with automata.
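
              Here’s a sketch of that blow-up (my example, using Python’s `re` module, which inherited the Perl-style engine):

              ```python
              import re
              import time

              # A classic catastrophic pattern: each run of 'a's can be split
              # between the inner + and outer + in exponentially many ways, and
              # the trailing 'b' forces the engine to try them all and fail.
              pattern = re.compile(r"(a+)+$")

              for n in (16, 20, 24):
                  s = "a" * n + "b"
                  start = time.perf_counter()
                  pattern.match(s)
                  print(n, round(time.perf_counter() - start, 3))
              # Runtimes roughly double with each extra 'a'. An automaton-based
              # engine (grep, RE2) matches the same pattern in linear time.
              ```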

              This is a bad property for engineering; I elaborated on it in my blog post and its comments.

              Perl-style regexes made their way into Python, Java, and many other languages, including by way of the PCRE engine, etc.


              I would say Unix popularized “regular languages” with ed/sed/grep/awk, and Perl popularized the “regex” variant.

              Most people are confused by this difference, which is why I use two different names for them. See the table for more ways to think about the difference.

      2. 2

        I think this post is about social problems, and the subject isn’t really Turing (in)completeness.

        > Programs that take a million years to finish technically terminate, but probably can’t be run on an actual computer.

        This is something many people say, and it misses the point: the program might still be valuable even if it is too complex to run.

        The whole point of type theory is to disable Turing completeness in order to ensure you cannot prove contradictions.
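
        For instance (my sketch, in Lean 4, not from the post): if general recursion were allowed in proofs, a one-liner would count as a proof of False and the logic would be inconsistent. The termination checker rejects it, while provably terminating recursion goes through:

        ```lean
        -- Rejected: Lean cannot show this recursion terminates, and
        -- accepting it would prove False.
        --   def bad : False := bad

        -- Accepted: structural recursion on Nat, guaranteed to terminate.
        def factorial : Nat → Nat
          | 0     => 1
          | n + 1 => (n + 1) * factorial n
        ```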

        1. 1

          For most of the domains where I’ve seen Turing incompleteness raised, a runtime of seconds would be desirable, and Turing incompleteness doesn’t help with that at all.

          > what they gain in flexibility they lose in analysability.

          Indeed, it’s often about analyzing some configuration or optimizing a generated computational model. If that’s necessary, then your definition language doesn’t need to be Turing incomplete, just the result.

          A nice example of this is Applicative parser combinators in Haskell. Parsers written with them can be heavily optimized, but only if the program that defines them could have been written in a Turing incomplete language. There is no way to decide this in general, but a good heuristic is that the optimization finishes by the time you get impatient, which is probably the real thing you cared about when you started thinking about Turing incompleteness.
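
          A rough sketch of that idea (toy combinators I made up for illustration, in Python rather than Haskell): because an applicative-style parser is plain data with no embedded host-language control flow, you can analyze the whole description before ever running it, even though the language you built it in is Turing complete.

          ```python
          from dataclasses import dataclass

          # Toy applicative-style parser descriptions: each parser is a
          # finite tree of these nodes, with no embedded control flow.
          @dataclass(frozen=True)
          class Char:
              c: str

          @dataclass(frozen=True)
          class Seq:          # sequence: left then right
              left: object
              right: object

          @dataclass(frozen=True)
          class Alt:          # alternation: left or right
              left: object
              right: object

          def first_chars(p):
              """Static analysis: the set of characters a parse can start
              with. This always terminates because the description is a
              finite tree; the same question about arbitrary (monadic)
              parser code would be undecidable."""
              if isinstance(p, Char):
                  return {p.c}
              if isinstance(p, Seq):
                  return first_chars(p.left)
              if isinstance(p, Alt):
                  return first_chars(p.left) | first_chars(p.right)
              raise TypeError(p)

          # ("ab" | "ac") as a static description
          grammar = Alt(Seq(Char("a"), Char("b")), Seq(Char("a"), Char("c")))
          print(first_chars(grammar))  # {'a'}
          ```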