1. 26

  2. 6

    Sort of.

    I think the Rule of Least Expressiveness captures it better:

    Rule of least expressiveness:

    When programming a component, the right computation model for the component is the least expressive model that results in a natural program.

    From Concepts, Techniques, and Models of Computer Programming, a great book.

    1. 5

      Wonderful article! It articulates and supports the concept extremely well.

      It’s also very relevant to the programming language I’m designing. It supports the full range of substructural types, with full type inference of required capabilities (using a type class / traits system). This means, for example, that code that does not move values on the stack(s) will not require the “Move” trait, and can thus be trivially compiled to use stack allocation.

        And linear types will be used for I/O, so that all functions remain pure, but you don’t need the complexity of Monads like in Haskell. All such capabilities/effects will be inferred and tracked by the type system.

      1. 1

        And linear types will be used for I/O, so that all functions remain pure, but you don’t need the complexity of Monads like in Haskell.

        Are you referring to an API like this?

        getLine : s -o (s, String)
        putLine : String -> s -o s
        
        main : s -o (s, Int)
        main = ...
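
        A rough Python analogue of that state-threading style (hypothetical names; Python can’t enforce the linear “use exactly once” rule, so the discipline here is purely by convention):

        # Each I/O function consumes a state token and returns a fresh one
        # alongside its result, mirroring the linear arrows above.
        class IOState:
            pass

        def get_line(s: IOState) -> tuple[IOState, str]:
            return IOState(), input()

        def put_line(text: str, s: IOState) -> IOState:
            print(text)
            return IOState()

        def main(s: IOState) -> tuple[IOState, int]:
            s, name = get_line(s)
            s = put_line("hello, " + name, s)
            return s, 0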
        
        1. 1

          It’s a (the first?) multi-stack concatenative language, so you don’t have to explicitly thread linear values through every function call. All functions are inherently linear functions from one state of the (implicit) multi-stack to the next state of the multi-stack.

          The surface syntax will likely be simpler, but the fully explicit type for a “print line” term will be something like:

          (forall s1, s2, w WriteStr . $: s1 * Str $stdout: s2 * w -> $: s1 * Str $stdout: s2 * w )
          

          Where the identifiers between $ and : are the stack identifiers (empty for the default stack), the variable w is qualified by the WriteStr trait, and the stack variables s1 and s2 can expand to any product type.

          The stack variables make the function (multi-)stack polymorphic. If you’re interested in learning more about stack polymorphism, I highly recommend this blog post from Jon Purdy: http://evincarofautumn.blogspot.com/2012/02/why-concatenative-programming-matters.html?m=1

          In order to clone, drop, or move a value, it must possess the Clone, Drop, or Move trait, respectively. In this way, we can support the full range of substructural types, making the type system quite expressive! This makes it possible to express things like a file handle that must be opened before being written to and an open file handle that must be closed before it can be dropped.
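
          Python has no substructural types, but a runtime sketch (a hypothetical API, not this language) conveys the discipline such a system would check at compile time, for a handle that lacks the Drop trait:

          import warnings

          # Runtime approximation of a handle without Drop: it must be
          # close()d explicitly. Python can only detect a violation when the
          # object is finalized; a substructural type system would reject
          # the program at compile time instead.
          class LinearFileHandle:
              def __init__(self, path: str):
                  self._file = open(path, "w")
                  self._closed = False

              def write(self, data: str) -> None:
                  if self._closed:
                      raise ValueError("write to closed handle")
                  self._file.write(data)

              def close(self) -> None:
                  self._file.close()
                  self._closed = True

              def __del__(self):
                  if not self._closed:
                      warnings.warn("handle dropped without close()")
                      self._file.close()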

          1. 1

            That’s a dense-looking type signature. Let me try to make sense of it:

            So ‘print line’ transitions two stacks - the default stack and a stack called stdout. ‘print line’ expects a Str on top of the default stack, and a WriteStr dictionary on the stdout stack (which I assume contains functions that can actually make a system call to print to stdout). ‘print line’ does not change the default stack or the stdout stack.

            Is that right?


            Purity for applicative languages is something like equivalence between programs f <x> <x> and let y = <x> in f y y for all f and <x> (where <x> is some source code, like 1 + 2 or print("hello")).

            What’s the equivalent in a concatenative, stack-based language? I’d guess an equivalence between programs f <x> <x> and f dup <x>. It doesn’t look like your ‘print line’ has this property. (I might be wrong about what to expect from a pure concatenative language, though.)
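
            For contrast, here is that test in Python, an impure applicative language where the equivalence visibly fails (f is an arbitrary placeholder):

            def f(a, b):
                return (a, b)

            f(print("hello"), print("hello"))  # prints "hello" twice

            y = print("hello")                 # prints "hello" once
            f(y, y)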

            The reason I bring this up is because given what you’ve told me, I don’t think that this is true of your language yet:

            And linear types will be used for I/O, so that all functions remain pure, but you don’t need the complexity of Monads like in Haskell.

            1. 1

              Is that right?

              Essentially, yes. (There’s a minor detail that the WriteStr dictionary is not actually on the stack. Rather, a type that implements the WriteStr trait is on the stack. In this case, it’s effectively the same thing, though.)

              Purity for applicative languages is something like equivalence between programs f <x> <x> and let y = <x> in f y y for all f and <x> (where <x> is some source code, like 1 + 2 or print("hello")).

              What’s the equivalent in a concatenative, stack-based language?

              Let’s make the example concrete. In Haskell you would have

              f (putStrLn "hello") (putStrLn "hello")
              

              is equivalent to

              let y = putStrLn "hello" in f y y
              

              In this linear concatenative language, you might correspondingly have

              ["hello" putStrLn] ["hello" putStrLn] f
              

              is equivalent to

              ["hello" putStrLn] clone f
              

              Note that terms between brackets are “quoted”. That is, they are pushed onto the stack as an unevaluated function, which can be later evaluated using apply.
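
              A toy single-stack Python model (illustrative only; it ignores the multi-stack and the type system) of pushing a program unevaluated and applying it later:

              stack = []

              def push(value):
                  stack.append(value)

              def quote(*program):
                  # [ ... ] pushes the program onto the stack unevaluated.
                  stack.append(program)

              def apply():
                  # apply pops a quoted program and runs its words in order.
                  for word in stack.pop():
                      word()

              def put_str_ln():
                  print(stack.pop())

              quote(lambda: push("hello"), put_str_ln)  # ["hello" putStrLn]
              apply()                                   # prints "hello"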

              I believe this fulfills the notion of purity you’re describing, unless I’m misunderstanding something.

              Edit: To be clear, cloning is only allowed here because "hello" and putStrLn both implement the Clone trait, and therefore so does ["hello" putStrLn]. If you tried to clone a function that closed over a non-Clone type, then it would not type check.

              Edit 2: And because it might not be clear where linearity comes in, the value on top of the $stdout stack that implements WriteStr is linear. So when you go to evaluate putStrLn, you must thread that value through the calls. Thanks to the multi-stack concatenative semantics, such value threading is completely implicit.

              This has an advantage over Haskell’s monad system in that there’s no need for monad composition and all the associated complexity (which I have a lot of trouble wrapping my head around).

              1. 2

                I believe this fulfills the notion of purity you’re describing, unless I’m misunderstanding something.

                That seems right.

                Thanks for the chat! You’ve given me a lot to think about.

        2. 1

          Neat! Is there a website?

          1. 1

            No, not yet. It will probably be a bit before it’s ready for that, but I do hope to get there as quickly as possible.

        3. 3

          All the examples were things I literally never had an issue with, so personally the article didn’t convince me to go for “less powerful” over “more powerful”.

          As long as it’s statically typed and the language is my friend, not my nemesis, I’ll pick that.

          1. 4

            A dynamically typed language is more powerful than a statically typed language. You can do things in them that the statically typed language won’t allow you to do. It seems like you have indeed made the choice to restrict the power of your languages in order to gain the benefits of the more restrictive type systems of a statically typed language.

            1. 4

              A dynamically typed language is more powerful than a statically typed language. You can do things in them that the statically typed language won’t allow you to do

              Indeed, like failing at runtime due to a type mismatch, such as a string getting in where you expected a number.

          2. 2

            I found the MANIFEST.in example to be a poor one because, frankly, the spec language itself is just bad. It struck me that it’s not a “power” problem so much as it’s a design problem. The language really has the feel that it was made for ease of implementation, hence the reliance on state that is an unreasonable burden on the user.

            1. 2

              Every increase in expressiveness brings an increased burden on all who care to understand the message.

              I beg to differ. For example, the word “history” is supposedly an “increased burden”, since you have to learn it. But it adds to your expressiveness, and it would be much more of a burden trying to have a conversation about history without it.

              I agree that giving the users ultimate freedom may lead to creating a bad code culture, and might even lead to limitations on the language as it tries to grow. But I would argue that these problems usually happen not because there is an abundance of power, but because the design doesn’t help users harness it properly.

              For example, in Python there is exec, which lets you do pretty much anything you might want. Python also has support for direct memory access. But these functions are almost never used, because 1) they are not part of the language’s core design, and so are uncomfortable to use, and 2) there’s almost always an easier way to do whatever it is that you want. I’m not saying that Python is perfect, just that it is possible to influence users by giving them all the power but putting some of it on a higher shelf, on purpose, so not everyone will try to reach.
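
              To make the exec and direct-memory points concrete (a minimal sketch):

              import ctypes

              # exec runs arbitrary source assembled at runtime: maximal
              # power, rarely reached for in everyday Python.
              namespace = {}
              exec("total = sum(range(10))", namespace)
              print(namespace["total"])  # 45

              # ctypes reads raw memory by address: also powerful, also rare.
              value = ctypes.c_int(42)
              print(ctypes.c_int.from_address(ctypes.addressof(value)).value)  # 42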

              The only thing worse than a language with too much power, is a language with not enough of it.

              1. 6

                Every increase in expressiveness brings an increased burden on all who care to understand the message.

                I beg to differ. For example, the word “history” is supposedly an “increased burden”, since you have to learn it. But it adds to your expressiveness, and it would be much more of a burden trying to have a conversation about history without it.

                In the context of Paul Phillips’ talk where the quote is from, a word like that would not be “an increase in expressiveness”. The kind of expressiveness he was talking about was “grammar” more than “vocab”, so to speak. In almost any language, human or programming, you can define helpful new “vocab” (libraries for programming) without usually changing the language itself, and without putting “geometric” burdens on the listener, as Paul described it.

                To use Python as an example, if I define a new class, or function, or new member of a class, and you are trying to “understand” my code, I have added a small burden on you, the human, and a small burden on any tools - a bit of extra memory will be required to store information about that addition.

                But suppose I add the descriptor protocol - https://docs.python.org/3/howto/descriptor.html . I’ve now added a whole new feature that you and all your tools have to consider every time we want to work out “what does attribute x on object y refer to?”. In addition to understanding object __dict__, and class __dict__, and __getattr__, and __getattribute__, and probably others, you now have to be aware of __get__ as well, which works quite differently, and you have to (potentially) think about it every single time you do attribute lookup. For example, a static analyzer and a Python implementation etc. will most likely need new code, not just a little bit more memory to run in.
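
                A minimal descriptor (my example, not the parent comment’s) shows why attribute lookup can no longer be answered from the __dict__s alone:

                class Doubled:
                    # The descriptor protocol: __get__ runs on every lookup
                    # that resolves to this class attribute.
                    def __set_name__(self, owner, name):
                        self.name = "_" + name

                    def __get__(self, obj, objtype=None):
                        return getattr(obj, self.name) * 2

                class Point:
                    x = Doubled()

                    def __init__(self, x):
                        self._x = x

                p = Point(3)
                print(p.x)         # 6: computed by Doubled.__get__
                print(p.__dict__)  # {'_x': 3}: there is no stored 'x' at all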

                Now, this may be worth it, and depending on the specifics it might not be too bad, but it is a burden.

                1. 0

                  You are talking about language complexity, which is different from power. For example, Lisp is one of the simplest languages there is, but it’s more powerful than most languages.

                  __get__ doesn’t actually give you that much more power. It’s just a way to tidy up syntactic issues that shouldn’t have been there in the first place. (In a perfect design, there’s no reason you’d need both __get__ and __getattr__.)

                  But regarding the broader point, I agree that all features come at a cost. It’s just that I much more often find myself wanting a missing feature than lamenting the abuse of an existing one.

                  I can also see the point that beginners need a simpler language, like Go, that forces them into good habits. But I’ve also seen Go code that would have been much simpler if a little more abstraction was allowed.

                  1. 2

                    The point is that removing power removes the need to ask “What if?”

                    If Python didn’t have __getattr__, you wouldn’t have to ask ‘what if someone is doing a network call behind this assignment?’ when debugging. If your language maps to primitive recursion, you don’t have to ask ‘what if this is an infinite loop’?
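
                    A contrived sketch of that “what if?” (hypothetical endpoint): nothing at the call site reveals that a plain attribute read may block on the network.

                    import urllib.request

                    class RemoteConfig:
                        # Any attribute not found normally triggers an HTTP
                        # request to a (hypothetical) config service.
                        def __getattr__(self, name):
                            url = f"https://example.com/config/{name}"
                            with urllib.request.urlopen(url) as response:
                                return response.read().decode()

                    cfg = RemoteConfig()
                    timeout = cfg.timeout  # looks like a field read; it is a network call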

                    Like static type systems, a restrictive language eliminates entire classes of errors by construction.

                    1. 1

                      Seems like you are still missing the point, though exhibiting exactly the way of thinking that is criticized in the talk. :-/

                      1. 0

                        Perfect, here’s your chance to educate me.

                        1. 1

                          Not with that attitude.

                2. 1

                  I think LISP is a counterexample to this argument? It’s maximally powerful/expressive, yet somehow also easy to manipulate. Probably because it’s functional (no side effects to worry about), and doesn’t have serious syntax (you just write the AST in S-expressions).

