1. 36
    1. 24

      There is no easy way for us to evaluate whether a new feature is worth increasing a language’s size for.

      While in general this is true, I’ve found a useful lens for approaching a certain type of addition. There are changes which “fill in gaps” without extending the “area” of complexity. For instance, in Fennel we had these three forms in the language:

      • for: counts numerically from a start to a finish number to loop thru side effects
      • each: uses an iterator to loop thru side effects
      • collect: uses an iterator like a list comprehension to return a table

      Imagine these laid out on a grid:

               | side-effects | comprehension
      numeric  |     for      |     ???
      iterator |     each     |   collect

      Looking at the problem this way, you can clearly see that there’s a missing feature: what if you want a comprehension that’s based on stepping numerically thru a range instead of using an iterator? (For unrelated reasons, we cannot fix this problem by adding a new iterator to the language; that’s a different story for another day.)

      So we added fcollect and even though it’s a new feature to the language, we did not expand the “surface area” of the language because the idea of numeric looping already existed and the idea of a comprehension already existed. Anyone familiar with these ideas could look at the new form and immediately understand it in its entirety.
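
      For readers who don’t know Fennel, here is the same grid sketched in Rust rather than Fennel (the `squares` helper is purely illustrative). Note that Rust’s numeric ranges are themselves iterators, so the fourth quadrant falls out for free there:

      ```rust
      // Numeric + comprehension: the quadrant Fennel's `fcollect` fills.
      // In Rust a numeric range is itself an iterator, so map + collect covers it.
      fn squares(n: i32) -> Vec<i32> {
          (1..=n).map(|i| i * i).collect()
      }

      fn main() {
          // numeric + side effects (Fennel's `for`)
          for i in 1..=3 {
              println!("count {i}");
          }
          // iterator + side effects (Fennel's `each`)
          for word in ["hello", "world"] {
              println!("{word}");
          }
          // iterator + comprehension (Fennel's `collect`)
          let doubled: Vec<i32> = [1, 2, 3].iter().map(|x| x * 2).collect();
          assert_eq!(doubled, vec![2, 4, 6]);
          // numeric + comprehension (Fennel's `fcollect`)
          assert_eq!(squares(3), vec![1, 4, 9]);
      }
      ```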

      Being able to identify which changes fill in gaps vs extending the surface area is a very valuable perspective for a language designer IMO.

      1. 10

        I feel similarly about several Rust features.

        For example, Generic Associated Types are a huge language feature from an implementation point of view, but they fill a very obvious gap in the language. In fact, they are so natural that before their introduction many people would intuitively write the exact syntax expecting it to work.
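
        A sketch of that “obvious gap” in today’s Rust: the canonical GAT motivating example is a lending iterator, where the associated type borrows from the iterator itself. The `LendingIterator` and `Growing` names below are illustrative, not from any particular library:

        ```rust
        // Before GATs stabilized (Rust 1.65), the `type Item<'a>` line was
        // exactly the syntax many people wrote intuitively, and it simply
        // didn't compile.
        trait LendingIterator {
            type Item<'a>
            where
                Self: 'a;
            fn next(&mut self) -> Option<Self::Item<'_>>;
        }

        // An iterator that grows an internal buffer and lends it out each
        // step: a plain `Iterator` cannot express this, because its `Item`
        // cannot borrow from `self`.
        struct Growing {
            buf: Vec<i32>,
            n: i32,
            max: i32,
        }

        impl LendingIterator for Growing {
            type Item<'a> = &'a [i32] where Self: 'a;

            fn next(&mut self) -> Option<Self::Item<'_>> {
                if self.n >= self.max {
                    return None;
                }
                self.n += 1;
                self.buf.push(self.n);
                Some(self.buf.as_slice())
            }
        }

        // Collect the lengths of the lent slices, for demonstration.
        fn lent_lengths(max: i32) -> Vec<usize> {
            let mut it = Growing { buf: Vec::new(), n: 0, max };
            let mut lens = Vec::new();
            while let Some(s) = it.next() {
                lens.push(s.len());
            }
            lens
        }

        fn main() {
            assert_eq!(lent_lengths(3), vec![1, 2, 3]);
        }
        ```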

      2. 4

        for: counts numerically from a start to a finish number to loop thru side effects

        each: uses an iterator to loop thru side effects

        Wouldn’t each suffice if you had a range-like function that produces an iterator/lazy sequence that goes through numbers between start and finish? That’s the way you’d do it in Clojure (and in Scheme, at least philosophically).

        collect: uses an iterator like a list comprehension to return a table

        In Clojure: doseq versus map?

        In other words, to me it looks like these 4 quadrants could be turned into 2 without loss of functionality. Perhaps performance-wise, you need the numeric row?

        1. 2

          Wouldn’t each suffice if you had a range-like function that produces an iterator/lazy sequence that goes through numbers between start and finish?

          Yes, that’s the “different story” referred to above.

          In most languages that would be the best way to solve the problem, but Fennel specifically is a compiler without its own runtime; it relies completely on compile-time features while all runtime features are delegated to the existing VM. Adding an iterator would mean adding a function, while currently all Fennel’s features are special forms and macros (which disappear after compile time).

          Adding a standard library would address this, but A) the cost of maintaining and distributing a standard library is much bigger than the cost of adding a couple of basic forms, and B) even if we made such a standard library, it would be redundant, because plenty of perfectly good ones already exist.

          1. 2

            That’s a pretty cool design actually!

    2. 11

      Interestingly, while writing my article on the fascinating AWK, I was thinking about exactly this problem. I think AWK is one of those languages that was lucky to stop developing. On one hand, few people treat it as a real programming language, which may seem a pity. But on the other hand, because of this it’s universally available, very small, and reasonably fast.

      An interesting fact, mentioned by @andyc (author of Oil Shell): AWK lacks a GC, and because of this it has a pretty severe language restriction: you can’t return an array from a function, only a scalar. Yes, this is terribly limiting. But with respect to the subject being discussed, this can be a good thing, since it keeps the implementation very simple, and thus fast and portable.

      The other language that, to my knowledge, takes minimalism and simplicity seriously is Go. Heck, look at their error handling via returning an error! Still, lots of monumental software has already been written in it (Docker, Terraform, Kubernetes, etc.). I really appreciate their approach of versioning it as 1.XX all the time, with version 2.X probably never going to happen.

      I also think it’s no coincidence that a really pleasant and flexible language (like Python) has a not-so-good, inconsistent ecosystem and platform (infrastructure, tooling, dependencies, versioning, packaging), whereas a “poor” language (Go) has a remarkable, fast, and consistent platform and tooling.

      I also think that a very restrictive (but very smart) BDFL or steering committee may be required to produce a “balanced” language. Community-driven design doesn’t seem to produce anything good (cough, PHP). In this sense I think Jonathan Blow’s strategy of developing his language Jai behind closed doors (only giving access to a limited group of beta users and doing demo streams) is really smart.

      1. 8

        The other such language, to my knowledge, that takes minimalism and simplicity seriously is Go.

        It doesn’t though? Go is neither minimal nor simple.

        Heck, look at their error handling via returning an error!

        It’s rather bad, and yet it’s attached to bespoke MRVs, with more ad-hoc behaviour on top (named return values, two differently half-assed local variable declarations).

      2. 2

        Go recently added generics, which is obviously a very large addition. There have been various proposals to “improve” if err != nil, which have all failed until now, but one might succeed someday. The most obvious thing that might change about Go soon is the addition of a standard iterator, which is in the discussion phase now. Other things on the possible horizon are a short function declaration syntax and the addition of sum types, although I don’t see either happening before iterators.

        So, Go is small-ish today, but I’m not sure if it will stay small forever. I think having generics definitely puts the foot in the door for a lot of “if that, why not this too?” features.

        1. 4

          Go recently added generics, which is obviously a very large addition.

          From an implementation’s standpoint, sure. But as a user? That’s not so clear to me. Take OCaml for instance, it has generics and global type inference, and yet even though generics are a crucial part of the language, they don’t make it that much bigger. On the contrary it enables huge simplification opportunities on the whole standard library and its documentation.

          Now sure, if you’ve never been exposed to generics, the learning curve is not trivial. But this is one of those instances where I tend to go macho gatekeeper: how can you call yourself a professional programmer if you don’t know generics? There’s some basic stuff, including generics and recursion, that I consider mandatory knowledge; anyone not proficient enough should train a bit more first.

          When I first learned that Go wouldn’t have generics from the beginning (despite having a GC, which makes generics a much easier problem than it was in C++ or Rust), I wasn’t just surprised at the choice, I was utterly dismayed by some of the rationale: that somehow people needed a simple language, and omitting this basic feature was the way to do it. I mean, what kind of utter contempt is required for those ivory tower designers to think that programmers aren’t capable of handling something so basic that any student can (and often does) learn it in their very first semester of college?

          Give people some credit. And if they don’t know generics yet, teach them this basic feature. As for those few who can’t learn… well those people just aren’t programmers. Let them work on something else, they’ll be happier for it. (I’m fully aware that in our current society this means firing them, and that’s its own kind of horrible. The fix for that is way off topic here.)

          I concede that adding generics after the fact is a major change that does grow the language quite a bit. Thing is, if they hadn’t botched it and had instead added generics from the very start, the language would be quite a bit smaller than it is now. Backwards compatibility is a bear.

          There have been various proposals to “improve” if err != nil, […] and the addition of sum types,

          Sum types would have lessened the need for multiple return values driven error handling, especially with the right syntax sugar (see ML, Haskell, and Rust for examples). Adding these now will surely grow the language, but if they did it from the start there would have been opportunities for synergies and simplifications.

          It would also start to be a markedly different language: generics with local type inference + sum types begs for pattern matching, so now we hardly need interfaces any more (though we still need modules and namespacing of some kind), and next thing you know you have an ML-like language with a C-like syntax—probably not what they were aiming for.
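
          A small Rust sketch of the “sum types plus sugar” point: the error type is an ordinary sum type (`Result`), pattern matching consumes it, and `?` is the syntactic sugar that replaces `if err != nil` chains:

          ```rust
          use std::num::ParseIntError;

          // The error channel is a plain sum type, so no multiple-return-values
          // convention is needed.
          fn parse_and_double(s: &str) -> Result<i32, ParseIntError> {
              let n: i32 = s.parse()?; // `?` is the sugar: early-return on Err
              Ok(n * 2)
          }

          fn main() {
              // Pattern matching replaces `if err != nil` chains.
              match parse_and_double("21") {
                  Ok(n) => assert_eq!(n, 42),
                  Err(e) => panic!("unexpected error: {e}"),
              }
              assert!(parse_and_double("nope").is_err());
          }
          ```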

    3. 4

      If you haven’t seen Guy Steele’s excellent Growing a Language conference talk, it’s worth your time.

      1. 1

        Yes, it almost feels like Steele is playing a mind trick on the audience in that talk!

    4. 4

      I think that Python and C++ are far from the Pareto optimum for complexity vs expressive power. And this is the fate of most popular languages as they accrete features over time.

      One lesson is there is a need to occasionally start from scratch and create new languages.

      One way to mitigate the problem is to design languages with a small core and an extensible syntax. This makes the implementation more modular: more features can be prototyped and implemented in libraries, so less code needs to be added to the core when new requirements arrive. A benefit of libraries is that you can deprecate old ones and migrate to new ones with a better design.

      Lisp is famous for having a small core and an extensible syntax. There’s a hoary old meme that people sometimes bring up in these discussions, called “The Curse of Lisp”. It’s claimed that Lisp failed, and that this is because of the extensible syntax (macros). I don’t believe the “Curse of Lisp” argument. The Common Lisp standard hasn’t been updated since 1994, and the Common Lisp community does not seem to have a community process for proposing and standardizing language extensions, like PEP for Python or SRFI for Scheme. With no process for standardizing new library APIs, you get fragmentation, and that’s the Lisp curse.

      1. 9

        Lisp shows you can move the line between the syntax, the language, and user code, but I don’t think it shows you can actually simplify programming by doing so.

        The great example of this is the loop macro in Common Lisp. It’s almost another mini programming language. I personally find it incredibly difficult to use, because it seems like a collection of arbitrary special-purpose sequences of tokens with no connection to anything else. And it feels like Lisp here gets away with “simple syntax” on a technicality, only because the syntax of the complex loop macro doesn’t count as the Lisp syntax.

        1. 4

          I prefer Clojure’s loop and R7RS’s named let for that reason. It’s much simpler and gives you the same control flow (even if it’s not quite as convenient in some cases).

        2. 2

          Yeah loop feels a lot like tricksy C macros or complicated iterator chains in Rust. You can probably write it and make it really useful and succinct with some care and some work, but good heckin’ luck finding one in the wild and figuring out what exactly it does and how to modify it sensibly.

      2. 3

        I’ve been working with Scheme a bit lately. I can’t help but feel the approach to R7RS has been pretty nice, if a bit slow-moving: a small core with multiple extensions (SRFIs and the “red”, “tangerine”, etc. releases). The issue is Scheme doesn’t quite go far enough to have portable libraries or package management, but that’s not really a goal.

        It’s too bad the prefix notation is so alien to a lot of people, it’s actually a rather nice way for expressing very dynamic (almost fluid) code. My issues with Lisps are the lack of strong type systems, which I think matter a lot at scale and help immensely for navigating a codebase larger than your head.

        1. 4

          Racket has typed/racket. It’s very good. It generates contracts at the typed/untyped boundary, for when you want to connect it to untyped code.

          1. 2

            Not a fan of typed/racket’s syntax, and Racket itself is slower than I’d like. What I want is a Lisp with a strong type system that compiles to native code. Carp gets part of the way there, but it doesn’t seem like it’s quite ready for prime time yet, or maybe ever.

            1. 2

              I feel you.

    5. 4

      Most programmers refactor their code as it grows to remove accidental complexity. Language designers should do the same, refactoring their language as they evolve it, but they usually don’t, because most users prioritize backward compatibility over simplicity. I’m not sure I agree with that, but it seems to be the dominant approach.

      1. 11

        The problem is that it’s really hard to do this and not break shit along the way. Programmers get grumpy when their code breaks. See Elm 0.17 to 0.18, and to a lesser extent perhaps to 0.19. There’s also the issue of, if you break someone’s code once, are you going to do it again? And again? When does it stop?

        I am semi-seriously considering something like this with my own language, Garnet. After 1.0 release, the opportunity for breaking changes would occur every X years, probably with X increasing over time. Maybe a fibonacci sequence or something; the gaps would go 2, 3, 5, 8 … years, so you always know when to expect them to happen long in advance. Somewhat inspired by Rust’s editions, in terms of “this is a known-stable language release”, but able to break backwards compat (and also being less frequent).

        1. 2

          The problem is that it’s really hard to do this and not break shit along the way.

          Agreed. Then we end up with a “perfect” language but no one using it. The next step for language designers would be to invest in tools that would help with refactoring the code as the language evolves. I remember Go did a bit of that in the early days before 1.0. But that was mostly for relatively trivial transformations.

          1. 5

            Rust does it too - they have an “edition” system, and every three years a new edition ships that can contain new, backwards-incompatible syntax.

            What differentiates this from e.g. the C++11, C++14, C++17, etc. situation is that you get to mix and match these editions within the same project; the compiler handles it fine. Also, changes made in editions are designed in such a way that fixing the breakage in your code is easy and largely automated, so it suffices to run cargo fix --edition in nearly all cases.

            1. 4

              TBF lots of languages have some sort of evolution feature. Python has __future__ imports, Perl has feature and version pragmas, …

              I think the great success of Rust’s editions system is obviously the eminently reliable migration tool. And as for

              you get to mix-and-match these editions within the same project, the compiler handles it fine

              you don’t, really; the edition is a per-crate stricture. Obviously you can have multiple crates in a given project, but it’s much coarser. If anything, you can “mix and match” C++ a lot more. GCC actually guarantees that as long as all your objects are built with the same compiler, you can link them even if they use different versions of the standard. And you can even link cross-version if the features were not considered unstable in that compiler version (so e.g. you can’t link C++17 from GCC7 and C++17 from GCC8, because C++17 support was considered unstable in GCC7).

              But I think that’s advantageous.

              Another major advantage of Rust is simply that it’s an extremely statically typed language, so there are lots of language improvements which can be done with middling syntax tweaks and updates to the prelude, whereas adding a builtin to a dynamically typed language has the potential to break everything, with limited visibility. Not being object-oriented (so largely being “early bound” and statically dispatched) and having very strict visibility control also means it’s difficult for downstream code to rely on implementation details.

        2. 1

          With a sufficiently expressive macro system, I think you could pull this off (relatively) easily:

          When features get removed, rather than being axed completely, they get moved to a standard library macro. Then, when source files are compiled by a new version that has removed built-in support for a feature, the compiler automatically inserts the import at the top of any source file that uses it. Those macro contexts could bar compatibility with (from their perspective) future features, so that if you want to use new language features in a block of code using a legacy macro, you need to refactor the legacy macro away. Doing so would decrease the maintenance burden substantially, because you don’t need to worry about new language features conflicting with now-sunset ones.

          I think that gives the best of both worlds: reduction of core language complexity, while not breaking source files that have been left untouched since the times of dinosaurs.

    6. 2

      I appreciate Roberto Ierusalimschy’s talk on this topic in the context of Lua:

      How much does it cost


    7. 2

      I think the key idea here is the idea of unintended misuse, from the quote “The larger a language is, the easier it is for users to misuse it without even knowing it.” C++ suffers from a vast proliferation of “foot-guns,” which is a colloquialism that I allege sometimes means the same thing: features that engender misuse. Another aspect of it is interactions between features that complicate reasoning. The classic example from C++ is the relationship between default arguments and overloaded functions. Most languages don’t have both these features. This leads us to the discussion of orthogonality—the idea that features A and B are non-interacting. “Bigness” becomes a design smell because it suggests to us that the cross product of features is getting unmanageable by humans, so there could be unpredicted interactions—the condition necessary for unintended misuse.

      But we still have “big” languages like Python, for which the main sticking point is something other than the size of the language itself: package management and performance are usually bigger complaints than Python’s linguistic complexity. I allege that this is because there is a certain unity of design there which is nudging the evolution of the language away from non-orthogonal features. Rust also has this. And I think this is why you see strong reactions to these languages as well: love them or hate them, they have a design ethos.

      1. 2

        I think it goes both ways – if the language is small (like C), it also encourages foot-guns. I don’t know what the optimal middle ground is here, though.

        1. 5

          I’m not a C practitioner, but my sense is that unintentional misuse of C is largely about the memory model and pointers. C++ has these same problems but doubles the surface area (because new and malloc are both present) and then increases it more by making it difficult to tell when allocations occur, and then making it difficult even to tell if you’re looking at function calls or something else thanks to operator overloading. C has a difficult computational model to master, but C++ adds quite a bit of “language” on top of a larger computational model.

          1. 7

            Someone really needs to explain the bashing on operator overloading. Function overloading doesn’t get nearly as much criticism, and it’s the exact same thing. Perhaps even a bit worse, since the dispatch is based on the types of arbitrarily many arguments.

            And by the way, it’s the absence of operator overloading that would surprise me. First, to some extent the base operators are already overloaded. Second, operators are fundamentally functions with a fancy syntax. They should enjoy the same flexibility as regular functions, thus making the language more orthogonal.

            (Now you probably don’t want to give an address (function pointer) to primitives of your language, and I know operators tend to implement primitives. That’s the best objection I can come up with right now.)

            1. 3

              I think there are two sources of objection, one named by @matklad below having to do with performance-oriented developers coming from C. The other pertains to overloading generally and is (AFAICT) based on the non-orthogonal combination of function overloading with functions permitting default arguments that makes resolution cognitively demanding even on people who like operator overloading in other languages.

              1. 1

                Yeah, in my experience Rust’s overloaded operators work pretty well, because there’s no default args or overloading of function args. If you have an operator somewhere in your program, there is exactly one function it always calls in that context, determined 100% by the type of the first argument. That’s a lot easier to reason about.
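
                  Concretely, that resolution model looks like this (a minimal sketch with an illustrative `Vec2` type): `a + b` desugars to a single trait method, `Add::add`, chosen by the left operand’s type:

                  ```rust
                  use std::ops::Add;

                  #[derive(Debug, Clone, Copy, PartialEq)]
                  struct Vec2 {
                      x: f64,
                      y: f64,
                  }

                  // `a + b` where `a: Vec2` always means exactly this method:
                  // there are no default arguments and no overload set to search.
                  impl Add for Vec2 {
                      type Output = Vec2;
                      fn add(self, rhs: Vec2) -> Vec2 {
                          Vec2 { x: self.x + rhs.x, y: self.y + rhs.y }
                      }
                  }

                  fn main() {
                      let a = Vec2 { x: 1.0, y: 2.0 };
                      let b = Vec2 { x: 3.0, y: 4.0 };
                      assert_eq!(a + b, Vec2 { x: 4.0, y: 6.0 });
                  }
                  ```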

                1. 2
                  1. 1

                    Not really


                    Thanks for the correction. Still no subtyping though! I’m like 45% correct!

                2. 1

                  My impression, as someone who is about halfway through the Rust book, is that in general Rust provides abstractions but does so in a way that is unlikely to lead to unexpected performance issues. Is that your experience?

                  1. 2

                    More or less. Doing things with a potentially-expensive performance cost is generally opt-in, not the default. Creating/copying a heap object, locking/unlocking a mutex, calling something via dynamic dispatch or a function through a pointer, etc. Part of it is lang design, part of it is stdlib design.

            2. 3

              That’s the best objection I can come up with right now.

              But that’s the thing! That’s exactly what perf-sensitive people object to: needing to mentally double-check if + is an intrinsic, or a user-defined function.

              The second class of objections is to operator overloading that also allows defining custom operators and precedence rules. That obviously increases complexity a lot.

              The third class of objections is that implementing operator overloading sometimes requires extra linguistic machinery elsewhere. C++ started with a desire to overload +, and ended up with std::reference_wrapper, to name a single example.

              1. 3

                It would be neat to have a language where the intrinsics are defined like functions, but then operators can be defined to call the intrinsics. So, if your CPU has a div+mod instruction, you can call __divmod(x, y), but to make it convenient, you can bind it to a custom operator like define /% <= __divmod; let z, rem = x /% y.
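
                  Rust doesn’t allow defining new operators, but the “intrinsic as a plain function” half of this idea can be sketched with an ordinary function standing in for the hypothetical `__divmod`, plus a trait method (the `DivMod` trait below is invented for illustration) approximating the proposed `x /% y` binding:

                  ```rust
                  // An ordinary function standing in for the hypothetical
                  // `__divmod` intrinsic.
                  fn divmod(x: i64, y: i64) -> (i64, i64) {
                      (x / y, x % y)
                  }

                  // Rust has no user-defined operators, but a trait method
                  // gets close to the proposed `x /% y` binding.
                  trait DivMod {
                      fn divmod(self, rhs: Self) -> (Self, Self)
                      where
                          Self: Sized;
                  }

                  impl DivMod for i64 {
                      fn divmod(self, rhs: Self) -> (Self, Self) {
                          divmod(self, rhs)
                      }
                  }

                  fn main() {
                      let (z, rem) = 17i64.divmod(5);
                      assert_eq!((z, rem), (3, 2));
                  }
                  ```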

          2. 3

            I don’t think C++ is a good example in this discussion, because it’s an outlier in language design. It’s not just “big”, but also built on multiple layers of legacy features it doesn’t want any more, but can’t remove. There is a lot of redundancy and (if it wasn’t for back compat) unnecessary complexity in it. So it’s not a given that a language that isn’t small is necessarily like C++.

            Rust is relatively big and complex, but mostly orthogonal in design, and has relatively few surprising behaviors and footguns. Swift went for big and clever design with lots of implicit behaviors, but its features are not as dangerous, and apart from SwiftUI, they don’t fragment the language.

            1. 1

              On the contrary, I think that the whole point of the article is to suss out what it is about big languages that make them worrisome, and the tendency of languages to inflate over decades. C++ is a pathological case in many ways but Rust and Swift are still very young.

              1. 3

                I keep hearing this “if it keeps growing, it’ll end up like C++”, but I don’t think this has actually ever happened. I can’t think of any language that has painted itself in a corner as much as C++.

                Scheme is older than C++. Ada and Erlang are about as old as C++, and did not jump the shark. Java and C# have been growing for a long time now, expanded a lot, and still hold reasonably well. Even PHP that has a reputation for being a mess and has tough backwards compat constraints, managed to gradually move in the less messy direction.

                1. 3

                  I can’t think of any language that has painted itself in a corner as much as C++.

                  As much as C++, and survived? None that I can think of. Honorable mentions? I can think of several: Perl 5, bash/unix shell, PHP. Scala and C# keep trying to get there too, from what I can tell.

                2. 1

                  I’m not sure what I said above that engendered this response.

    8. 2

      Too large, hard to use (C++). Too small, hard to use (Turing tarpit).

      1. 6

        There’s another axis: too small, hard to optimise. Complexity has to live somewhere. There are three places something like a complex control-flow structure can live:

        • The language
        • The standard library
        • User code

        As you go down the list, it becomes harder for an implementation to optimise. Smalltalk is the extreme example of a tiny language: the entire spec fits on one piece of paper, and most control flow is in the standard library. Even if statements are just messages sent to either a true or false object, which will either execute the closure passed as an argument or not. This would be painfully slow, so most Smalltalk implementations have a set of ‘primitive’ methods for these things that are not subject to dynamic dispatch, but you then end up with exciting performance cliffs (why is your custom loop construct a factor of ten slower than the standard library one?).
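
        That Smalltalk arrangement can be mimicked in Rust as a sketch: the conditional below is just a library method taking closures, with no dedicated syntax. The `SmalltalkBool` trait is an illustrative stand-in for sending ifTrue:ifFalse: to a Boolean object:

        ```rust
        // Smalltalk-style control flow as a plain library method.
        trait SmalltalkBool {
            fn if_true_else<T>(self, then_: impl FnOnce() -> T, else_: impl FnOnce() -> T) -> T;
        }

        impl SmalltalkBool for bool {
            fn if_true_else<T>(self, then_: impl FnOnce() -> T, else_: impl FnOnce() -> T) -> T {
                // The "primitive" underneath: a real Smalltalk VM special-cases
                // this dispatch for speed, which is exactly the performance-cliff
                // trade-off described above.
                if self { then_() } else { else_() }
            }
        }

        fn main() {
            let x = 5;
            let label = (x > 3).if_true_else(|| "big", || "small");
            assert_eq!(label, "big");
        }
        ```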

        Compilers like to have more information. A lot of modern compilers assume semantics of standard library functions because they are as well specified as language features but that then means you end up splitting the implementation in subtle and complex ways.

        I personally like the Smalltalk philosophy that says ‘nothing should go in the language if it can go in the standard library’. Any modern language is going to have to think hard about module versioning and supporting versions of module with different interface versions in the same program if it wants to scale to modern software engineering problems. If it solves these problems, putting most features in the standard library provides a way of introducing breaking changes without breaking old programs, but it does require a lot of care to enable optimisation. Higher-level control flow structures, for example, often come with more information about aliasing and interference than raw loops, but if they’re in the standard library and the compiler just sees a loop then they’re harder to optimise.

        1. 1

          ‘nothing should go in the language if it can go in the standard library’

          That’s also kind of the philosophy of C++, in a way, and it’s why C++ has std::variant instead of something sane like proper sum types. Yikes.

          1. 1

            There’s a bit of a trade-off here. You can implement variant only if you have type-unsafe unions; if you want type safety, you can’t implement variant in the library.

    9. 1

      With Python I hard opted out of keeping up with new language features after they added the walrus operator.

      Fundamentally, I think it’s the lack of product management that’s normal in open source. Developers, left to their own devices, will add frivolous features for their own pleasure and technical satisfaction, because the issues really affecting people are too hard or too unappealing to tackle.

      Python has lots of really hairy parts that are seeing some progress recently (cough pip) but could use a lot more TLC.