1. 43
    1. 26

      I have a “no language flamewars” rule but I’d like to make an exception because this reads a lot like a similar “defense” I wrote about C++ way, way back (2007-ish?) and which I have repented in the meantime. Not due to more exposure to language theory but due to more practical exposure to large codebases, in C++ and other languages.

      I think the author takes some points for granted, and they’re not universally correct. Some of these include:

      1. That language complexity and application complexity are not just separable, but that more of the former means less of the latter.

      To illustrate a counterpoint (“show the door before showing the key”) with a problem that I’ve been banging my head against for weeks: safely sharing data between IRQ and non-IRQ contexts in Rust is… not fun. (see https://github.com/rust-embedded/wg/issues/294 or https://github.com/rust-embedded/not-yet-awesome-embedded-rust#sharing-data-with-interrupts for some background discussion). Even the best attempts at doing it idiomatically have been absolutely horrifying so far. Debugging any code that uses them is mind-boggling, and exposes you to virtually every nuance of the underlying ownership and concurrency model of the language, many (most?) of which either don’t apply at all, or don’t quite apply the way the underlying model thinks they might. Most of this complexity stems from the fact that you’re not just building abstractions on top of the ones that the hardware provides, you’re also building on top of (or reconciling them with) the ones that the language provides, and the other abstractions built in your code. You’re effectively building on three mutually-incompatible foundations. Kind of like building a bridge on pontoons that are themselves floating in large tanks placed on other pontoons on a river.

      More generally: except for the most trivial of bugs, which stem only from oversights (forgot to call an init function, forgot to update some piece of state, whatever), most bugs are a mismatch between your understanding of what the code does, and what it actually does. Understanding the language, in all of its nuances, is a pre-condition to figuring out the latter. You rarely get to reconcile the two without understanding the language, except by sheer luck.

      1. That you can build incrementally more powerful languages only by incrementally adding more features

      There’s a tongue-in-cheek law that I’m very fond of – Mo’s Law of Evolutionary Development – which says that you can’t get to the moon by climbing successively taller trees.

      A hypothetical language like the one the author considers – exactly like Python, but without classes – would obviously suck because a lot of the code that is currently very concise would get pretty verbose. Presumably, that’s why they added classes to Python in the first place. However, that doesn’t mean a better abstraction mechanism, which could, for example, unify classes, dictionaries, and enums (currently provided through a standard library feature) wouldn’t have been possible. That would get you the same concise code, only with less complexity.

      It’s probably not the best example – maybe someone who’s more familiar with Python’s growing pains could figure out a better one – but I think it carries the point across: not all language features are created equal. Some of them are really good building blocks for higher-level abstractions, and you can express many things with them. Others not so much. Having more of them doesn’t necessarily make a language better at expressing things than another one.

      Edit: Or, to put it another way, there is such a thing as an “evolutionary leap”. One sufficiently powerful mechanism can make several less powerful mechanisms obsolete. Templates and template functions, for example, made a whole class of C hacks (dispatch tables, void pointer magic) completely unnecessary.

      1. That language complexity is equally distributed among language users.

      Many years ago, when C++ was more or less where Rust will be in a few years, you’d hear things like “oh yeah, we’re a C++ shop, but we don’t use multiple inheritance/exceptions/whatever”. Because of the sheer complexity of the language, very few people really knew, like, all of it, and most of them were either the ones writing compilers, or language evangelists who didn’t have to live with boring code for more than a consulting contract’s length. This led to the development of all sorts of “local dialects” that made integrating third-party code a nightmare. This is, I think, one of the reasons why Boost was so successful. For a long time, there were a lot of things in Boost that were much better implemented elsewhere. But getting these other implementations to work together – or, actually, just getting them to work with your compiler and compiler flags – was really unpleasant.

      1. That there are sufficiently few abstractions in the world of software that you can probably handle them all at the language level

      I don’t have proof for this, it’s just an opinion. IMHO there are so many abstractions out there, many of them specific to all sorts of problems, that the chances of a language ever encompassing a sufficiently large set of them in a practical manner are close to zero. More concisely: I think that hoping to solve all programming problems by providing a specific, highly expressive abstraction for each of them is about as productive as hoping to solve them by reducing them all to lambda calculus.

      1. 7

        Interesting response, thank you. I feel like neither this nor the article have a tone that is so absolutist or fanatical that there is a risk of a flamewar. I am glad I read both and feel I understand more about this topic now.

        It seems to me after reading this that the best solution for language complexity is exactly what we are all doing already:

        Design some languages, observe what patterns arise, extend those languages to simplify and codify those patterns, repeat until the language becomes too clunky and over-complicated. Then create new languages that use the lessons learned in the previous generation, but avoid the pitfalls. Patterns that are successful and widely used will slowly permeate every language removing the need to learn them when changing languages (apart from syntax differences).

        As long as there is a rich enough ecosystem everyone can choose the level of complexity they need/want, and the average expressiveness to complexity ratio for all languages should increase over time.

        1. 9

          Design some languages, observe what patterns arise, extend those languages to simplify and codify those patterns, repeat until the language becomes too clunky and over-complicated. Then create new languages that use the lessons learned in the previous generation, but avoid the pitfalls. Patterns that are successful and widely used will slowly permeate every language removing the need to learn them when changing languages (apart from syntax differences).

          I like the sound of this but it seems to rely on the unstated assumption that good design plays a large part in language adoption, which I believe is disproved by looking at … gestures at “the industry”

          But you’re absolutely right that learning to identify emergent patterns is probably the most important skill in designing a language. The problem with the article IMO is that it says “being complicated is OK”, which IMO is a major oversimplification. The right conclusion to draw is that you have a limited complexity budget, and you need to spend it wisely. Refusing to add any complexity at all is a mistake, just as spending complexity on redundant features is a mistake. In the end it’s kind of a rehash of the “accidental vs essential complexity” discussion in Out of the Tarpit.

          1. 2

            seems to rely on the unstated assumption that good design plays a large part in language adoption, which I believe is disproved

            I agree, and xigoi below made much the same point. But I believe that it is unproductive to try to dictate to others what language they should use. Sure, if they would listen it would probably make their lives easier, but you can’t tell people they are wrong; it simply does not help. What this system at least offers is that those with an open mind and a willingness to try new things have the opportunity to get better tools regularly. The industry does follow along eventually – after all, most of the world isn’t using languages from the 80s anymore – but if you are spending millions developing commercial software it is not practical to switch languages frequently, so some lag is to be expected.

        2. 3

          Oh, yeah, I don’t think this is inflammatory in any way :). I just… I usually prefer to stay away from discussions on the relative merits of languages and various tools. They’re generally not very productive.

        3. 3

          The only problem with this is the part where nobody will use the better languages because “there’s nothing wrong with ${old language}”.

          1. 2

            It’s really hard to explain to people that a language is better because it can’t do something. When Java was introduced, people complained that it couldn’t do pointer arithmetic yet the lack of pointer operations that can violate type safety is a key benefit of languages like Java. It’s easy to explain that language X is better than language Y because X has a feature that Y doesn’t. It’s much harder to explain that language Y is better because it lacks a feature that X has and this feature makes it easier to introduce bugs / harder to reason locally about the effects of code.

          2. 2

            That reminds me of a former colleague of mine who had been freelancing for a long time with PHP and didn’t understand my fascination with other languages. He claimed PHP was the best language and would go on lyrically about how great its array functionality is. When asked what languages he knew… PHP. Oh, and Pascal from back in school.

          3. 1

            As someone who has been pitching Rust for years, I don’t fear this argument at all. It’s a “that’s all I have left” argument.

            Now there are multiple cases: 1) there’s something wrong with the old language, they just don’t see it yet. 2) there’s something wrong with the old language, but they’ve weighed how much of it the new language would actually fix. 3) the new language doesn’t meet their needs. 4) they just need to hold a line and are not willing to switch anyway, without giving the problem any thought.

            1), 2), and 3) can change very fast – the key here is taking that answer at face value; they will probably come back to you. With 4), you don’t want to work with them anyway.

      2. 3

        Some of them are really good building blocks for higher-level abstractions, and you can express many things with them. Others not so much. Having more of them doesn’t necessarily make a language better at expressing things than another one.

        This reminds me of Lua’s “mechanisms instead of policies” principle.

    2. 8

      This is so weird because I agree with the general concept but I think it would be a much better article with a different example. Python without classes would legitimately be a much better language; encapsulation is so important that it needs to be available in a way that’s not coupled to inheritance and polymorphism.

      The article also ignores the fact that many of these features can be added by a 3rd-party library in a well-designed language, see Lua and the many 3rd-party class systems available for it as opt-in features, or Clojure’s pattern matching and CSP/goroutine libraries.

      1. 6

        It is almost a rite of passage for Lua aficionados to write their own OO class system. I’ve seen so many of them.

      2. 3

        Python without classes

        I don’t think you could write such a language while retaining the features which make Python great. Python is one of the most object-oriented languages there is. Unlike e.g. Java, everything is an object. Everything can be passed around, introspected, and modified*. Without this uniformity, the language loses most of its power. Without classes you would have a JavaScript-like prototype system (which admittedly is not too far from what exists now), but everything would be done with… closures? Of course, Lua does the “everything is a table” thing, so maybe that’s what you’d go with. But I think saying it would “legitimately be a much better language” is missing the point.

        * yes, types without a dictoffset can’t be modified directly, but you can still extend them like anything else
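        To make the “everything is an object” point concrete, here’s a minimal sketch (an invented toy example, not from the article):

```python
# Classes and functions are ordinary values: they can be passed around,
# introspected, and patched at runtime. Greeter is a made-up example.

class Greeter:
    def hello(self):
        return "hello"

# The class itself is an object you can introspect...
print(type(Greeter))            # <class 'type'>
print("hello" in dir(Greeter))  # True

# ...and extend at runtime, like any other namespace.
def shout(self):
    return self.hello().upper()

Greeter.shout = shout
print(Greeter().shout())        # HELLO
```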

        1. 3

          everything would be done with… closures?

          Yes, absolutely. If you changed Python so it had more of an emphasis on closures it would be a big improvement; it would be a lot more like Lua.
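          A minimal sketch of that closure-based style (a toy counter, not a claim about how a class-less Python would actually be specified):

```python
# State lives in an enclosing scope instead of instance attributes, and
# the "methods" are just functions sharing that scope. Roughly the style
# a closure-heavy, class-less Python (or idiomatic Lua) pushes you toward.

def make_counter(start=0):
    count = start

    def increment(step=1):
        nonlocal count
        count += step
        return count

    def value():
        return count

    return increment, value

inc, val = make_counter()
inc()
inc(5)
print(val())  # 6
```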

      3. 2

        I actually used Lua to explain Python classes on Quora, almost 10 years ago.

    3. 5

      I feel like pretty much every “design pattern” is an example of an “emergent language feature”, as the author named them. Which is what I dislike about Java-style OOP.

      1. 8

        Design patterns are descriptive, not prescriptive. Whenever objects are composed in a certain way, including composition of functions, then the corresponding design pattern describes the behavior of the composition. They don’t just emerge from languages you don’t like, but from any Turing-complete language.

        1. 4

          Whenever there is a pattern in your code, you should abstract it away. Some languages will allow you to do that, some won’t.

          1. 9

            Expressive languages still have patterns, people just refuse to call them patterns because they think expressive languages don’t have patterns.

            1. 2

              Can you give an example?

              1. 3

                If nothing else, Nim’s metaprogramming tends to have certain patterns, like dispatching on the kind of an AST node.

                Also, XiDoc uses when isMainModule and not defined(js):, a very common pattern in Nim programs that want to both expose a library and an executable from the same file. Just because the pattern isn’t obnoxious to deal with doesn’t mean it isn’t a pattern, a thing repeated where needed. Janet solves that particular conundrum differently, and Nim’s is a riff on a similar pattern in Python.

                It is also worth calling out that many patterns are as much about the way data is arranged as they are about the way code is arranged.

                And one could argue that convention-based programming is all about fitting into a prescribed pattern.

                That being said, making patterns in code less obnoxious to deal with is nearly always pleasant, IMO.
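                For reference, a minimal sketch of the Python pattern Nim’s is a riff on (slugify is a made-up example function):

```python
# A module that is both an importable library and a runnable script:
# the guarded block runs only when the file is executed directly,
# never on import.

def slugify(title):
    """Library part: usable from other modules via import."""
    return title.strip().lower().replace(" ", "-")

if __name__ == "__main__":
    # Script part: a tiny CLI, skipped entirely on import.
    import sys
    print(slugify(" ".join(sys.argv[1:])))
```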

              2. 1

                I’ll see if I can come up with one, though it might be tricky because I don’t think we have any languages in common

                1. 2

                  Just use pseudocode.

                  1. 1

                    I don’t see how I could. Patterns are found by examining how people use languages in the wild, and there’s no pseudocode in the wild.

              3. 1

                Ruby is an ultra-expressive language and very strong on using patterns like attr_accessor and similar to manage that complexity. Most of those patterns end up as libraries, but their use is as schematic as using patterns in other languages.

                1. 2

                  I wouldn’t call attr_accessor a pattern. It’s just a method. That’s like calling print a pattern.

                  1. 1

                    It’s the implementation of the “Accessor Pattern” in a method to encourage its use. Ruby also uses iteration as a general pattern over loops, and the method is called “each”. It uses conversion over casts, implemented through “to_” methods. Ultimately, all pattern use boils down to some form of using language features, and methods are just the most common construct in Ruby.

                    Ruby is very brief in its pattern use, but there’s a ton of them to know to program it competently.

                    1. 2

                      Well, that’s exactly my point. The pattern is abstracted away into a method, so there’s no repetitive code.

                      1. 1

                        Patterns are not about repetitive code. That may be related, but many patterns are around common structure and names.

                        Related discussion: attr_accessor :foo, :bar, :batz or three lines of attr_accessor :foo...? If DRY is your only principle, the first, but there’s a strong school that says the second is preferable.(*)

                        Java Code is repetitive, but strongly pattern-based.

                        (*) which is why i prefer Single Source of Truth over DRY. https://en.wikipedia.org/wiki/Single_source_of_truth

                        1. 3

                          In my opinion, what distinguishes a pattern from normal code is that you have to recognize it as a pattern instead of taking the code at face value. For example, if you see the following Java code:

                          public class Vector2 {
                              private double x;
                              public double getX() {
                                  return x;
                              }
                              public void setX(double newX) {
                                  x = newX;
                              }
                              private double y;
                              public double getY() {
                                  return y;
                              }
                              public void setY(double newY) {
                                  y = newY;
                              }
                          }
                          

                          If you just read the code without recognizing that it’s “just” two properties with getters and setters, you’ll have to think about what it does in the larger picture. In contrast, the Ruby version

                          class Vector2
                            attr_accessor :x, :y
                          end
                          

                          makes the intention clear.

                    2. 1

                      IMHO, to_xxx is an anti-pattern. Making one type out of another should be a constructor on the target type. to_xxx in Java is an artefact of Java types being inextensible.

                      1. 1

                        How far would you take that? “Each class which wants to support serialization to string should add a constructor to the string class” seems a little extreme to me.

              4. 1

                Clojure (with-open), Common Lisp (with-open-file, with-open-stream), and Ruby (File.open with a block) have the with-foo pattern for scoped resource usage.
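                Python has the same pattern via the with statement; a minimal sketch of rolling your own with-foo (resource names invented for illustration):

```python
# Scoped resource usage, Python style: the context manager guarantees
# cleanup when the block exits, even on exceptions.
from contextlib import contextmanager

@contextmanager
def with_resource(name, log):
    log.append("open " + name)
    try:
        yield name
    finally:
        log.append("close " + name)

log = []
with with_resource("db", log) as r:
    log.append("use " + r)

print(log)  # ['open db', 'use db', 'close db']
```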

          2. 4

            One of the frustrating things about design patterns in software is that so much of the original intent and thought behind “patterns” was lost in the telephone game of people talking about software engineering over the years. Christopher Alexander and the Gang of Four would both tell you that if you can abstract it away, it’s—by definition—not a pattern.

            The key thing that separates patterns from data structures and algorithms is that patterns are slightly different every time they are manifested. A pattern is a guideline for how one could structure a solution to a problem, but not a solution itself. Taking a pattern and applying it still requires human judgement and understanding of the specific context in which it’s applied to know what to implement.

          3. 2

            Abstraction is not factorization. Yes, whenever there is a repetition in code, we should factor it out.

            1. 1

              Nah. That repetition is very likely to be incidental, and factoring it out has the effect of reifying an abstraction that is unlikely to stand the test of time.

          4. 1

            This sentence implies definitions of “pattern” and “abstraction” that aren’t familiar to me. Can you say more?

            1. 2

              A pattern is something that is repetitive, either directly or on a more abstract level. If you have a lot of the same code, it violates the DRY principle and you should extract it into a procedure/function/method. However, this won’t always be possible when the repetition is more nuanced. In this case, languages like Java will simply say “fuck you” and leave you to either write a lot of repetitive code, or even use a code generator to write it automatically, which produces heaps of code that are hard to navigate through to find the true substance. Meanwhile, a more powerful language will offer features like templates or macros that can extract the nuanced repetition in the same way that procedures can extract simple repetition.
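              To sketch the kind of nuanced repetition a plain procedure can’t extract, here is a hypothetical attr_accessor-style class decorator in Python (all names invented for illustration):

```python
# The repeated *structure* (a getter/setter pair per field) is captured
# once, in ordinary code, instead of being re-typed or generated.

def attr_accessor(*names):
    def decorate(cls):
        for name in names:
            # Default arguments pin each name per loop iteration.
            def getter(self, _n=name):
                return getattr(self, "_" + _n)
            def setter(self, value, _n=name):
                setattr(self, "_" + _n, value)
            setattr(cls, name, property(getter, setter))
        return cls
    return decorate

@attr_accessor("x", "y")
class Vector2:
    def __init__(self, x, y):
        self.x = x
        self.y = y

v = Vector2(3, 4)
v.x = 10
print(v.x, v.y)  # 10 4
```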

              1. 1

                Thanks for the response. I think you’re using a pretty esoteric definition of “pattern”. Also, DRY isn’t really about literal repetition of code, it’s about sources of truth in the information sense. It’s often far better to leave repetition alone than to try and abstract it away!

        2. 2

          Huh! I’ve never heard this perspective before. I’ve always understood design patterns as explicitly prescriptive: they originate from repetition in practice, yes, true — but they exist as well-defined things, serving as a kind of shared domain language for practitioners. I’ve always liked the notion that design patterns are “cages for complexity” — things to actively apply to problems to keep them tractable.

    4. 5

      The complexity of Rust’s ownership model is not added to an otherwise simple program, it is merely the compiler being extremely pedantic about your code obeying rules it had to obey anyway.

      Is this true?

      What the author calls features are, I think, actually better understood as models. Functions, structs, the borrow checker – all are inventions, reified via keywords and constraints and semantics, in the hope that they make programming easier or better or whatever. No feature is just objectively good, right? Goodness is just how well the feature has stood the test of time. And good languages – systems composed of many interacting models – aren’t just a pile of good features. Each feature interacts with the others; adding a capability can easily deliver net negative value if it doesn’t compose well with the rest of the system. Good languages are designed holistically!

      1. 2

        I take a different perspective: in many modern architectures, sharing is reduced as much as possible, and many modern programs are effectively constructed out of singly-owned values, particularly HTTP services with a shared-nothing architecture. This is only true at the outer application level; of course frameworks have shared internal state (such as thread pools, etc.). But garbage-collected languages make it hard to enforce the unique ownership of a value (because adding a reference is hard to trace). This is, for example, the thinking behind efforts like JCoBox, which tried to make sure that values passed between actors don’t reference the memory space of the sending actor unless immutable. https://softech.informatik.uni-kl.de/homepage/de/software/JCoBox/

        Rust’s strong point is systems where full and accurate passing of ownership is a useful property. I don’t agree with “Rust’s added complexity”. Rust comes from a wholly different direction, one which needs people to rewire their brains. But it’s also a complex language that needs a deep understanding of its properties to exploit its features.

        1. 3

            Rust’s strong point is systems where full and accurate passing of ownership is a useful property.

          Sure! That’s one of the models baked in to Rust’s grammar, or semantics, or something like that. Very often it’s a useful and productive model! But I think not always. Maybe not even most of the time.

          1. 2

            I disagree. Answering the question “Which component is responsible for this piece of data?” is an extremely common source of bugs.

            Note that Rust is a flavour of that model - currently the only one that has widespread adoption and can somewhat be seen as the most aggressive implementation of it. But other languages are growing ownership models as well.
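            A toy Python sketch of that bug class (invented example), where two components silently co-own one mutable value:

```python
# Two "owners", one list: a mutation in one component corrupts the
# other's view. Ownership rules like Rust's reject this aliasing at
# compile time; here it just misbehaves quietly.

def make_report(entries):
    # Bug: stores the caller's list instead of copying it.
    return {"title": "daily", "entries": entries}

log = ["start"]
report = make_report(log)

log.append("debug noise")  # the log's owner keeps appending...

# ...and the report changes behind its owner's back.
print(report["entries"])         # ['start', 'debug noise']
print(report["entries"] is log)  # True
```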

      2. 2

        To a large extent it is true. The problem of ownership (when to call free()) is fundamental. GC is one way of tackling it (and typically only for the heap, not other resources). If you don’t have a GC, or infinite memory, you have to solve it some other way. Tracking ownership manually, without the compiler’s help, gets difficult once the program outgrows what you can fit in your head at once.

        There are multiple ways to model it. I used to model the problem in my head as “does it leak or crash?”, and in retrospect that was a terrible way to frame it. I feel this bad perspective complicated things, like orbits in a geocentric model. Formalizing the problem as moves and borrows adds clarity to what’s happening. It turns informal practices into types.

        However, Rust’s borrowing model can’t express all types of ownership. Most notably, it can’t reason about circular (self-referential) structures. When you hit a case that doesn’t fit Rust’s model, then you either need unsafe or some runtime check, and that feels disappointing compared to the level of safety and performance Rust usually gives.

        1. 1

          However, Rust’s borrowing model can’t express all types of ownership. Most notably, it can’t reason about circular (self-referential) structures. When you hit a case that doesn’t fit Rust’s model, then you either need unsafe or some runtime check, and that feels disappointing compared to the level of safety and performance Rust usually gives.

          The trick here is that Rust – on a type system level – models logical ownership. Box&lt;T&gt; is a box owning T on the heap, with a raw pointer behind it (which may or may not be a fat pointer). But that doesn’t matter as long as it’s a private detail. So the criticism that Rust can’t implement self-referential structs without unsafe is minor, as long as users can rely on the internals not leaking. For that reason, Rust is strong on privacy (incidentally, Ada takes the same approach here).

          Rust is built for safe composition of large programs, while relying on the correct implementation of the components.

      3. 2

        I think it’s half-true. Some resources have an ownership model regardless of the language, like what files are open/writeable or a database connection or something. Rust goes further and applies that ownership model to all shared state. Withoutboats has an excellent post exploring what if Rust still had the borrow checker, but without treating all heap memory as a resource: https://without.boats/blog/revisiting-a-smaller-rust/

    5. 3

      That’s what it says in big bold letters near the top of the Zig language web page. Zig is a recent experiment in cutting away all this derided complexity which plagues modern programming languages. Closures, function traits, operator overloading - programming is hard enough to begin with, why can’t we at least program in a simple language, without all that crap?

      That’s true, but not in the same way as, say, Go or a certain popular blogger’s secret WIP language.

      Regarding closures, it’s pretty difficult to have closures in a language that’s intended to be as low-level as Zig is. More so when that language aims to not have implicit allocations.

      Regarding operator overloading: not having hidden control flow via macros, operator overloading, exceptions, etc. is a primary goal of Zig, and not so much a general grudge against “complexity”. I quote:

      If Zig code doesn’t look like it’s jumping away to call a function, then it isn’t. This means you can be sure that the following code calls only foo() and then bar(), and this is guaranteed without needing to know the types of anything:

      var a = b + c.d;
      foo();
      bar();
      

      Examples of hidden control flow:

      • D has @property functions, which are methods that you call with what looks like field access, so in the above example, c.d might call a function.
      • C++, D, and Rust have operator overloading, so the + operator might call a function.
      • C++, D, and Go have throw/catch exceptions, so foo() might throw an exception, and prevent bar() from being called. (Of course, even in Zig foo() could deadlock and prevent bar() from being called, but that can happen in any Turing-complete language.) The purpose of this design decision is to improve readability.

      Regarding function traits… well, I’m afraid I have no idea of what those are. A quick Google search didn’t turn up any English I could parse at 10:30 PM.

      1. 3

        What are operators, if not functions with different syntax?

        1. 3

          The point is that you can’t define or redefine operators in Zig, so you don’t have to check what they are doing: it’s written in the manual.
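          In a language with operator overloading, the grandparent’s point is literal; e.g. in Python, + is sugar for a method call (Celsius is an invented toy type):

```python
# `a + b` dispatches to a user-defined method: exactly the "hidden
# control flow" Zig forbids, since arbitrary user code runs whenever
# `+` is used on such a type.

class Celsius:
    def __init__(self, degrees):
        self.degrees = degrees

    def __add__(self, other):
        return Celsius(self.degrees + other.degrees)

a = Celsius(20)
b = Celsius(3)
print((a + b).degrees)       # 23
print(a.__add__(b).degrees)  # 23, the same call spelled out
```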

      2. 1

        Nit: Go does not have exceptions.

    6. 2

      And so I assumed I just didn’t get classes.

      I remember that feeling. I spent a couple of years in college feeling this way.

      At this point of my career, I don’t see the added complexity of classes as a problem.