1. 23
    1. 5

      Wonderful article; I think anyone who wants to be a better software engineer should read it. I was recently considering how constraints affect complexity and developer velocity in programming languages, and my findings resonate a lot with what’s in this article.

      My favorite part:

      > Constraints can break modularity … punch holes in abstractions

      This is the most damaging thing a constraint can do, and it’s often incredibly subtle and hard to notice unless you’re the one writing the code. A good engineer will try (or at least consider) the simplest and most decoupled solution first. But if they then discover a constraint that the simple solution breaks or fails to uphold, they must unfortunately diverge from it to satisfy that constraint.

      It gets even more insidious: the more one works with a certain constraint, the more readily one defaults to the more complex solution that satisfies it. That divergence is harder to spot when just reading the code, because we don’t have the original, simpler design handy to compare against.

      It happens particularly often with infectious constraints like async/await, borrow checking, and often even static typing. Luckily, there are solutions: Go’s goroutines and the JVM’s Project Loom successfully decouple async/await’s constraints from the code, GC (and RC) decouple memory-safety concerns from the code, and interfaces do the same for static typing. These sometimes have a run-time cost, but it’s often worth it to maintain a codebase’s simplicity: upholding developer velocity often matters more than a little run-time speed, especially outside the hot path.
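
      To make the “infectious” part concrete, here’s a minimal Rust sketch (names are made up): the async version forces every caller’s signature to change, while the thread-based version, in the spirit of goroutines or Loom’s virtual threads, keeps concurrency a call-site decision.

      ```rust
      #[allow(dead_code)] // the async pair is shown only to contrast signatures
      async fn fetch_user(id: u32) -> String {
          format!("user-{id}") // stand-in for real I/O
      }

      #[allow(dead_code)]
      async fn render_page(id: u32) -> String {
          // the caller must itself become async (or block on an executor):
          // the constraint has leaked into the signature
          let user = fetch_user(id).await;
          format!("<h1>{user}</h1>")
      }

      fn fetch_user_blocking(id: u32) -> String {
          format!("user-{id}")
      }

      fn render_page_blocking(id: u32) -> String {
          // an ordinary call; nothing about concurrency shows up in the types
          format!("<h1>{}</h1>", fetch_user_blocking(id))
      }

      fn main() {
          // spawn like a goroutine; callers and callees stay uncolored
          let handle = std::thread::spawn(|| render_page_blocking(1));
          println!("{}", handle.join().unwrap());
      }
      ```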

      I would love to hear anyone’s thoughts or experiences in reducing the impact of constraints, or on how to isolate constraints so their resulting complexity doesn’t infect neighboring areas of the codebase.

      Great article!

    2. 5

      Very interesting. I gave a talk with essentially the opposite message :).

      The TL;DW of the talk is: constraints can actually bring about new and better abstractions. Adding constraints can enforce standardization, which in turn enables new abstractions, because they can rely on stronger assumptions. For example: the Rust borrow checker enforces constraints that lead to stronger guarantees, enabling race-free concurrency (modulo unsafe).
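
      A tiny sketch of the kind of assumption this buys you (not from the talk, just illustrative): because the checker forbids sharing `&mut` across threads, the only way to get this past the compiler is through a synchronization type, and that is precisely the race-freedom guarantee.

      ```rust
      use std::sync::Mutex;
      use std::thread;

      fn main() {
          let counter = Mutex::new(0);

          thread::scope(|s| {
              for _ in 0..4 {
                  s.spawn(|| {
                      // Handing each thread a plain `&mut counter` would be
                      // rejected at compile time; the Mutex is what the
                      // borrow checker's constraints push us toward.
                      *counter.lock().unwrap() += 1;
                  });
              }
          }); // the scope guarantees every thread has finished here

          assert_eq!(*counter.lock().unwrap(), 4);
      }
      ```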

      I don’t disagree with the article, so I’ll need to merge these two perspectives.

      1. 4

        I skimmed the talk, and from what I can tell you’re talking about constraints that are “limits on your expressiveness”, while the article is talking about constraints that are “numerical goals of the system”. It’s the same word with two very different meanings, both of which we use in software.

        I.e., the Rust borrow checker enforces constraints, which lead to race-free concurrency, which we can use to satisfy the performance constraints of the program.

        1. 2

          Yes, that’s right. I’m curious, though, whether the line is always that clear, or whether there are fuzzy things happening at the boundary. If you generalize performance constraints into “non-functional constraints”, it’s easy to imagine self-imposed non-functional constraints that serve to reduce complexity.

      2. 3

        Thank you for this talk. As an everyday developer, I subscribe to this viewpoint both personally and professionally. I first learnt it from the podcast episode “Constraints Liberate” with Mark Seemann.

        1. 2

          Thank you, it’s great to hear it resonates :) Hopefully it wasn’t too abstract (pun not intended); it was quite challenging to make the talk practical and concrete.

      3. 2

        Two valid perspectives, to be sure. I think one of the distinguishing factors is how much a constraint helps with inherent complexity and how much it adds unnecessary artificial complexity.

        The borrow checker is a great example of this. It surfaces (and enables us to better handle) the complexity inherent to zero-cost memory safety and zero-cost fearless concurrency. However, its core mechanism (eliminating shared mutability) also adds some artificial complexity, which can be felt when e.g. trying to implement an observer pattern within the borrow checker (hint: you can’t).
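
        To make that concrete, here’s a rough sketch (hypothetical names): a subject holding `Vec<&mut Listener>` while callers keep their own `&mut` handles is a compile error, so the usual escape hatch is `Rc<RefCell<…>>`, which trades the compile-time check for a run-time one. That extra machinery is the artificial complexity I mean.

        ```rust
        use std::cell::RefCell;
        use std::rc::Rc;

        struct Listener {
            events: Vec<String>,
        }

        fn main() {
            // shared ownership + run-time borrow checking, because two live
            // `&mut Listener` borrows (subject's and caller's) won't compile
            let listener = Rc::new(RefCell::new(Listener { events: Vec::new() }));

            // the "subject" keeps its own handles to its observers
            let observers: Vec<Rc<RefCell<Listener>>> = vec![Rc::clone(&listener)];

            for obs in &observers {
                obs.borrow_mut().events.push("event fired".to_string());
            }

            // the caller's original handle still works
            println!("{:?}", listener.borrow().events);
        }
        ```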

        For some domains, the zero-cost aspect is worth the artificial complexity. For other domains, something like Pony works better, and the artificial complexity of the borrow checker’s constraints becomes a net negative.

    3. 3

      I think this is generally true, but sometimes the problem is just that it’s hard to build something abstract. You start out thinking you’re making a paperclip factory. Then it turns out they want sewing needles too. So you take the paperclip factory and change it into something that basically works for both, but is far more complicated than it would have been if you had started from scratch or done a thorough redesign. You can chalk this up to the “constraint” of deadlines, but I think it’s hard to limit it to just that.

      1. 2

        I think that leads to complexity too, but in a different way than physical constraints do. It would be a separate topic in the “taxonomy of complexity” I gesture vaguely at in this post.

    4. 2

      I see two different kinds of constraints, which have opposite effects with respect to complexity:

      1. The kind the article mentions, where you are essentially “raising the bar” on what you have to build, and more complexity is required to solve that harder problem.
      2. Constraints introduced for the sole purpose of making the software simpler or easier to reason about. I usually call these “invariants”. Things like “code never mutates a variable” or “all side-effectful dependencies must be explicitly passed in, or kept in one part of the code”, etc. Such invariants are typically intended to make local reasoning easier, or to support other kinds of reliable mental models. In short, they are tools for managing complexity (see the sketch below).
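
      A minimal sketch of the second kind in Rust (names are illustrative): the invariant “side effects go only through explicitly passed dependencies” costs one parameter, but buys local reasoning and trivial testability.

      ```rust
      use std::io::Write;

      // Invariant: `greet`'s only side effect goes through the writer it is
      // handed, so it can be reasoned about (and tested) in isolation.
      fn greet(mut out: impl Write, name: &str) -> std::io::Result<()> {
          writeln!(out, "hello, {name}")
      }

      fn main() -> std::io::Result<()> {
          // production: the effect goes to stdout
          greet(std::io::stdout(), "world")?;

          // test-style usage: the same code, with the effect captured in memory
          let mut buf: Vec<u8> = Vec::new();
          greet(&mut buf, "test")?;
          assert_eq!(buf, b"hello, test\n");
          Ok(())
      }
      ```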