1. 47
  1. 12

    I wholeheartedly agree with this philosophy. I don’t think of code as an artifact to be preserved, and the idea of ‘lines spent’ really resonates with me.

    However, I find that, paradoxically, this is a big reason why codebases tend towards unmaintainability as time goes on. Good programmers will try to write code that is easy to change and delete, while bad programmers will come up with schemes that are hard to either change or remove. So in the long run, the solid and heavy bits are all that’s left. Especially since there are perverse incentives at play: programmers who ‘own’ this type of code gain job security.

    Unless of course there is some kind of counter force acting on the codebase: for example, the need to keep a system within some performance bounds, or LOC count (I wish!).

    1. 5

      The original David Parnas paper on extensible code is titled “Designing Software for Ease of Extension and Contraction.”

      Extension means adding cases, contraction means deleting them.

      The contraction bit was always there.

      Extension and contraction go hand in hand; they’re both consequences of well designed modularity.

      I strongly recommend reading that Parnas paper, as well as the one on modularity, if by some bad luck you haven’t yet.

      1. 1

        Thank you for the paper recommendation. The modularity paper seems to be referenced far more often than the one you bring up.

        1. 1

          I like the paper a lot. A key insight is to specify the “minimal requirement” and build up from there. If every software project had two requirement sets, one being “minimal” and the other being “all plugins installed, fully bloated state”, it would be a lot easier to keep a modular architecture. The product designer, rather than the programmer, should be responsible for specifying how both requirement sets can be met at the same time. The solution to the modularity problem might lie not in the hands of the programmer, but on the product design side.

        2. 3

          This article reminds me of something I was thinking about the other day. The basic operations of programming are:

          1. Create new code.
          2. Refactor existing code.
          3. Remove unused code.

          Out of these, refactoring is by far the most difficult. Is there an approach that leverages this observation to simplify development? Instead of refactoring existing deeper functions, prefer to create new functions and point the higher-level functions at them?

          I wasn’t convinced there was any useful insight to be had, and this article is a bit too in the weeds, but there might be something there.
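
          The “create new and repoint” idea can be sketched in Java (all names here are hypothetical, purely for illustration): instead of refactoring a deep function in place, you add a sibling implementation, point the higher-level caller at it, and later delete the old path once nothing references it.

```java
// Hypothetical sketch: rather than refactoring computePrice in place,
// add computePriceV2 (operation 1) and repoint the caller; the old
// function becomes unused and can be deleted later (operation 3).
public class Pricing {
    // existing deep function, left untouched
    static int computePrice(int base) {
        return base + 10; // old flat fee
    }

    // new function added alongside the old one
    static int computePriceV2(int base, int feePercent) {
        return base + base * feePercent / 100;
    }

    // the higher-level function now points at the new code
    static int checkout(int base) {
        return computePriceV2(base, 10);
    }

    public static void main(String[] args) {
        System.out.println(checkout(200)); // prints 220
    }
}
```

          The appeal is that each step is a pure addition or a pure deletion; the risky in-place rewrite never happens.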

          1. 6

            I’m working on an old, very OOP codebase that never stopped growing. I feel that most of the time I’m “defactoring” rather than refactoring. Refactoring is what I do to understand clunky code, but this codebase is generally well written, and it’s simple business logic anyway. However, it’s very hard to understand what can be changed or removed, and how to do it.

            In my cycle, “refactor existing code” always comes first. New code should only get a bit of cleanup, together with the newly unused code. You can’t effectively refactor new code, because you don’t yet know what you’re refactoring it for.

            1. 2

              What do you mean by “defactoring”?

              1. 7

                I can’t speak for theblacklounge, but I interpreted that as “removing unneeded abstractions.”

                One observation I had the other day is that our industry has a cognitive bias about abstractions. If you ask a developer to solve a problem on a whiteboard, they’ll decompose it into a series of boxes and arrows, and when coding it, continue to break down pieces into smaller abstractions. When a new developer inherits this code, the first thing they’ll want to do is add some capability that requires plumbing across many abstractions, and they’ll become frustrated by layers that are pure maintenance cost. But if you told the initial author that breaking the problem down into small parts is a bad thing to do, they’d tell you that grouping everything together is terrible architecture. How we judge architecture changes depending on whether it’s ours or somebody else’s.

                The biggest challenge I feel I’m facing as a developer is to first predict the future, then build abstractions that seem like they’ll be useful in that hypothetical future; my predictions aren’t always right. Future readers of my code will often see these abstractions as pointless, even when tasked with implementing something I predicted, because they don’t share my mental model. Having none of them produces code with a short lifetime; having too many produces something unmaintainable. So the developer is always guessing how confident to be in their own predictions when deciding which abstractions to include.

          2. 1

            Could someone help me understand the “write boilerplate” section? I don’t really understand what they’re arguing for there; would someone fancy explaining it to me in different words and with code snippets? Or is it just a preemptive strike against religious adherence to the next section’s premise (i.e. “don’t write boilerplate”)?

            1. 7

              The case study from https://matklad.github.io/2020/08/15/concrete-abstraction.html is a great example of this.

              TL;DR: in rust-analyzer, we need to convert our internal data structures to LSP wire format, and that’s a lot of conversion. Originally, I tried to DRY that code up by introducing a generic “convertible” abstraction. That was a mistake: replacing it with plain, somewhat repetitive boilerplate code that does the conversions manually in the simplest possible way reduced complexity a lot (and actually made the code shorter).
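
              A rough Java analogue of that case study (types and names invented for illustration): the “simplest possible way” is a pair of hand-written conversion functions, with no generic Converter interface sitting between the internal and wire types.

```java
// Hypothetical analogue: plain, slightly repetitive conversion
// functions instead of a generic "convertible" abstraction. Each
// one is trivial to read, change, or delete on its own.
public class Conversions {
    record Position(int line, int col) {}          // internal representation
    record LspPosition(int line, int character) {} // wire representation

    // Just write the conversion directly; no Converter<A, B> interface.
    static LspPosition toLsp(Position p) {
        return new LspPosition(p.line(), p.col());
    }

    static Position fromLsp(LspPosition p) {
        return new Position(p.line(), p.character());
    }

    public static void main(String[] args) {
        LspPosition wire = toLsp(new Position(3, 7));
        System.out.println(wire.character()); // prints 7
    }
}
```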

              1. 1

                I like your article and would mostly agree.

                Small point: I think Collection ends up being pretty useful for objects that somehow “accept” a series of things. Perhaps they maintain a set of objects, or they act on each object coming in. Then by defining a method as (Java syntax):

                <T> void accept(Collection<T> collection) {
                    for (T t : collection) {
                        process(t); // whatever this object does with each element
                    }
                }
                I don’t have to decide ahead of time what my caller is going to use. It’s true that more often than not only a concrete type comes into the method, but I don’t always know what that will be prior to noodling on things for a while. I might use a list or a set (or, less frequently, a queue). Having the habit of defining the method as Collection up front saves some time.

                1. 1

                  This doesn’t need the Collection abstraction; only Iterable/Iterator is required here. In Rust, that would be fn accept(items: impl IntoIterator<Item = T>).
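
                  In Java terms that means widening the parameter to Iterable, which any List, Set, or Queue already implements. A minimal sketch (the per-element work is a stand-in):

```java
import java.util.List;
import java.util.Set;

public class Accepter {
    static int seen; // counts elements, standing in for real work

    // Iterable is all the for-each loop needs; callers can pass
    // a List, a Set, a Queue, or anything else iterable.
    static <T> void accept(Iterable<T> items) {
        for (T t : items) {
            seen++;
        }
    }

    public static void main(String[] args) {
        accept(List.of("a", "b"));
        accept(Set.of(1, 2, 3));
        System.out.println(seen); // prints 5
    }
}
```

                  Rust’s impl IntoIterator<Item = T> plays the same role: the signature promises only that the function will iterate, nothing more.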

                  1. 1

                    Ah, of course you’re right.

                    I forgot, because I don’t see Iterable much explicitly in Java, and it’s been a year since I wrote Rust.

                2. 1

                  Not going for an abstraction often allows for a more specific interface. A monad in Haskell is a thing with >>=, which doesn’t tell you much. Languages like Rust and OCaml can’t express a general monad, but they still have concrete monads. The >>= is called and_then for futures and flat_map for lists. Those names are more specific than >>= and easier to understand. The >>= itself is only required if you want to write code that is generic over the type of monad, which happens rarely.

                  This is actually a good example, because in Haskell you do this all the time because monads are so pervasive. I think that’s true because of a few differences:

                  • Haskell has higher-kinded types, so you can actually parameterize over the type of a monad (which has to be a type constructor). It’s kinda hacky in Rust.
                  • Monads are very important in Haskell because they’re one of the core abstractions: you use them for state, IO, error handling, and a bunch of other things.
                  • The do-syntax lets you write monadic code in a vaguely imperative notation, which is useful when writing longer blocks.

                  Rust, on the other hand, has no HKTs, is an impure language so doesn’t need those abstractions (aside from error-handling, which it uses ? for), and doesn’t have do-syntax.

                  This isn’t to say that Rust would be better off with >>= or a Monad trait; I don’t think it would. But it goes to show that an abstraction that’s incredibly useful in one language can be not worth the bother in another.
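
                  Java illustrates the same trade-off: it has no Monad interface, but it ships several concrete monads whose bind operations carry specific names, such as Optional.flatMap, Stream.flatMap, and CompletableFuture.thenCompose. A small sketch:

```java
import java.util.List;
import java.util.Optional;

public class ConcreteBinds {
    public static void main(String[] args) {
        // >>= for Optional is spelled flatMap
        Optional<Integer> len =
                Optional.of("abc").flatMap(s -> Optional.of(s.length()));
        System.out.println(len.get()); // prints 3

        // >>= for lists (via Stream) is also spelled flatMap
        List<Integer> doubled = List.of(1, 2).stream()
                .flatMap(x -> List.of(x, x).stream())
                .toList();
        System.out.println(doubled); // prints [1, 1, 2, 2]
    }
}
```

                  Nobody writing this code needs, or misses, code that is generic over which monad it runs in.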

              2. 1

                Once you make something a shared API, you make it harder to change.

              This resonates with me, and I think it is one of the reasons the ‘rule of 3’ is so often correct.