1. 29

  2. 12

    I think I’ve read this paper a half dozen times now after seeing someone or other wax lyrical about it online. And I just don’t get why people like it so much. Which means either I’m too much of an insider and this is old news, or I don’t appreciate what I don’t know.

    Part of the problem is that it seems to overstate its contributions. It isn’t really “identifying” causes of complexity. Fred Brooks pointed out essential vs accidental complexity back in 1986 in “No Silver Bullet”, and there’s been a steady stream of articulation ever since. To any Haskeller the point that complexity comes from state is old news. Relational algebra is ancient. At best OP is laying things out in a new way.


    This time I figured I’d speed up my rereading by chasing down past threads instead. And I discovered a decent summary. This was helpful, because it helped me to focus on the ‘causes of complexity’ portion of OP.

    Causes of complexity according to OP:

    • State (difficulty of enumerating states)
    • Control (difficulty of choosing an ordering)
    • Concurrency
    • Volume
    • Duplicated/dead code, unnecessary abstraction

    Compare the list I made a couple of years ago:

    • Compatibility. Forcing ourselves to continue supporting bad ideas.
    • Vestigial features. Continuing to support stuff long after it’s been obsoleted by circumstances. Simply because it’s hard to detect obsolescence.
    • People churn. Losing institutional fingerspitzengefühl about the most elegant place to make a given change. Or knowing when some kludge has become obsolete.

    Comparing these two lists, it looks like there’s a tension between the top-down and bottom-up views of software management. In the bottom-up view people seem to think about software like physics, trying to gain insight about a system by studying the atoms and forces between atoms. You tend to divide complexity into state and order, essential and accidental. Reductionism is the occupational hazard.

    In my top-down view I tend to focus on the life cycle of software. The fact that software gets more complex over time, in a very tangible way. If we could avoid monotonically adding complexity over time, life would be much better. Regardless of how many zeroes the state count has. In this worldview, I tend to focus on the stream of changes flowing into a codebase over time, alongside the stream of changes happening to its environment. This view naturally leads me to categorize complexity based on its source. Is it coming from new feature requirements, or changes to the operating environment? How can I keep my raft slender as I skim the phase transition boundary between streams?

    The blind spot of the bottom-up view is that it tends to end up at unrealistic idealizations (spherical cows as @minimax put it in this thread). The blind spot of the top-down view is that there’s a tendency to under-estimate the complexity of even small systems. Blub. The meme of the dog saying “this is fine” while surrounded by flames.

    It seems worth keeping both sides in mind. In my experience the top-down perspective doesn’t get articulated as often, and remains under-appreciated.

    1. 5

      Here’s my take on it: https://news.ycombinator.com/item?id=15776629

      I also don’t think it’s a great paper. It’s long on ideas but short on evidence, experience, and examples. I don’t think you’re missing anything.

      1. 4

        I have a similar take to yours. I think it’s one of those papers that is easy to get excited about: everyone can agree that complexity is bad and all that. But I have not seen any successful application of the ideas in there. The authors haven’t even successfully implemented the ideas beyond a little prototype, so we don’t have any idea whether what they say actually pans out.

        And to toss my unscientific hat into the ring: IME the biggest source of complexity is not the programming model but people simply not being disciplined about how they implement things. For example, I’m currently working in a code base where the same thing is implemented 3 times, each differently, for no obvious reason. On top of that, the same thing is sometimes an id; sometimes the id is a string, sometimes an int, and sometimes the string is a URL, and it’s never clear when or why. This paper is not going to help with that.
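
        To make that id confusion concrete, here’s a hedged sketch (hypothetical names, no real codebase implied) of the kind of discipline that would help: a single wrapper type, so conversions between the int, string, and URL forms are at least explicit.

```python
from dataclasses import dataclass

# The undisciplined status quo described above: three ad-hoc
# representations of "the same" identifier, freely mixed up.
user_id_as_int = 42
user_id_as_str = "42"
user_id_as_url = "https://example.com/users/42"  # hypothetical URL shape

# One remedy: a single wrapper type, so every call site agrees on
# what an id *is* and every conversion is explicit and named.
@dataclass(frozen=True)
class UserId:
    value: int

    @classmethod
    def from_string(cls, s: str) -> "UserId":
        return cls(int(s))

    def to_url(self, base: str) -> str:
        return f"{base}/users/{self.value}"

uid = UserId.from_string(user_id_as_str)
assert uid == UserId(user_id_as_int)
assert uid.to_url("https://example.com") == user_id_as_url
```

        None of this needs the paper’s machinery; it’s plain discipline, which is the point.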

        1. 2

          If what you say is true, then the success of LAMP stacks and their associated ecosystems, for newcomers and veterans alike, might rank high on “evidence, experience, and examples.” That architecture worked for all kinds of situations, even with time and money limitations. Except that the specific implementation introduces lots of the complexities they’d want people to avoid. So, maybe the answer is instead a Haskell equivalent of a LAMP-style stack, or something like that, fitting their complexity-reduction ideas.

          Although the others shot it down as unrealistic, your characterization seems to open doors for ways to prove or refute their ideas with mainstream stuff done in a new way. Maybe that’s what they themselves should’ve done, or what they or others could do later.

          1. 4

            Yes, so depending on the scope of their claims, it’s either trivial and doesn’t acknowledge the state of the art, or it’s making claims without evidence.

            Appreciating LAMP is perhaps nontrivial. Google services traditionally used “NoSQL” for reasons of scaling, but the relatively recent development of Spanner makes their architecture look more like LAMP.

            But either way I don’t think that LAMP can be “proven” or “refuted” using their methodology. It’s too far removed from practice.

        2. 4

          “In my top-down view I tend to focus on the life cycle of software. The fact that software gets more complex over time, in a very tangible way. If we could avoid monotonically adding complexity over time, life would be much better.”

          Thanks for the interesting commentary. Some parts definitely resonated, particularly about the half-features and difficulty of knowing how and where to make the right change.

          This is only the germ of an idea, but it is perhaps novel and perhaps there is an approach by analogy with forest management. Periodic and sufficiently frequent fires keep the brush under control but don’t get so big that they threaten the major trees or cause other problems.

          Could there be a way of developing software where we don’t look at what is there and try to simplify/remove/refactor, but instead periodically open up an empty new repo and move into it the stuff we want from our working system in order to build a replacement? The big ‘trees’ of well understood core functionality are most easily moved and survive the burn, but the old crufty coupling doesn’t make the cut.

          Some gating on what would be moved would be needed. The general idea, though, is that only sufficiently-well-understood code would make it across to the new repo. And perhaps sufficiently reliable/congealed black boxes. It would interplay a lot with the particular language’s module/package and testing systems.

          The cost would be periodic re-development of some features (with associated time costs and instability). The benefit would be the loss of those code areas which accrete complexity.

          1. 2

            Yes, definitely worth trying. The caveat is that it may be easy to fall into the trap of a full rewrite. There’s a lot of wisdom encoded in the dictum to avoid rewrites. So the question becomes: how would you make sure you don’t leave behind the most time consuming bugfixes you had to make in production on the old repo? Those one-liners that took a week to write?

          2. 3

            This paper was written in 2006, two years before Applicatives were introduced. The Haskell community’s understanding of how best to structure programs has been refined a lot in that time, and I think you underestimate the insights of this paper even if it is only a refinement of Brooks’s ideas from 40 years ago.

            1. 1

              Thanks, I hadn’t seen that paper. What made you cite it in particular?

              1. 1

                It’s where Applicatives were introduced, as far as I know.

                1. 7

                  Can you describe the connection you’re making between Applicatives and this paper?

                  1. 1

                    I got the impression that akkartik was saying that OOTP hadn’t added much new. My claim is that the fact that Applicatives were only introduced 10 years ago shows that the bar for novelty is actually quite low.

            2. 1

              “This was helpful, because it helped me to focus on the ‘causes of complexity’ portion of OP.”

              That’s the part I read in detail. I skimmed the second half, saving it for later since it was big. The first half I liked because it presented all those concepts you listed in one location in an approachable way. It seems like I learned that stuff in pieces from different sub-fields, paradigms, levels of formality, etc. Seeing it all in one paper, published ahead of some of these things going into mainstream languages, was impressive. If nothing else, I thought it might be useful for introducing new programmers to these fundamental concepts. Understanding it doesn’t require a functional programming or math background.

              Ok, so that’s their requirements. Minimax mentions things like business factors. You mention changes with their motivations. I’ll add social factors, which include things that are arbitrary and random. I don’t think these necessarily refute the idea of where the technical complexity comes from. It might refute their solutions for use in the real world, such as business. However, each piece of the first half is already getting adoption, in better forms, in the business world on a small scale. There are also always people doing their own thing in greenfield projects, trying unconventional methods. So, there are at least two ways their thinking might be useful. From there, I’d have to look at the specifics, which I haven’t gotten to.

              I do thank you for the Reddit search, given those are about the only discussions I’m seeing on this outside here. dylund’s comments on Forth were more interesting than this work. Seemed like they were overselling the positives while downplaying the negatives, too, though.

            3. 11

              A good read, for sure. And some good ideas. But the authors only focus on technical factors, as though software were developed exclusively by programmers for their own purposes. They don’t address, for example, Conway’s Law or any other sources of complexity which don’t originate in the development process itself. They talk about formalizing requirements, but not where the requirements come from or how they got to be the way they are, or how they change over the course of development.

              It’s certainly easier to frame the issue as being about technical problems and technical solutions. And there’s certainly plenty to talk about in that frame. But technological determinism by itself usually doesn’t have much predictive or explanatory power, which is why these kind of accounts have largely been abandoned by professional historians and sociologists who study technology. Even amateur software historians (who are doing most of the work!) typically point to business, marketing, or economic factors as being decisive influences in the development of the technologies they document.

              Take your favorite “radically simple” system: say APL, or Forth, or Oberon, or Smalltalk or whatever. Step away from the shiny stuff and look at the people and organizations involved: who actually developed it, who paid for it, who used it and what they used it for. Then do the same for whatever “typically complex” web-app or C++ game or Free Software OS or government payroll system or whatever. The differences may be instructive.

              1. 7

                I’ll start. The big difference between simple systems and typically complex web apps is scale. Small codebases can do more with fewer team members. They become more likely to have better programmers. They suffer less from knowledge evaporation due to people leaving. They hire less, and so they tend to have fewer layers of organization. This keeps Conway’s Law from kicking in. The extrinsic motivational factors of money, raises and promotion don’t swamp intrinsic motivation as much.

                I’ve gained a lot of appreciation over the past decade for the difference between technical and social problems. But in this instance the best solution for the social problem seems to be a technical solution: do more with less (people, code, concerns, etc., etc.). It doesn’t matter what weird thing you have to do to keep things cosy. Maybe you decide to type a lot of parentheses. Or you stop naming your variables (Forth). Or you give up text files (Smalltalk).

                Once you have a simple system, the challenge becomes to keep the scale small over time, and in spite of adoption. I think[1] that’s where Lisp ‘failed’; of all your examples Lisp is the only one to have tasted a reasonable amount of adoption (for a short period). It reacted to it by growing a big tent. People tend to blame the fragmentation of Lisp dialects. I think that was fine. What killed Lisp was the later attempt to unify all the dialects into a single standard/community. To allow all existing Lisp programs to interoperate with each other, without modification. Lisp is super easy to modify; why would you fear asking people to modify Lisp code?

                Perhaps the original mistake was the name Lisp itself. Different Lisp dialects can differ as greatly as imperative languages. Why use a common name when all you share is s-expressions?

                A certain amount of curmudgeonly misanthropism in a community can be a very good thing.

                [1] I’m just a young whippersnapper coming in after the fact with my half-assed pontificating, etc., etc. I don’t mean to side-track a discussion of complexity with Yet Another Flamewar About Lisp. (Though I’d appreciate any history lessons!)

                1. 5

                  We now examine a simple example FRP system. […] To keep things simple, this system operates under some restrictions:

                  1. Sales only — no rentals / lettings
                  2. People only have one home, and the owners reside at the property they are selling
                  3. Rooms are perfectly rectangular
                  4. Offer acceptance is binding (ie an accepted offer constitutes a sale)

                  This kind of toy example makes their observations on software complexity in general harder to take seriously. It reminds me of the hoary genre of “spherical cow” jokes. All of those simplifying assumptions (and no doubt plenty more unstated ones!) make their example system more or less completely useless to an actual real estate business.
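
                  To see how small the toy really is, here is a hedged sketch (my own, not the paper’s actual FRP code) of the example domain under its stated restrictions; restriction 3 is exactly what makes floor area a one-liner.

```python
from dataclasses import dataclass

# Hypothetical sketch of the paper's toy domain under its own
# restrictions: sales only, one home per owner, rectangular rooms.
@dataclass(frozen=True)
class Room:
    width_m: float
    depth_m: float

    def area(self) -> float:
        # Restriction 3 ("rooms are perfectly rectangular") is what
        # makes floor area a one-liner; real floor plans are not.
        return self.width_m * self.depth_m

@dataclass(frozen=True)
class Property:
    address: str
    rooms: tuple[Room, ...]

    def floor_area(self) -> float:
        return sum(r.area() for r in self.rooms)

p = Property("1 Example St", (Room(4.0, 3.0), Room(5.0, 2.5)))
assert p.floor_area() == 24.5
```

                  Drop any one restriction, say non-rectangular rooms or rentals alongside sales, and even this sketch stops being trivial, which is the spherical-cow worry in a nutshell.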

                  1. 1

                    I agree. Especially on Nos. 2–4, since they represent situations that either don’t map cleanly to a neat model or just ignore the corner cases that real systems can’t ignore. The models always need to be tested with the ugly requirements on top of the easy ones.

                2. 5

                  This is one of the best papers I’ve read in a long time. I’m gonna probably have to think on it for quite a while before I can respond to or act on it. Definitely forwarding it to some people. :)

                  1. 3

                    An alternative approach that I think I like more, at least without having tried either in anger, is lots of DSLs, like what Kay is doing at VPRI or whatever that place is called. Rather than have a generic language, have a language that can easily make any other language, and solve your problem in that. This paper forces a particular structure on how one writes programs, which could be problematic, and which I would guess falls apart in high-performance situations.

                    1. 3

                      That’s one of my default options. I’ve stashed away verified metaprogramming techniques in case I go with it. Inspirations were META II, Lisp’s (esp Racket), Kay, Pieter Hintjens’s iMatix work that generates C, and sklogic’s tool. So, the DSL-on-powerful-core approach has proven out many times. It can also accommodate their method, since DSLs usually have less power.
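
                      As a toy illustration of the DSL-on-powerful-core idea (entirely hypothetical, not drawn from VPRI, iMatix, or any tool named above): a few lines of host-language core can interpret a purpose-built little language.

```python
import operator

# A tiny arithmetic DSL interpreted by a short host-language core.
# Programs are nested tuples: (op, arg, arg, ...); literals are numbers.
OPS = {"+": operator.add, "-": operator.sub,
       "*": operator.mul, "/": operator.truediv}

def evaluate(expr):
    if isinstance(expr, (int, float)):
        return expr                       # literal value
    op, *args = expr                      # operator and operand expressions
    vals = [evaluate(a) for a in args]    # evaluate operands recursively
    result = vals[0]
    for v in vals[1:]:                    # left-fold the operator over operands
        result = OPS[op](result, v)
    return result

# The DSL program 1 + (2 * 3):
assert evaluate(("+", 1, ("*", 2, 3))) == 7
```

                      The core stays generic and tiny; each new problem gets its own vocabulary on top, which is roughly the opposite bet from forcing one structure on all programs.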

                  2. 3

                    The title of the paper is “Out of the Tar Pit” not “Common Causes of Complexity”.

                    1. 2

                      Thanks - fixed.