1. 16

  2. 4

    We hence see essential complexity as “the complexity with which the team will have to be concerned, even in the ideal world”… Note that the “have to” part of this observation is critical — if there is any possible way that the team could produce a system that the users will consider correct without having to be concerned with a given type of complexity then that complexity is not essential.

    I strongly disagree with this and take the opposite stance: there’s a lot of complexity that people think is “accidental” that’s essential. One example is security. Most users will consider an insecure system correct, plenty of times an insecure system flies under the radar long enough to be net positive, and in an ideal world there’d be no attackers. But making things secure is an essential part of complexity. Another is privacy and safety. Any system that can be used for interpersonal abuse will eventually be used for abuse.

    Overall I think accidental and essential complexity are too broad as definitions and we need a finer grain of meaning.

    1. 10

      Tbf I don’t think you’re disagreeing with the essence of the point.

      You’re making the additional, separate, and correct point that what people consider “the spec” is often not a reflection of what “the spec” truly is, or should be, in a real-world production system.

      They’re saying that, given some spec which we assume is correct, then “if there is any possible way that the team could produce a system that [conforms to that spec] without having to be concerned with a given type of complexity then that complexity is not essential.”

      1. 4

        That’s fair.

        1. 4

          I’d actually defend @hwayne’s point in stronger terms (that I’m not sure he’d agree with).

          The very framing of starting from a single, complete spec and following the spec to build an implementation is misleading at best and outright dangerous at worst. Idealized specs are unknowable in the real world. So yes, what the authors say applies to some idealized spec, but it’s unknowable. Who cares?

          The best use of specs is to nail down aspects of systems. The desired behavior is then cobbled together from a combination of multiple rigorous specs, informal prose and even more informal handwaving. And this is fine! As long as everyone understands where the boundaries are between the different categories.
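
          To make “nailing down aspects” concrete, here’s a rough sketch of the kind of boundary I mean (the encode/decode functions and the hypothesis test are made up for illustration): one property of the component is specified rigorously and machine-checked, while everything else stays informal.

          ```python
          # Sketch only: encode/decode stand in for whatever component is being specified.
          import json
          from hypothesis import given, strategies as st

          def encode(record: dict) -> str:
              return json.dumps(record, sort_keys=True)

          def decode(blob: str) -> dict:
              return json.loads(blob)

          # One rigorously specified, machine-checked aspect: round-tripping loses nothing.
          @given(st.dictionaries(st.text(), st.integers()))
          def test_decode_inverts_encode(record):
              assert decode(encode(record)) == record

          # Performance, on-disk layout, migration behaviour, etc. stay in informal
          # prose or handwaving, and that's fine as long as the boundary is explicit.
          ```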

          (I’ve reached this conclusion partly after reading @hwayne’s https://www.hillelwayne.com/post/why-dont-people-use-formal-methods. Corrections to my understanding most appreciated.)

          1. 3

            I agree with your overall point about the messiness of building real systems, and that the idealized spec is usually unknowable, at least in part.

            That said, I think it’s still a useful conceptual tool for thinking about problems, and I think the accidental/essential complexity distinction is extremely useful for analyzing code and design, in exactly the same way that idealized physics is useful for thinking about the real world, even though real-world applications have friction, air resistance, etc.

            1. 5

              One final attempt to articulate the tiny area of disagreement amidst all our overlapping agreements:

              As I see it, the rhetoric around essential and accidental complexity is deeply tied to getting programs “correct”. The evolution I had in my thinking after that post I linked above was in realizing that it’s meaningless to ask if a program is correct. You can only ask if programs correctly satisfy certain properties.

              If programs don’t have a single global property to optimize for, it’s much harder to reason about essential vs accidental complexity. Some aspect of a program could be essential for one property (e.g. memory safety) but accidental for another (e.g. fault tolerance). So the distinction seems a lot less interesting to me in recent months.
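
              A toy sketch of what I mean, with entirely invented names: the retry loop below is essential complexity relative to a fault-tolerance property (“transient failures don’t reach callers”) and purely accidental relative to the functional property that the result equals key.upper().

              ```python
              # Toy sketch; flaky_lookup stands in for any call that can fail transiently.
              import random
              import time

              def flaky_lookup(key: str) -> str:
                  if random.random() < 0.3:  # simulated transient fault
                      raise TimeoutError("transient failure")
                  return key.upper()

              def lookup_with_retry(key: str, attempts: int = 3, delay: float = 0.1) -> str:
                  # Relative to the property "transient faults are absorbed", this loop is
                  # essential. Relative to the property "result == key.upper()", it is pure
                  # accident: a single call would do on a perfect network.
                  for i in range(attempts):
                      try:
                          return flaky_lookup(key)
                      except TimeoutError:
                          if i == attempts - 1:
                              raise
                          time.sleep(delay)
              ```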

              1. 2

                As I see it, the rhetoric around essential and accidental complexity is deeply tied to getting programs “correct”

                Personally, I’ve always heard accidental complexity invoked in the context of what makes something unreadable, or difficult to understand, or bloated. It’s: “why is this thing that should be simple so huge and unwieldy?”

                Based on your about page, that looks very much in line with your areas of research. Regardless of the common use of the rhetoric, that’s the part of it I find value in.

                1. 3

                  That’s fair! I was thinking about correctness because neither OP nor Fred Brooks’s original paper mentions readability.

                  1. 2

                    I would add “Hard to Operate” to the list as well. Basically, things that are built such that they are an operator’s nightmare.

          2. 2

            Not sure how that is a disagreement if security, privacy and safety are part of the user requirements.

            1. 2

              They rarely are. Most companies only care about security after they’ve been breached.

              1. 3

                Absent an explicit spec, “accidental” and “essential” are just very subjective terms, and there’s no easy way to pin down a firm boundary between them. Legitimate differences of opinion are highly likely.

                And, as you may be aware, specs are hard. Like, easy to get wrong. Usually incomplete.

            2. 2

              making things secure is an essential part of complexity

              Wait, what? I have no idea what you mean by this. I’ve definitely seen a lot of highly complex software that had not even Clue 1 about security anything.

              1. 1

                Sorry, I mean it’s a kind of essential complexity.

                1. 1

                  Concrete example: OpenSSL, LibreSSL, S2N. Rank by complexity and security; observe correlation.

                  Another one: Ubuntu, FreeBSD, seL4. Same exercise.

                  These are just examples and don’t necessarily prove a more general point. But I’m hard pressed to think of an example where a more complex system is more secure just by virtue of being more complex. Usually it’s the opposite; we make things more secure by reducing the attack surface to something manageable.

                  I guess you’re just thinking of relatively security-oblivious projects that struggle to “add security” as an afterthought, maybe once the horse has already left the barn so to speak. That’s kind of an uphill battle, but I’ll grant it’s an unfortunately commonplace scenario. I’d argue it generally places the defenders into a reactive position, a whack-a-mole arms race where they’re always at least one step behind the attackers. Sounds familiar, right?

                  Making a system more secure may entail making it more or less complex than it was; it really depends on the specific situation. But adding complexity (“essential” or not) generally makes a system less secure, all else being equal.

                  OK, that horse was probably dead already, but man is it really dead now. Sorry, I got a bit triggered back there.

                  1. 4

                    Lemme try clarifying what I meant: I’m arguing that trying to be secure adds complexity, especially if you have other requirements, too. But it’s also complexity we can’t write off as accidental, even though many users will. So it’s essential complexity, in that we can’t ignore it or externalize it as a requirement, and it makes things more complex.
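
                    A rough sketch of the kind of thing I mean (handler, data, and names are all invented): the difference between the two versions below is complexity added purely by the security requirement, and it’s not something we can wave away as an accident of the implementation.

                    ```python
                    # Invented example: the same record-fetch handler before and after a
                    # security requirement is taken seriously. Nothing here is from a real codebase.
                    RECORDS = {"42": {"owner": "alice", "body": "hello"}}
                    SESSIONS = {"token-abc": "alice"}  # auth token -> user

                    def get_record_insecure(record_id: str) -> dict:
                        return RECORDS[record_id]

                    def get_record(record_id: str, auth_token: str) -> dict:
                        # Everything below, other than the lookup itself, is complexity added
                        # purely by the security requirement.
                        user = SESSIONS.get(auth_token)
                        if user is None:
                            raise PermissionError("not authenticated")
                        if not record_id.isdigit():  # reject malformed ids
                            raise ValueError("bad record id")
                        record = RECORDS.get(record_id)
                        if record is None or record["owner"] != user:
                            raise PermissionError("not authorized")  # don't reveal whether it exists
                        return record
                    ```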

                    1. 1

                      Ah, OK, you’re thinking of complexity of requirements, where I was thinking complexity of implementation? That makes some sense. (I don’t wanna touch the “essential”-vs-“accidental” distinction, I think it’s a reddish herring, a vintage herring that has outlasted its sell-by date, kinda like I said over there.)

                    2. 1

                      But I’m hard pressed to think of an example where a more complex system is more secure just by virtue of being more complex.

                      Windows NT is quite a bit more secure than DOS.

                2. 1

                  Is it just that there needs to be a finer-grained meaning, or is it that, in the real world, accidental v. essential complexity doesn’t answer the question of “Is this complexity avoidable?” Avoidable v. unavoidable complexity is a more practical categorization but harder to make, as the paper notes that some forms of accidental complexity are unavoidable (while implying that not all are).

                  That’s not to say that a finer-grained categorization of essential and accidental complexity wouldn’t help to answer that question, but I’m dubious that the problems we see can be solved solely by avoiding complexity when things like poor user understanding of what they want software to do come into play: the clarity of problem specification that this paper assumes deteriorates in real-world conditions.

                  1. 1

                    I feel like the finer grained meaning from the paper is an elegant way of framing what a lot of people in the industry “feel” right now: FP simply allows devs to discern the difference between accidental and essential complexity, and presents a tradeoff worth taking a risk on. I would love to see a similar post on No Silver Bullet.

                3. 1

                  A fantastic, life changing paper, and a great summary.