1. 6

  2. 6

    When more easily readable code starts to increase total code size, these two ideas are at odds. That dichotomy is what brought all of this to the front of my mind. I have become increasingly hesitant to believe that a refactoring done just for the sake of slight readability, at the cost of increased lines of code, is a good thing.

    The way I have been putting it for years now is…

    The best criterion by which one can judge Good Design is how little you need to read and understand before you can make a beneficial change to the code base.

    In years when, by today’s standards, code bases were Mickey Mouse in size, other criteria may have dominated. But in this era of code bases ‘way larger than any one human can hope to understand’, my criterion dominates all others.

    1. 2

      Do you think time to comprehension is a function of the person and the codebase together? For example, there are some cases where I’ve understood something about a bit of code a lot faster than someone else, and vice versa. In cases like these, I think neither the reader of the code nor the code itself can independently explain “time to comprehension.”

      1. 3

        Do you think time to comprehension is a function of the person and the codebase together?

        Certainly there are factors like familiarity with the general design and tools which will affect it.

        But also I have noted a “That’s the way I think” factor.

        I find reading “man bash” hurts my head; so many of the choices are “Not the way I, personally, think.” On the other hand, most of the choices made by Matz, the Ruby guy, are the way I personally think, so I find the Ruby standard libraries a breeze to read… (Sadly, the .c files are a bit of a pain.)

        That said, follow-on principles emerge from my principle irrespective of “the way you think”, e.g. Connascent Coupling is Bad. Very Bad.

        Lots of globally accessible state is Very Bad; it makes it very, very hard to reason about causality.
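
        A minimal sketch of the point above, in Ruby (all names here are hypothetical, chosen just for illustration): with a global variable, a caller must read every file that might mutate it before understanding one result; with an explicit keyword argument, everything that affects the result is visible at the call site.

        ```ruby
        # Hypothetical example: a global discount rate, mutable from anywhere.
        # Understanding price_with_global requires knowing the whole program's
        # history of writes to $discount_rate -- causality is non-local.
        $discount_rate = 0.0

        def price_with_global(base)
          base * (1 - $discount_rate)
        end

        # Passing the rate explicitly keeps the causal chain local:
        # the call site alone tells you what the result depends on.
        def price_explicit(base, discount_rate:)
          base * (1 - discount_rate)
        end

        $discount_rate = 0.25
        puts price_with_global(100.0)                   # depends on hidden state set elsewhere
        puts price_explicit(100.0, discount_rate: 0.25) # self-explanatory call
        ```

        The keyword argument also trades positional coupling for name coupling, which is the weaker (and so preferable) form of connascence.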

    2. 1

      I know that a few years ago, lines of code was still the best predictor we had of post-release defect rate. Has this changed, or is it still the case that all the other complexity metrics are proportional to lines of code?

      1. 1

        Bugs go up with lines of code. Bernstein was another source on that. I don’t think it’s been refuted.