1. 13

I see a lot of arguments that “you don’t need comments if your code is self-documenting.” I keep pushing for examples, but I only get toy examples. I’m looking for real world examples, where the code is part of an established, public codebase that is used by people to do things. This is important because then it’s had to obey several different business concerns, so we see how the idea holds up under pressure.

I haven’t been able to find any of these. Do any of you have codebases you’d recommend? Thanks!


  2. 10

    I’m not sure most people really think that “you don’t need comments if your code is self-documenting”, but rather think that “you don’t need many comments if your code is self-documenting”. There is a subtle – yet important – difference!

    So if you’re looking for self-documenting code without any comments at all, then yeah, you might not find it.

    I’m firmly in the “self-documenting” camp, but I often add a single-line comment above a function to describe what it does in plain English. I also prefer to group things with comments, especially in slightly longer functions.

    But stuff like f = getFoo() // get a foo object from the database? Yeah nah.

    1. 3

      Indeed, excessively self-documenting functions lead to 27-word function names. I do particularly like the kind of comments that show up when I hover over a function, however.

      1. 8

        Names are about “What”; comments are about “Why”.

        1. 3

          This one. You can encode logic, but not motivation.

    2. 5

      Hmm. I think it’s the other way round that needs defending.

      I have bashed my head literally daily against thousands of comments that are…

      • Content free - The comment says less than the function signature, less precisely, less concisely and more ambiguously.
      • Inaccurate - I usually set my editor colouring to minimize the visual impact of comments. I debug code, not comments.

      Sure, dealing with a code base like Bruce Perens’ electric fence https://github.com/CheggEng/electric-fence/blob/master/efence.c is a pleasure.

      But it is a complete rarity.

      It’s a jewel in a vast pile of commenting mush.

      But when it comes down to it, I’m dead certain from doing this stuff every day that Rusty Russell is right….


      Afterthought: https://github.com/CheggEng/electric-fence/blob/master/efence.c#L83

      Mode		mode;

      So we can see mode is a struct field of type Mode.

      What are the rules of assignment for that field? Does it behave as a struct? An array (which decays to a pointer if passed as a parameter)? A pointer (hence a shallow copy)?

      I could add a comment that says “Mode is an enum”, or… I could delete the typedef (and change “enum _Mode” to “enum Mode”), and then whenever I use the type Mode I have to write “enum Mode”, and it will be obvious.
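      A minimal sketch of the difference, with invented names (this is not efence’s actual code):

      ```c
      #include <assert.h>

      /* With the typedef, the field declaration hides what Mode is: */
      typedef enum _Mode { MODE_OFF, MODE_ON } Mode;

      struct Slot {
          Mode mode;           /* struct? array? pointer? You have to go look. */
      };

      /* Without the typedef, every usage site announces itself: */
      enum Color { COLOR_RED, COLOR_BLUE };

      struct Pixel {
          enum Color color;    /* obviously an enum: plain value assignment */
      };

      int main(void) {
          struct Slot a = { MODE_ON }, b;
          b = a;                          /* enum fields copy by value */
          assert(b.mode == MODE_ON);

          struct Pixel p = { COLOR_BLUE }, q;
          q = p;
          assert(q.color == COLOR_BLUE);
          return 0;
      }
      ```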

      1. 3

        Inaccurate - I usually set my editor colouring to minimize the visual impact of comments. I debug code, not comments.

        I do this as well; it feels to me like in most environments the value of comments approaches zero over time.


        Shocked I had not seen this before, absolute gold.

        1. 1

          I’m not sure anyone has ever denied the existence of bad comments, though I have seen people argue that:

          foo++; // increment foo

          is a strawman.

          How bad can comments get? I have run across legacy classes where multiple methods’ Javadocs were literally copied from another class, and did not even use the right class name.

           /** Create a new Foo */
           public Bar() {}

          Many other methods used Javadocs with the wrong number, names or types of arguments.

          The real discussion, as I see it, is “to what extent is self-documenting code possible or desirable, and what are examples of it?” A mushier question is “do prevailing norms over- or under-emphasize comments?”, but that’s rather subjective.

          P.S. Thanks for the link to Rusty Russell’s manifesto. I read it a few years ago, and have never remembered who wrote it or how to find it.

          1. 2

            As I said, Bruce Perens’ efence library is about the best-commented code I have seen… but I didn’t have to look far for a content-free, waste-of-attention comment. https://github.com/CheggEng/electric-fence/blob/master/efence.c#L194

        2. 5

          People who “just ignore comments because they get outdated” never seem to see that they’re the reason comments get outdated.

          I try to comment the shit out of my code, because otherwise how will I be able to read it back after 12 months of bitrot and working on other things?

          1. 4

            how will I be able to read it back after 12 months of bitrot and working on other things?


            • Using meaningful variable names and function names. (Yup, suffers from the same problem as comments, in that what the function does may change. But it tends to slap you in the face every time you look at it.)

            • Using short simple functions whose intent and functionality is simple and obvious. (Use Humble Function / Humble Class pattern at the higher levels)

            • Use pre-condition, post-condition and invariant (especially class invariant) asserts. Think of them as “executable comments”.

            • Use unit testing where each test case sets up and checks and tears down a single behaviour. Think of it as executable documentation.

            i.e. do good design.
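            To make the assert bullet concrete, here is a hedged sketch with invented names — pre- and post-conditions as “executable comments”, plus a unit test doubling as executable documentation:

            ```c
            #include <assert.h>
            #include <stddef.h>

            /* Pop the top element. Precondition: the stack is non-empty. */
            static int stack_pop(const int *items, size_t *len) {
                assert(items != NULL && *len > 0);   /* precondition */
                size_t old_len = *len;
                int top = items[*len - 1];
                (*len)--;
                assert(*len == old_len - 1);         /* postcondition */
                return top;
            }

            int main(void) {
                int items[] = { 1, 2, 3 };
                size_t len = 3;
                assert(stack_pop(items, &len) == 3); /* executable documentation */
                assert(len == 2);
                return 0;
            }
            ```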

            1. 5

              This is good advice, and it works in a lot of cases, but there are definitely cases where it just totally breaks down. The most common such case, in my experience, is in code whose goal is to be very fast. It is very often the case that better performance comes at the cost of readable code. (In some circumstances, if you’re especially clever, you can write code that is both fast and readable. But I personally fail regularly at this, despite my efforts.) In cases like these, I compensate by writing lots of comments explaining the technique.

              1. 2

                The most common such case, in my experience, is in code whose goal is to be very fast.

                This story went by recently… https://lobste.rs/s/drl28y/john_carmack_on_parallel I wish I could upvote it twice….

                I note Walter Bright the D language guy retweeted it as well.

                It’s a discipline I want to get better at.

                Because the hard rule under fast code is “no code is faster than no code”…. so odds on if it’s more complicated… it’s slower.

                You really need to prove it is worth the complexity.

                I have always loved this tale…

                From http://citeseerx.ist.psu.edu/viewdoc/download?doi=

                2 The Oberon Compiler

                Niklaus Wirth is widely known as the creator of several programming languages. What is less well known is that he also personally wrote several compilers, including the first single-pass compiler for Modula-2 that later evolved into the initial compiler for Oberon. These compilers distinguished themselves by their particularly simple design – they didn’t aspire to tickle the last possible bit of achievable performance out of a piece of code, but aimed to provide adequate code quality at a price-point of reasonable compilation speed and compiler size. At the time, this was in stark contrast to almost all other research in compilers, which generally had been characterized by an enormous and ever-increasing complexity of optimizations to the detriment of compilation speed and overall compiler size.

                In order to find the optimal cost/benefit ratio, Wirth used a highly intuitive metric, the origin of which is unknown to me but that may very well be Wirth’s own invention. He used the compiler’s self-compilation speed as a measure of the compiler’s quality. Considering that Wirth’s compilers were written in the languages they compiled, and that compilers are substantial and non-trivial pieces of software in their own right, this introduced a highly practical benchmark that directly contested a compiler’s complexity against its performance. Under the self-compilation speed benchmark, only those optimizations were allowed to be incorporated into a compiler that accelerated it by so much that the intrinsic cost of the new code addition was fully compensated.

                And true to his quest for simplicity, Wirth continuously kept improving his compilers according to this metric, even if this meant throwing away a perfectly workable, albeit more complex solution. I still vividly remember the day that Wirth decided to replace the elegant data structure used in the compiler’s symbol table handler by a mundane linear list. In the original compiler, the objects in the symbol table had been sorted in a tree data structure (in identifier lexical order) for fast access, with a separate linear list representing their declaration order. One day Wirth decided that there really weren’t enough objects in a typical scope to make the sorted tree cost-effective. All of us Ph.D. students were horrified: it had taken time to implement the sorted tree, the solution was elegant, and it worked well – so why would one want to throw it away and replace it by something simpler, and even worse, something as prosaic as a linear list? But of course, Wirth was right, and the simplified compiler was both smaller and faster than its predecessor.

                All that said, if you want to see well commented code… it’s a lost art.

                Look at assembler language code from the glory days of assembler programming.

                There it’s plain obvious that, say, an “add” was happening… the comments were like a blow-by-blow essay about the why.

                1. 2

                  Because the hard rule under fast code is “no code is faster than no code”…. so odds on if it’s more complicated… it’s slower.

                  You really need to prove it is worth the complexity.

                  Yes, that is the premise of my comment. :-) In my experience, though, it’s the reverse: faster code tends to be more complicated. See my other comment where I compared the “simple” version of memchr with the fast version. The size difference between them is approximately three orders of magnitude. But oh boy, it is worth it. (Note that the graphs are in log scale.)

                  Now, the benefit here is that using the optimized implementation has roughly the same complexity as implementing the naive version. So the complexity is almost entirely contained. But it’s paid somewhere.

                  1. 1

                    Fortunately for sanity that is a rare case… and is darn close to assembler code even in Rust, since it is mostly asm primitives.

                    In fact, I almost wonder whether adding a Rust compiler into the mix is worth it? Or would a pure asm implementation (probably copy-pasted out of the Intel reference docs) do better in simplicity / speed?

                    1. 2

                      I find the Rust code easier to read than asm personally, but that’s my own personal bias. glibc’s memchr is written in asm, and its performance is roughly comparable (see the red and purple bars in the linked graph). It also simplifies compilation: all I need is a Rust compiler, instead of also needing an assembler.

                      But in my work, these cases actually aren’t rare:

                      I’m not particularly proud of the last three. I think they could all probably be simpler. But they have a crap ton of unit tests. (N.B. I am not claiming that these things have irreducible complexity. I’m just saying that I haven’t been smart enough to make them simpler, despite trying.)

                      1. 2

                        You’re clearly on that peculiar and difficult (and hopefully not thankless) end of the spectrum… writing the core libraries for the rest of us mortals to piggyback on…

                        I do note a couple of things browsing about that code…

                        • SIMD/AVX primitives were devised by Satan. On the Rusty Russell scale of API Goodness they are (if you’re lucky) a low 3 out of 10, where the documentation for each one has a large bucket of fine fine print. Yup, that stuff is gnarly.
                        • Some, like the ripgrep stuff, actually on the quiet have a huge number of function points… (before context, after context, ….) that just add and add up. The only guy I have seen who has a really good handle on handling combinatorial requirements explosions is Andrei Alexandrescu. I really think he is on to something with his Design by Introspection. I’d be curious to see his approach used in Rust and see how well Rust copes with it.
                        • Some of the algorithmic stuff like the NFA/DFA stuff is just plain hairy at an algorithmic level.

                        In terms of a relatively low comment-to-code ratio doing very hairy stuff, Andrei Alexandrescu’s checkedint library is an interesting example… https://github.com/dlang/phobos/blob/master/std/experimental/checkedint.d

                        The core idea is the compiler knows everything about the types and variables you’re using…. You should be able to ask it at compile time (in simple and understandable ways) and handle variation at compile time.

                        1. 1

                          Thanks for the links. I’ll check out Andrei’s talk and see if I can steal some good ideas. :-)

                2. 1

                  This problem regularly happened in high-assurance when writing low-level code. Their trick was to write an English description of what it did, a formal spec that made it more precise, and an equivalent implementation. In parts of Karger’s VMM, they were even using assembly. They closely scrutinized and tested it against the higher-level spec, though.

                  So, translating that to mainstream development, you might have two versions of the code: one that makes the algorithm very clear with pre-/post-conditions and invariants, and the highly-optimized version. Property-based testing and fuzzing can then establish equivalence while catching errors in either. That should fit the string processing libraries I’ve seen you post.

                  Now, you do have an extra copy to update. Updating the simpler code should only take a tiny fraction of time of optimization, though.
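                  A hand-rolled sketch of that idea in C (a real property-based testing framework would also shrink failing inputs for you; the naive function here is invented for illustration): check the simple “executable spec” against the optimized libc memchr on random inputs.

                  ```c
                  #include <assert.h>
                  #include <stdlib.h>
                  #include <string.h>

                  /* Naive version: doubles as the readable spec. */
                  static const void *memchr_naive(const void *s, int c, size_t n) {
                      const unsigned char *p = s;
                      for (size_t i = 0; i < n; i++)
                          if (p[i] == (unsigned char)c)
                              return p + i;
                      return NULL;
                  }

                  /* Randomized equivalence check against the optimized implementation. */
                  static void check_equivalence(void) {
                      unsigned char buf[256];
                      srand(42);                     /* deterministic for reproducibility */
                      for (int trial = 0; trial < 1000; trial++) {
                          size_t n = (size_t)(rand() % (int)sizeof buf);
                          for (size_t i = 0; i < n; i++)
                              buf[i] = (unsigned char)(rand() % 4);  /* few symbols: more hits */
                          int needle = rand() % 4;
                          assert(memchr_naive(buf, needle, n) == memchr(buf, needle, n));
                      }
                  }

                  int main(void) {
                      check_equivalence();
                      return 0;
                  }
                  ```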

                  1. 2

                    Now, you do have an extra copy to update. Updating the simpler code should only take a tiny fraction of time of optimization, though.

                    Hah, yes. My favorite example of this is memchr. In Rust, it’s:

                    haystack.iter().position(|&b| b == needle)

                    But the SSE2 accelerated version is this big pile of goop — At least it’s written in a high level language though. glibc implements this in Assembly.

                    And then double it for the AVX version. This could be reduced somewhat by using a platform independent abstraction over SIMD vectors, since the SIMD operations used here are fairly simple, but you still end up with at least one copy of goop.
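                    For a flavour of the goop without committing to one instruction set — and this is a hedged sketch, not the real crate’s code — here is a portable word-at-a-time (SWAR) version of the same broadcast-compare-scan dance; the SSE2/AVX versions do the same thing with wider registers:

                    ```c
                    #include <assert.h>
                    #include <stddef.h>
                    #include <stdint.h>
                    #include <string.h>

                    static const void *memchr_swar(const void *s, int c, size_t n) {
                        const unsigned char *p = s;
                        unsigned char needle = (unsigned char)c;

                        /* Scan byte-wise until the pointer is word-aligned. */
                        while (n > 0 && ((uintptr_t)p % sizeof(uint64_t)) != 0) {
                            if (*p == needle) return p;
                            p++; n--;
                        }

                        /* Broadcast the needle into every byte of a word. */
                        uint64_t pattern = needle * UINT64_C(0x0101010101010101);

                        /* Compare eight bytes at a time: XOR makes matching bytes zero,
                         * and the classic "has a zero byte" trick detects them. */
                        while (n >= sizeof(uint64_t)) {
                            uint64_t word;
                            memcpy(&word, p, sizeof word);
                            uint64_t x = word ^ pattern;
                            if ((x - UINT64_C(0x0101010101010101)) & ~x & UINT64_C(0x8080808080808080))
                                break;           /* match in this word: finish byte-wise */
                            p += sizeof(uint64_t);
                            n -= sizeof(uint64_t);
                        }

                        /* Tail (and the word that contained a match). */
                        while (n > 0) {
                            if (*p == needle) return p;
                            p++; n--;
                        }
                        return NULL;
                    }

                    int main(void) {
                        const char *h = "hello world";
                        assert(memchr_swar(h, 'w', 11) == h + 6);
                        assert(memchr_swar(h, 'z', 11) == NULL);
                        return 0;
                    }
                    ```

                    The naive one-liner stays around as the readable spec; the wide version only buys speed.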

                    But the simple/naive version is just one line of code, eminently readable, pretty fast on its own and portable across every supported architecture.

                    The things we sacrifice for performance…

                    1. 1

                      Yeah, those are good examples. The sequential version can serve as a reference for the parallel versions, since they’re supposed to have equivalent output (albeit maybe not in the same order). If you wanted, you might also use a parallel language or pseudocode formalism for such things.

                      1. 1

                        The outputs should be equivalent, including order.

                        But yeah, if I had a deeper intrinsic interest in formalism, I might pursue that, but I also have a strong interest in shipping and a TODO list a few years long already. :-) Thorough unit testing is good enough for now.

                        1. 1

                          Yeah, that should be enough for your situation. I’d push the other stuff harder on the commercial side.

                          Far as ordering, I worded it that way because of difference between parallelism and concurrency. Some languages/notations with strengths in one have weaknesses in other, such as ordering. Just trying to be accurate.

                3. 1

                  So, if I were a true scotsman who did good design, it’d be sweet, right?

                  Of course, all the code other people write and I have to edit, and I wish they’d documented better, is that because they’re not “doing good design” too? Because I find the only time I can jump quickly in to other people’s code is when it’s both documented and well-designed.

                  And if the explanation of the what/how/why of a thing is longer than can be compressed into a singleCamelCasedName of sensible length?

                  You shouldn’t have to pollute whatever namespace in which you’re focused by breaking things up into functions that are never used elsewhere just to make the steps of a process clearer, when you could just throw in some documentation / explanation instead.

                  1. 2

                    I’m not saying good documentation isn’t a nice-to-have… I’m saying that given the (admittedly unpleasant) choice of good design xor good documentation, I’ll choose good design every time.

                    Why? The compiler and the CPU keep good design honest; nothing forces the commenter to be honest.

                    Yes, it’s partly a “no true scotsman” argument as I have come to define “Good Design” as “How little I need to read and understand to make a beneficial change”.

                    In the realm of multi-megaline code bases this is a fairly compelling criterion, as I have met no one who can keep such a monster in their head.

                    This sums up my relative priorities… http://sweng.the-davies.net/Home/rustys-api-design-manifesto

                4. 3

                  I think the counterpoint is that if you can ignore a comment, then why have the comment to begin with? When a comment is being ignored, it means that the information was not needed or that the same information can be had from a different source, presumably by reading the code. The latter would be an example of self-documenting code.

                  1. 2

                    Code can’t say ain’t, to misquote Sol Worth.

                    Code can only say what the authors decided to do, not what they chose to not do, and not why they chose to not do what they didn’t do. Maybe there’s no real reason one path was chosen above another path. Maybe the other path contains dragons. Maybe the other path used to contain dragons, but those dragons have been slain, or have moved on, or are worth fighting now because this choice blocks some other progress.

                    You can advocate for code documentation, but documentation is just comments in a different file, as far as the idea that comments can go out of date is concerned.

                5. 4

                  Frankly, I haven’t seen heavily-commented code since my school days. Been in the industry for… well, let’s say 10+ years. As part of my daily work, I write and read code that is extremely sparsely commented. The main reason I put in or appreciate comments is when there is a gotcha, or an unusual, surprising or unexpected decision made by the author. When the obvious approach is not the one taken. e.g. # Even though it's used elsewhere in this class, we can't use .forEach() here because fooMethod() is actually asynchronous

                  Otherwise, most of the comments you find written by students new to programming can be done away with (their purpose accomplished by):

                  1. choosing communicative identifiers for variables, method names, class names, etc.
                  2. keeping things (classes, methods, etc.) small, each having minimal responsibility
                  3. having a relatively comprehensive test suite

                  Perhaps you could offer some examples of even moderately-commented code that you think exemplifies comments that can’t or shouldn’t be dispensed with.

                  1. 5

                    Perhaps you could offer some examples of even moderately-commented code that you think exemplifies comments that can’t or shouldn’t be dispensed with.


                    • A lot of the comments here. They can’t make that code self-documenting because it’s performance-critical.
                    • Most of the stuff here: ASCII diagrams in comments to give people a better intuition for the data structures.
                    • In Cleanroom Engineering you describe the hierarchical invariants of your code as functional assertions, most often as comments. You can’t always use code because you’re describing English specs.
                    • Noting something that could be an issue later but isn’t now, like is this a bottleneck? You could make that a document but it’s a lot harder to accidentally stumble into documentation than a code comment.

                    And some contrived examples:

                    • # While we make the call to TimeFarbler.time_farble() here, the side effects happen in the time_farble -> foo -> bar -> baz chain
                    • # This path is triggered by the FooObserver and ends up touching the A, B, and C modules.
                    • # If you're looking for the Blat functionality, you're in the wrong place. Go to file XYZ instead.
                    • # We do NOT need to call CheckThingy here. It's not a bug.
                    1. 4

                      My absolute favourite code comment ever comes from DD_belatedPNG.js, a tiny JS library from way way way back in the bad old days, which worked around some (but not all) of IE6’s bugs related to the use of PNG images in the background-image CSS property.

                      PLEASE READ:
                      Absolutely everything in this script is SILLY.  I know this.  IE's rendering of certain pixels doesn't make sense, so neither does this code!
                      1. 2

                        With regard to your examples, can you truly not imagine ways to express that in code? E.g. the fact that someone even wants to call CheckThingy in a context where a call to it is illegal hints to me that maybe the missing dependency of CheckThingy needs to be made explicit, making it impossible to call CheckThingy without it.

                        In other words, if CheckThingy is not ready to be called, don’t give me a reference to CheckThingy and all its arguments! If anything, you can signal to me that CheckThingy is now ready by providing me with those values!
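                        One hedged way to sketch that in C (all names invented): make the setup path the only producer of a token that CheckThingy demands, so an out-of-order call has nothing to pass. C only enforces this weakly unless the token type is made opaque in a separate translation unit, but even here the dependency is explicit in the signature.

                        ```c
                        #include <assert.h>

                        /* Readiness encoded as a value that only setup can produce. */
                        typedef struct { int handle; } ThingyReady;

                        static ThingyReady thingy_setup(void) {
                            ThingyReady t = { 42 };   /* stand-in for real initialization */
                            return t;
                        }

                        /* The signature itself demands proof of setup. */
                        static int check_thingy(ThingyReady token) {
                            return token.handle == 42;
                        }

                        int main(void) {
                            ThingyReady t = thingy_setup();  /* no setup, no token, no call */
                            assert(check_thingy(t));
                            return 0;
                        }
                        ```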

                        1. 1

                          I’m thinking it’s not illegal, just that we’re tired of people trying to add it in pull requests because they think they need it. We could try to express these all in code, but I think they’d all be much less clear.

                      2. 2
                        1. 2

                          Likewise Webkit: https://github.com/WebKit/webkit/blob/master/Source/WebCore/css/CSSToStyleMap.cpp

                          WebKit has an extensive specification and design docs outside the code itself. For most of what you would use comments for, they just do it with other things.

                          1. 1

                            OpenSSL has few comments: https://github.com/openssl/openssl/blob/master/ssl/ssl_cert.c

                            I’m not sure OpenSSL is a great example (e.g. the code involved in CVE-2014-0160 was reviewed/accepted and existed for 2 years before it was ‘officially’ discovered). They also use 1-character variable names pretty heavily, which I wouldn’t consider ‘self-documenting’ code to use much. The function names are descriptive though, so I’ll give them that! Maybe this is just a weak example of ‘poorly documented code’?

                          2. 3

                            I’m not sure this is the best example but it’s one I’ve been debating with myself lately: Edward Kmett’s recursion-schemes library. There are very few comments (even doc comments) in the source. And I think the source is clearly written (to the extent that I can judge this as an advanced beginner, junior intermediate Haskell programmer). But the clarity depends on a familiarity with the recursion schemes literature and a working knowledge of Haskell. So maybe the “comments” are externalized as papers. I wouldn’t be able to use the library without that context. But, once I have that context, the code seems like the clearest expression of the concepts. Comments explaining what the code does would be less expressive than the code or would start to resemble one of the papers.

                            I’m not sure if this satisfies as “real world” - it is a public codebase and programmers do use this library to do things. Mostly I’m skeptical that there can be any consensus about what is and isn’t self-documenting. In my career (which isn’t so long yet), I’ve mostly worked on maintaining and extending other people’s code. Much of it wasn’t very clearly written. But even when there are comments, I’ve found that reading the code is the most straightforward way to understand what’s going on. There are occasional exceptions, where there is an essential context that the code doesn’t express. And it’s very frustrating when this isn’t noted in some way. But the self-documenting argument always seemed to me to be a preference for/against a certain style of reading code more than a quality of the code itself.

                            1. 3

                              I think “self-documenting” is a dead-end goal at best: the computer is under no illusions as to what the code does, and relying on comments to get an understanding of what the code does (but without reading it) seems to cause more bugs, not fewer, in my experience.

                              What is most important is that the code is clear to you: That’s when you know it’s correct because it is correct. And if it is correct, then the next guy isn’t going to have a problem with it because he or she wants the code to do something different anyway.

                              To that end, I find the most pleasurable codebases to be the ones where the author has carefully considered the ways I might wish it different, and has made it as easy as possible to (1) find out where those places are, (2) made the information I want to act on available in those places, and (3) verify that I haven’t bollocked anything else up.

                              One thing that always stands out for me is qmail, and really everything djb writes is just so well-thought-out that I’ve never had a problem with any of these things (and I did something like 600m emails a month something like 15-20 years ago, so there was never an option of not having to extend my mail server). I’d also recommend checking Arthur out, who writes code famously incomprehensible, but when you start thinking [a certain way] you find it an absolute treat reading code like this.

                              1. 3

                                I do have such codebases, but not open source… but I think it doesn’t matter, because you’d say “but that’s small!” and be right, and it’s my point.

                                Code has a sharp size cliff, at a small size. Well-written code that fits on one screen is readable without comments, equally well written code in the same language, in the same style, but just ten times bigger… isn’t.

                                I’ve seen ruby/rails models and controllers that fit in 30 lines and are clearly, strongly walled off from the rest of the universe. Those are clear. I’ve seen other code written in other DSLs with the same property.

                                I’ve seen perl scripts that are a few tens of lines, completely free of comments, that are clear and work well.

                                Someone’s going to jump on me and say “but what is the size cliff and code tends to grow and blah”. My answer is that if you have

                                • the wisdom to recognise the size cliff in a growing codebase
                                • and the wisdom to recognise which tasks will fit on one screen if written in that style

                                then you can write comment-free code for those tasks and get those things done quickly and maintainably. If you can’t, or you don’t trust your team to remain able to next year, then you can’t.

                                Large code is different from small code in many ways. The ability to grok it without helpful comments is one of the differences.

                                1. 1

                                  Code should always have comments that don’t merely restate:

                                  1. The programming language semantics
                                  2. The English semantics

                                  but that describe how something works.

                                  For example, getName() is self-explanatory. It has English and programming language semantics. But it hides what really happens inside; there we will see things like db.get('...') and so on, which also have English semantics.

                                  But getNameBasedOnThing(...) is another story. It describes what something does, and the code inside does describe how, but as many hows. Your comment should string the hows together into one coherent idea and clearly explain the relationships between these hows.
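                                  A hedged illustration, with all names invented, of a comment stringing the hows into one idea:

                                  ```c
                                  #include <assert.h>
                                  #include <string.h>

                                  struct Account {
                                      char nickname[32];
                                      char legal_name[32];
                                  };

                                  /* Resolve a display name. We prefer the nickname because greetings
                                   * should be friendly, and we fall back to the legal name only because
                                   * the nickname field was added later, so older accounts never set one.
                                   * The comment ties the two hows below into that single idea. */
                                  static const char *display_name(const struct Account *a) {
                                      if (a->nickname[0] != '\0')
                                          return a->nickname;    /* how #1: newer accounts */
                                      return a->legal_name;      /* how #2: legacy fallback */
                                  }

                                  int main(void) {
                                      struct Account newer = { "Ana", "Ana Maria Silva" };
                                      struct Account older = { "", "Robert Paulson" };
                                      assert(strcmp(display_name(&newer), "Ana") == 0);
                                      assert(strcmp(display_name(&older), "Robert Paulson") == 0);
                                      return 0;
                                  }
                                  ```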