1. 71
  1.  

    1. 20

      One of my colleagues is following Clean Code. He’s trying to hide everything in classes and methods (even simple things like loops). It’s almost impossible to read his code because you constantly have to jump to other files and methods.

      I never read Clean Code, but if this book encourages these practices I don’t have any desire to look at it.

      1. 18

        I really enjoyed reading this. There are good summaries at the end of each section that both authors agree represent their viewpoints.

        Of course, this transcript resulted in me agreeing with Ousterhout entirely, and being puzzled by Martin’s takes on the points where they disagree. I’m curious to hear if that’s because I was already primed by anti-Clean Code rhetoric in advance. Do fans of Clean Code read this and think it’s fair and accurate?

        1. 12

          The main feeling I have is that Uncle Bob is more of an evangelist than a technologist. He advocated extremely strongly for TDD, for SOLID, for short functions, for self-documenting code… And back when I hadn’t tried these things at all, his evangelism was part of what convinced me to make the attempt, and I observed my code quality improve significantly as a result. It’s certainly possible to go too far, and I’ve done that, too… But I don’t know if I’d have as much of a sense for the limitations of the approach if I hadn’t attempted the dogma first.

          1. 9

            I, too, find Uncle Bob’s reasoning puzzling. The two hardest problems in computer science are cache invalidation and naming things, and Uncle Bob wants to create yet more names?

            Uncle Bob’s arguments against comments are also puzzling. One place where comments are gold is for workarounds for bugs in code you don’t control, or for providing a reference to where an algorithm was described (I notice that neither of them mentioned the Knuth article describing the primes generator in a comment).

            1. 5

              The Reddit thread has some commenters who side with Martin more than with Ousterhout. Others also described Ousterhout in different shades of “not cool”, but that may be them siding with Martin more.

              1. 9

                Redditors tend to like Bob Martin’s personality or pseudo-“scientific method” approach, and so I often see TDD and Martin defended with cult-like ferociousness.

                1. 1

                  Lobsters tend to like the little mermaid’s personality or pseudo-“fish” approach, and so I often see Tail Driven Development and Ariel defended with cult-like ferociousness.

              2. 1

                One problem with this critique of Clean Code is that Uncle Bob presents a rule of thumb, which Ousterhout goes on to interpret as a law, and critiques it as such.

              3. 11

                Great read. I generally found myself on Ousterhout’s side, but I was struck by one disagreement I had with both authors:

                UB: The first sentence is redundant with the name isMultipleOfNthPrimeFactor and so could be deleted. The warning of the side effect is useful.

                JOHN: I agree that the first sentence is largely redundant with the name, and I debated with myself about whether to keep it.

                It’s interesting to see the assumption here that redundancy is inherently bad. Redundancy in natural language is important, and often helps reinforce and clarify meaning. Particularly for important information which is going to be needed frequently (which I would argue covers method interfaces), don’t hesitate to convey it multiple times!

                1. 7

                  I haven’t read it all yet but the section on TDD stood out to me.

                  • Ousterhout mischaracterizes TDD in his book and hasn’t actually used it, yet criticizes it

                  • UB says it’s the only way to go

                  I think a bunch of other positions are possible.

                  Ousterhout doesn’t see how TDD can encourage a decoupled design; I strongly believe that it does. The idea that you only mock existing interfaces is common, but I disagree: you design the API your code needs, rather than mocking an existing API. That’s where the decoupling comes from. Ousterhout seems to believe TDD leads to worse design; I don’t agree.
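
                  To illustrate what I mean, here’s a rough sketch (made-up names, not anyone’s real code, just the shape of the idea): the test defines the collaborator interface my code wishes existed, and a trivial fake stands in for it, so the production code never couples to a concrete implementation.

                      // Hypothetical example: the interface is designed by the code that needs it,
                      // not taken from an existing API.
                      interface ReceiptSender {
                          void send(String customerEmail, String body);
                      }

                      class CheckoutService {
                          private final ReceiptSender sender;

                          CheckoutService(ReceiptSender sender) { this.sender = sender; }

                          void complete(String customerEmail, int totalCents) {
                              sender.send(customerEmail, "Total: " + totalCents + " cents");
                          }
                      }

                      // The test drives the design with a hand-rolled fake; the real SMTP/HTTP
                      // sender is written later, against the interface the test shaped.
                      class FakeReceiptSender implements ReceiptSender {
                          String lastBody;

                          public void send(String customerEmail, String body) { lastBody = body; }
                      }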

                  On the other hand, I agree with Ousterhout that writing tests just after is fine, and that writing unit tests before is unnecessary to get most of the benefits of testing. I also agree that it often works well to go in “chunks” rather than taking the smallest step possible.

                  I practice pure TDD sometimes, and I often take small steps, but I am happy to write the test just after or just before, whatever I feel like. I’m also happy to sometimes take a larger step if I feel confident.

                  I love TDD as a teaching tool. When you practice TDD you learn how to take small steps, which is very valuable. You learn how to really think about your API before you build it, since you have to write a test. You learn how to design for testability, which can mean more decoupled code. You learn to refactor early and often. So TDD is a good way to build up a lot of skills and practices that are useful to have in your arsenal. Many developers don’t know how to take small steps, how to think about API design by writing example code, or how to think about coupling and testability, and TDD teaches that.

                  But after doing TDD for a long time in many contexts, I often have a pretty good idea by now how to write decoupled code without needing TDD to spell it out for me. I know when in certain areas I feel confident enough in my own knowledge (and the type checker) to take a larger leap, just as I know how to switch gears and do smaller iterations when that makes sense. And I’m pretty sure a developer can learn these things without practicing TDD (Ousterhout did); I just think that practicing TDD, especially in a pair or group context (in a safe space like a dojo), is a very effective way to learn to get better at programming.

                  1. 4

                    Reading through both implementations, I have not been able to figure out / convince myself that this loop

                    for (int i = 1; i <= lastMultiple; i++) {
                        // Advance the tracked multiple of primes[i] (in steps of 2*primes[i],
                        // so it stays an odd multiple) until it is at least the candidate.
                        while (multiples[i] < candidate) {
                            multiples[i] += 2*primes[i];
                        }
                        // If the candidate equals that multiple, it is divisible by primes[i]:
                        // not prime, so skip ahead to the next candidate.
                        if (multiples[i] == candidate) {
                            continue candidates;
                        }
                    }
                    

                    can filter out every non-prime number: where is the mathematical proof that every odd non-prime number less than the square of a prime will be a multiple plus 2 times one of the primes we have already encountered?

                    I have always found that why a piece of code works is among the most difficult things to figure out. I went and looked at the original Knuth paper, which has the following explanation:

                    The remaining task is straightforward, given the data structures already prepared. Let us recapitulate the current situation: The goal is to test whether or not j is divisible by p_n, without actually performing a division. We know that j is odd, and that mult[n] is an odd multiple of p_n such that mult[n] < j + 2p_n. If mult[n] < j, we can increase mult[n] by 2p_n and the same conditions will hold. On the other hand if mult[n] ≥ j, the conditions imply that j is divisible by p_n if and only if j = mult[n].

                    This explanation still feels hand-wavy to me: where is the mathematical proof that makes this obvious?

                    The biggest win for me from reading the Ousterhout implementation was figuring out what part of the code I did not understand. I still wish there was a link to a mathematical proof that could explain why that optimization is reasonable.

                    1. 3

                      where is the mathematical proof that every odd non-prime number less than the square of a prime will be a multiple plus 2 times one of the primes we have already encountered?

                      Assume m is odd, composite, and less than p² (p is prime). Since m is composite, it has a least prime factor q. We know that

                      • q is less than p because m is less than p²;
                      • q is odd because m is odd;
                      • m is an odd multiple of q because m is odd, i.e. m = kq with k odd.

                      Also, m ≥ q², since q is the least prime factor of the composite m (m = kq with k ≥ q). The algorithm starts tracking multiples of q at q² and steps through the odd multiples of q in increments of 2q, so the tracked multiple will land exactly on m = kq, and m will be flagged as composite in the inner loop. (For example, with p = 7 and m = 45 = 3 · 15: the tracked multiples of q = 3 run 9, 15, 21, 27, 33, 39, 45 and hit m.)

                      1. 1

                        Isn’t this the standard Sieve of Eratosthenes as outlined here https://en.wikipedia.org/wiki/Sieve_of_Eratosthenes ?

                      2. 4

                        I get the feeling that Uncle Bob developed TDD in the environment he found himself in most often: a legacy system that needs to be modified, where the smallest amount of code is preferred over large amounts of code. For a green-field program, I think TDD would be a miserable failure, much like Ron Jeffries’ sudoku solver.

                        1. 9

                          Just FYI, Bob did not invent TDD. Your thought still stands as a possibility, but I wanted to point that out for other readers.

                          1. 3

                            TDD is test-driven design. If you already have a large amount of badly written code, you can’t really design it. I’m not saying that testing is bad or that no design happens at all, just that in this context it is more of a refactoring tool than a design tool.

                            1. 6

                              As far as I know TDD means Test Driven Development, or at least that’s how I’ve always seen it. Was UB referring to Design for TDD?

                              1. 2

                                Sorry, I stand corrected. Somehow I believed all my life that the last D stands for design.

                                1. 4

                                  There are certainly some hardcore TDD fans who insist that TDD is a design methodology! But that’s not what it was initially coined as.

                                  1. 2

                                    Yes there is a school of thought that does preach the D as “design”. The idea is, you never add new code unless you’re looking at a red test, and thereby you guarantee both that your tests really do fail when they should, and that your code truly is unit testable.

                                    I’m not really an advocate except in maybe rare circumstances, but that’s the idea.

                                    So the original D meant “Development”, and then another camp took it further and advocated for it to mean “Design”.

                            2. 3

                              I’m curious what you mean? My experience is that TDD is much better in a greenfield project, because you’re not beholden to existing interfaces. So you can use it to experiment with what feels right a lot more easily.

                              1. 2

                                In my understanding of TDD, one writes a test first, then writes only enough code to satisfy that one test. Then you write the next test, enough code to pass that test, and repeat.

                                Now, with that out of the way, a recent project I wrote was a 6809 assembler. If you were to approach the same problem, what would be your very first test? How would you approach such a project? I know me, and having to write tests before writing code would drive me insane, especially in a green field project.

                                1. 7

                                  I wrote a Uxn assembler recently, and while I don’t practice TDD at all in my day-to-day work, in this case it felt natural to take a sample program and its expected assembled output, add them as a test, and build just enough until that test passed. Then I added a second, more complex example and did the same, and so on. I ended up with 5 tests covering quite a bit of the implementation. At the start I just had the bare minimum in terms of operations and data flow. By the fifth test I had an implementation that has done well so far (it’s not perfect, but that was an explicit tradeoff; once I find limitations I’ll fix them).

                                      #[test]
                                      fn test_basic_assemble() {
                                          let src = "|100 #01 #02 ADD BRK".to_string();
                                          let program = assemble(src).unwrap();
                                  
                                          assert_eq!(program.symbol_table, BTreeMap::new());
                                          assert_eq!(
                                              program.rom,
                                              vec![0x80, 0x01, 0x80, 0x02, 0x18, 0x00],
                                          )
                                      }
                                  
                                  1. 3

                                    I’ve not done this with an assembler, but I’ve tried to do this with projects with a similar level of complexity, including a Java library for generating code at runtime. This is probably a skill issue, but I always end up with a lot of little commits, then I hit some big design issue I didn’t anticipate and there’s a “rewrite everything” commit that ends up as a 1000-line diff.

                                    I still aim to do TDD where I can, but it’s like the old 2004 saying about CSS: “spend 50 minutes, then give up and use tables.”

                                  2. 4

                                    First, you are totally correct that “true” TDD proponents say that you have to drive every single change with a failing test. Let me say that I don’t subscribe to that, so that might end the discussion right there.

                                    But, I still believe in the value of using tests to drive development. For example, in your assembler, the first test I would write is an end to end test, probably focusing on a single instruction.

                                    To get that to pass, you’ll solve so many problems that you might have spent a bunch of time going back and forth on. But writing the test gets you to make a choice. It drives the development.

                                    From there, more specific components arise, and you can test those independently as you see fit. But, an assembler is an interesting example to pick, because it’s so incredibly easy to test.
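
                                    To make that concrete, here’s a rough sketch of what I mean, in Java with JUnit 5. “Assembler.assemble” is a name I’m making up here; the whole point is that writing the test forces you to choose that signature.

                                        import static org.junit.jupiter.api.Assertions.assertArrayEquals;
                                        import org.junit.jupiter.api.Test;

                                        class AssemblerTest {
                                            // Hypothetical first end-to-end test: one instruction, source text in, bytes out.
                                            @Test
                                            void assemblesSingleRtsInstruction() {
                                                byte[] rom = Assembler.assemble("RTS");
                                                assertArrayEquals(new byte[] { 0x39 }, rom); // 6809 RTS opcode
                                            }
                                        }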

                                    1. 1

                                      First, you are totally correct that “true” TDD proponents say that you have to drive every single change with a failing test. Let me say that I don’t subscribe to that, so that might end the discussion right there.

                                      Then I can counter with, “then you haven’t exactly done TDD with a greenfield project, have you?”

                                      When I wrote my assembler, yes, I would write small files and work my way up, but in no way did I ever do test, fail, code, test, ad nauseam. I was hoping to get an answer from someone who does, I guess for lack of a better term, “pure TDD” even for greenfield development, because I just don’t see it working for that. My own strawman approach to this would be:

                                      #include <stdio.h>
                                      #include <string.h>
                                      
                                      int main(void)
                                      {
                                        char buffer[BUFSIZ];
                                        scanf("%s\n",buffer);
                                        if (strcmp(buffer,"RTS") == 0)
                                          putchar('\x39');
                                        else
                                          fprintf(stderr,"bad opcode\n");
                                        return 0;
                                      }
                                      

                                      That technically parses an opcode, generates output and passes the test “does it assemble an instruction?”

                                      1. 2

                                        What is the problem with the code you posted?

                                        I have done “true” TDD on a greenfield project, and it was fine. It’s just an unnecessary thing to adhere to blindly. From the test you have here, you would add more cases for more opcodes, and add their functionality in turn.

                                        Alternatively, you could write a test of a whole program involving many opcodes if you want to implement a bunch at once, or test something more substantial.

                                        1. 1

                                          It’s just that I would never start with that code. And if the “test” consists of an entire assembly program, can that really be TDD with a test for every change? What is a change? Or are TDD proponents more pragmatic about it than they let on? “TDD for thee, but I know best for me” type of argument.

                                          1. 2

                                            Yes, you could make new tests that consist of new programs which expose some gap in functionality, and then implement that functionality.

                                            A change can be whatever you want it to be, but presumably you’re changing the code for some reason. That reason can be encoded in a test. If no new behavior is being introduced, then you don’t need to add a test, because that’s a refactor. And that’s what tests are for: to allow you to change the internal design and know if you broke existing behavior or not.

                                            1. 2

                                              I guess I was beaten over the head by my manager at my previous job. He demanded not only TDD, but the smallest possible test, and the smallest code change to get that test to pass. And if I could find an existing test to cover the new feature, even better [1].

                                              [1] No way I was going to do that, what with 17,000+ tests.

                                              1. 2

                                                Yeah, that’s a whole different story. We were talking for a bit about what’s possible, but you’re asking whether you should do this.

                                                Dogmatic TDD is not necessary, and doesn’t even meet the desired goal of ensuring quality by checking all cases. There are better tests for getting higher coverage, for example property-based tests.
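
                                                As a sketch of what I mean by that (hand-rolled, no framework; the trial-division isPrime here is just a stand-in so the property has something to check): generate lots of random inputs and assert an invariant, instead of enumerating cases by hand.

                                                    import java.util.Random;

                                                    public class PrimeProductProperty {
                                                        // Stand-in implementation, only here so the property below can run.
                                                        static boolean isPrime(int n) {
                                                            if (n < 2) return false;
                                                            for (int d = 2; (long) d * d <= n; d++) {
                                                                if (n % d == 0) return false;
                                                            }
                                                            return true;
                                                        }

                                                        public static void main(String[] args) {
                                                            Random rng = new Random(42);
                                                            // Property: the product of two integers >= 2 is never prime.
                                                            for (int i = 0; i < 10_000; i++) {
                                                                int a = 2 + rng.nextInt(1000);
                                                                int b = 2 + rng.nextInt(1000);
                                                                if (isPrime(a * b)) {
                                                                    throw new AssertionError("property violated: " + a + " * " + b);
                                                                }
                                                            }
                                                            System.out.println("property held for 10,000 random products");
                                                        }
                                                    }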

                                                For me, the sweet spot is simply writing tests first when I’m struggling to make progress on something. The test gives me a concrete goal to work towards. I don’t even care about committing it afterwards.

                              2. 3

                                The first thing I took away from this is that if I’m ever in a debate with John Ousterhout, he’ll put words in my mouth, accusing me of feeling and believing things I don’t. I’ve never seen Bob Martin so diplomatic.

                                As for the goal of fearless refactoring, we can now choose languages that inherently provide an awful lot of that. Aside from particularly complex functions, I would rather see most explicit testing effort spent at the feature scope.

                                1. 6

                                  Your response interests me because I didn’t notice that while I was reading the dialog. For my edification, would you mind pointing out one or two parts that generated those feelings?

                                  1. 9

                                    Sure. For example,

                                    I disagree; this illustrates your bias against comments.

                                    Martin had just finished saying some comments are good and he then had to reinforce that he doesn’t hate them.

                                    He has to correct Ousterhout again later:

                                    Sorry to interrupt you; but I think you are overstating my position. I certainly never said that comments can never be helpful…

                                    A few paragraphs later:

                                    Given your fundamental disbelief in comments…

                                    I was also put off by several uses of the word “Unfortunately”. They often read as jabs, positioning Ousterhout’s own opinion as if it’s ground truth, like “It’s too bad you’re wrong about that.”

                                    In the same theme:

                                    I think what happened here is that you were so focused on something that isn’t actually all that important (creating the tiniest possible methods) that you dropped the ball on other issues that really are important.

                                    It just comes off as dismissive to me. However, it’s unlikely it was really that bad for the participants interacting live. Text makes a lot of things sound worse.

                                    1. 2

                                      Thanks for the insights. I appreciate it.

                                      1. 1

                                        I think it’s ok to believe somebody thinks something that they don’t believe they think. In other words, if someone proposes a set of ideas, and those ideas have a logical conclusion, but they disavow the conclusion (while holding onto their ideas), you can justifiably ignore their disavowal.

                                        1. 4

                                          That all holds up logically, but emotionally, I wouldn’t want to be on the receiving end of that vote of low confidence. What’s suitable to believe is not always appropriate to say.

                                  2. 2

                                    I read Clean Code as an undergraduate and am happy that I un-learned most of its lessons.

                                    John Ousterhout’s advice seems normal, flexible and pragmatic, whereas Martin’s reads like a set of “rules” to follow that don’t really have any benefit I can identify.

                                    1. [Comment removed by author]

                                      1. 0

                                        A really interesting contrast to both of the prime generator implementations is to ask an LLM to comment the algorithm of Knuth’s original code:

                                        Here are some of the versions of that (I ran this a couple of times against various versions of the code / with various prompts, using Claude 3.7 via GitHub Copilot):

                                        Class comment:

                                        /**
                                         * A utility class for generating prime numbers using a modified Sieve of Eratosthenes algorithm.
                                         * This implementation uses a technique of tracking the next multiple of each prime
                                         * that needs to be checked, rather than using trial division or a traditional sieve.
                                         */
                                        
                                        /**
                                         * A utility class for generating prime numbers using a modified sieve algorithm.
                                         * This implementation uses an optimized approach to find the first n prime numbers.
                                         */
                                        

                                        Method comment:

                                        /**
                                             * Generates an array containing the first n prime numbers.
                                             * 
                                             * Algorithm description:
                                             * 1. Initialize with 2 as the first prime number
                                             * 2. For each odd number, starting from 3:
                                             *    a. Check if the number equals the square of the next prime to be considered
                                             *       - If so, add to the multiples list and mark as not prime
                                             *    b. Otherwise, check if the number is divisible by any previously found prime
                                             *       - For each prime p, find the next multiple of p >= the current number
                                             *       - If the current number equals any of these multiples, it's not prime
                                             *    c. If the number passes all tests, it's prime and added to the result
                                             * 
                                             * @param n The number of prime numbers to generate
                                             * @return An array containing the first n prime numbers
                                             */
                                        
                                        /**
                                             * Generates an array containing the first n prime numbers.
                                             * 
                                             * This method implements an optimized sieve-based algorithm that works as follows:
                                             * 1. Start with 2 as the first prime number
                                             * 2. For odd numbers starting at 3, determine primality by:
                                             *    a. Checking if the number equals the square of the next prime to add
                                             *    b. Otherwise, using a list of multiples to efficiently eliminate non-primes
                                             * 3. The algorithm maintains an ArrayList of special multiples to avoid 
                                             *    redundant divisibility tests
                                             * 
                                             * The implementation is storage-efficient compared to traditional sieve methods
                                             * as it only tracks necessary multiples of previously found primes.
                                             * 
                                             * @param n The number of prime numbers to generate
                                             * @return An array containing the first n prime numbers
                                             * @throws ArrayIndexOutOfBoundsException if n <= 0
                                             */
                                        

                                        IMO, both of these comments are better than either Uncle Bob’s or John’s versions of the same thing.

                                        1. 7

                                          And a better summary of the algorithm than all of the above is something like:

                                          Implements an incremental Sieve of Eratosthenes method for finding the first n prime numbers with the following optimizations:

                                          1. Only consider odd numbers as candidate primes, since no even number greater than 2 is prime
                                          2. Instead of using an array to store all multiples, maintain a list that only contains the largest multiple of each prime checked so far. This keeps storage down to roughly the number of primes up to the square root of the current candidate.
                                          3. Start checking for multiples of a prime p at p^2, since any non-prime number below p^2 has at least one prime factor less than p, which will have been found in a previous check (a rough sketch of this appears below)
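
                                          A rough Java sketch of that summary (my own illustration of the idea, not Knuth’s, Martin’s, or Ousterhout’s actual code):

                                              import java.util.ArrayList;
                                              import java.util.List;

                                              public class IncrementalSieve {
                                                  // First n primes: odd candidates only, one tracked multiple per odd prime,
                                                  // and tracking for a prime p starts when the candidate reaches p * p.
                                                  static int[] firstPrimes(int n) {
                                                      int[] primes = new int[n];
                                                      if (n == 0) return primes;
                                                      primes[0] = 2;                               // the only even prime
                                                      List<Integer> multiples = new ArrayList<>(); // largest odd multiple seen per odd prime
                                                      int count = 1;
                                                      int nextSquare = 1;                          // index of next prime whose square we await
                                                      for (int candidate = 3; count < n; candidate += 2) {
                                                          boolean isPrime = true;
                                                          if (nextSquare < count && candidate == primes[nextSquare] * primes[nextSquare]) {
                                                              multiples.add(candidate);            // start tracking this prime at p^2
                                                              nextSquare++;
                                                              isPrime = false;
                                                          } else {
                                                              for (int i = 0; i < multiples.size(); i++) {
                                                                  int m = multiples.get(i);
                                                                  while (m < candidate) {
                                                                      m += 2 * primes[i + 1];      // step through odd multiples of primes[i + 1]
                                                                  }
                                                                  multiples.set(i, m);
                                                                  if (m == candidate) { isPrime = false; break; }
                                                              }
                                                          }
                                                          if (isPrime) primes[count++] = candidate;
                                                      }
                                                      return primes;
                                                  }
                                              }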