1. 28
  1.  

  2. 22

    Pretty sure calling on the fame of a few to validate an argument or a position is a fallacy (appeal to authority). One of the things I liked most when I coded in C#/VB.NET on the .NET Framework about 7 years ago was the debugger. I think it was pretty darn awesome. And it saved me a LOT of time.

    It’s much easier to set a breakpoint and examine the state of the system around you at that point than it is to pepper printf statements around your code. I think there’s an objective argument to be made about using the right tool for the right job.
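
    For example, a minimal sketch (hypothetical program; names and values are made up):

        /* printf: you decide at compile time what you get to see, and
         * every new question means another edit and rebuild. */
        #include <stdio.h>

        struct order { int id; double total; };

        static double apply_discount(struct order *o)
        {
            printf("debug: id=%d total=%f\n", o->id, o->total);
            return o->total * 0.9;
        }

        int main(void)
        {
            struct order o = { 42, 100.0 };
            printf("%f\n", apply_discount(&o));
            return 0;
        }

        /* A breakpoint inspects the same state, plus anything else that
         * happens to be live, without a rebuild:
         *   (gdb) break apply_discount
         *   (gdb) run
         *   (gdb) print *o
         */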

    1. 11

      Among other mistakes, I’m pretty sure Linus is not personally debugging all 15 million lines of the kernel, so it’s fairly misleading to say it works for him.

      1. 6

        In the actual article Torvalds talks about using gdb, just not supporting a “kernel debugger”.

        It seems that the author is just plucking out the little bits that sort of make their point.

        1. 2

          Yeah, I work in kernel land every day, and I’d call crash/gdb a debugger. I may not be stepping through kernel code line by line, but honestly that wouldn’t work anyway.

          Is printk a debugger?

          1. 5

            Print is a debugging technique, but I wouldn’t call it a debugger. I would say that a required feature of a debugger is that it allows inspection of program state that was not predetermined at compilation time.

            1. 1

              Sure, but with dynamic debugging I have stupid scripts watching dmesg -w that turn on/off the debug prints in certain functions I’m looking at. Which is just splitting hairs, really.
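
              For what it’s worth, here’s a rough sketch of the kind of print site those scripts toggle (a toy module, my names, assuming a kernel built with CONFIG_DYNAMIC_DEBUG):

                  /* pr_debug() sites compile in but stay silent until enabled
                   * at runtime, e.g.:
                   *   echo 'func demo_init +p' > /sys/kernel/debug/dynamic_debug/control
                   */
                  #include <linux/module.h>
                  #include <linux/printk.h>

                  static int __init demo_init(void)
                  {
                          pr_debug("demo loaded\n");      /* off by default */
                          return 0;
                  }

                  static void __exit demo_exit(void)
                  {
                  }

                  module_init(demo_init);
                  module_exit(demo_exit);
                  MODULE_LICENSE("GPL");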

      2. 3

        I absolutely agree with you. printf may have a lower barrier to entry, but it is limited. On the other hand, many implementations have horrible debugging tools, e.g. Python and Ruby MRI. There are tools that build upon the built-in capabilities, but having to look for third-party software can be a barrier to adoption.

        1. 2

          The author of Python, Guido van Rossum, has been quoted as saying that he uses print statements for 90% of his debugging.

          Part of me believes that the reason for this is that Guido finds pdb awkward too.

          1. 1

            I absolutely agree with you. printf may have a lower barrier to entry, but it is limited.

            Actually it is the other way around. It is debuggers that are limited; print/log statements are universal. There are many situations where you can’t use debuggers, but I have yet to encounter one where some variation of print statements cannot work. Maybe there are some really low level embedded system situations where print statements are not feasible, but in those cases you certainly won’t have debuggers either.

            1. 3

              Print statements do not work when you are debugging low-level stuff like object file formats, linking, and machine code generation. In fact, it’s often the case you don’t even have a standard library available. Usually the debugger is the only way to inspect a running program.

              1. 4

                Print statements do not work when you are debugging low-level stuff like object file formats, linking, and machine code generation.

                What? Print statements are the easiest way to debug those. Stepping through the code in a debugger is almost intractable, since you’re doing lots and lots of transformations that need to be correlated, and can be far away from each other.

                I do use a debugger, but generally I use the debugger as a glorified way of inserting print statements without recompiling. From a recent debugging session for the Myrddin compiler, I had a questionable node ID in the debug dump:

                 ; isn't generic: skipping freshen
                        Delay $54 -> union
                        `Some int
                        `None
                 ;;
                    Unify $54 => foo.u
                            indexes: tynil => tynil
                

                At which point I run it again in gdb, and poke around when I’m unifying this:

                (gdb) b unionunify
                Breakpoint 5 at 0x425ecb: file infer.c, line 840.
                (gdb) cond 5 u->tid == 54
                (gdb) call dump(n, stdout)
                Nexpr.69@9 (type = (foo.u -> void) [tid 51], op = Ovar, isconst = 1, did=7)
                    Nil
                    Nname.68@9(foo.nope)
                

                I could have added the calls to dump in by hand, with conditionals, but I basically use a debugger to do the same thing. Printf debugging and debugger debugging are – for me – more or less the same, but with fewer recompiles.

                (Also, since when were object formats and machine code generation low level tasks? The result is low level, but the code that generates them can be as high level as you want. It’s just something that reads input and printf()s bytes, nothing fancy.)

                1. 1

                  I agree. But when you don’t have debugging symbols or the ability to call things because, say, your call stacks are not correct yet, this doesn’t work. This is especially the case when working cross-platform.

                  Right now I’m porting Swift to z/OS and making LLVM target mainframe object file formats. It is not the first time I’ve been in this sort of environment. I use the print/dump approach when generating the files and it’s great, but it doesn’t work when you actually link them, because there’s no DWARF support yet and the specs aren’t as precise as one would hope. So I’m relegated to using stepi in dbx, when you can even run something. That’s the low-level I’m referring to.

                  1. 1

                    I agree. But when you don’t have debugging symbols or the ability to call things because, say, your call stacks are not correct yet, this doesn’t work. This is especially the case when working cross-platform.

                    Even then, I tend to use the ‘write’ system call directly for logging (it’s usually not so hard to get that working by thinking about it). For dumping data, I write the values directly and interpret a hex dump.
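
                    A minimal sketch of that style (my names; it assumes nothing beyond a working write(2)):

                        /* Log with the raw write(2) syscall only: no stdio, no
                         * formatting. Values go out as raw bytes and get decoded
                         * later with a hex dump (e.g. piping stderr through xxd). */
                        #include <unistd.h>

                        int main(void)
                        {
                            unsigned long x = 0xdeadbeefUL;   /* value under suspicion */

                            write(2, "x: ", 3);
                            write(2, &x, sizeof x);           /* raw bytes for the hex dump */
                            write(2, "\n", 1);
                            return 0;
                        }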

                    When I do step through instruction by instruction – which is sometimes useful, but very rarely – it’s usually because adding the write calls introduces spills that change the incorrect assembly.

              2. 3

                Maybe there are some really low level embedded system situations where print statements are not feasible, but in those cases you certainly won’t have debuggers either.

                To the contrary, if your embedded system has JTAG, debugging is always available, unlike character output.

              3. 1

                I dunno, pry in Ruby is pretty good. I guess it’s third-party, so maybe that’s your point, but it’s one of those ubiquitous things in the Ruby ecosystem these days… hard to imagine developing without it, honestly.

              4. 1

                Yes… Carl Sagan: “Authorities must prove their contentions like everybody else.”

              5. [Comment removed by author]

                1. [Comment removed by author]

                  1. 3

                    “Debugging” is such a vague term, it’s impossible to say whether using a debugger in every situation is appropriate or not.

                    If I’m given a core dump, I’m going to ask what the user was doing, then almost certainly load the core into gdb. On the other hand, if the bug report says, “I did this, then that, then the other thing, and got a behavior I didn’t expect,” then I’m probably not jumping straight to the debugger; I’ll start with another approach. It’s very context sensitive.
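
                    The core-dump path looks roughly like this (hypothetical program and variable names; the gdb commands themselves are standard):

                        $ gdb ./myprog core
                        (gdb) bt
                        (gdb) frame 2
                        (gdb) print *req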

                    At the end of the day, a debugger is a tool, and it requires thought and experience to know when and how to use it most effectively. I don’t blindly reach for a hammer in every situation, and I don’t blindly reach for gdb either.

                    The thing that annoys me about articles like this one is that they discount the debugger as if it’s a macho badge of honor not to use one. “Look at me, I don’t use a debugger, I just think about the problem really hard and solve the problem!”

                    I prefer to think about the problem and use the debugger to verify my assumptions.

                2. 10

                  Seems to me like the author is/was using the debugger incorrectly. What is the point of stepping through line by line watching variables? If nothing else, that seems like an incredibly tedious and inefficient way to find errors.

                  The biggest value of a debugger, IMO, is the ability to set breakpoints, catch exceptions, etc. and examine values in specific situations. It’s like adding print statements, but you don’t need to modify the code and recompile all the time.

                  Also, the links to famous programmers saying they don’t use debuggers are misleading and sometimes wrong.

                  • In the Linus Torvalds link, Linus says in the first paragraph: “I don’t like debuggers. Never have, probably never will. I use gdb all the time, but I tend to use it not as a debugger, but as a disassembler on steroids that you can program.” I don’t know why he says “not as a debugger,” because a “disassembler on steroids that you can program” could be part of the definition of a debugger.

                  • The Brian W. Kernighan, Rob Pike, and Guido van Rossum quotes are also misleading. The links say they mainly use print statements, but they don’t say they never use a debugger. And see my second paragraph: print-statement debugging is easier done inside the debugger in most cases.

                  1. 10

                    I’ve never understood the religious war against debuggers.

                    When people rant about this, I think they’re really harping on a lack of debugging skills and poor code. If the code is bad enough, you need a debugger to visualize the enormous state space it creates.

                    Additionally, certain environments lend themselves to certain types of tools. I suspect print-style debugging is currently all the rage because dynamic languages make this sort of small change very cheap to add. I don’t feel it’s morally superior to a debugger; rather, it’s far more imprecise, and wastes my time formatting output.

                    But, really, this is more Real Programmer tripe. It has a certain tone-deafness to it: “just write correct programs and then you don’t need a debugger! Linus agrees with me, are you going to disagree with him?!?”

                    1. 4

                      Think of a debugger as a code navigation tool.

                      It allows you to navigate forwards and backwards (especially with reversible debugging) along the code path to home in on the problem faster.

                      What then?

                      Usually I go back along the code path to see the earliest point at which I can decide that a value is incorrect.

                      I then add an assert.

                      I then see if my unit test suite hits that code path, if not I extend it to do so.

                      It should hit that assert and fall over.

                      Now I’m in the unit test world, it’s more about defect localization.

                      There should be only one reason for a test case to fail, and you should be able to make a damned good guess, based on which test failed, about which code was wrong.

                      Often bugs are “integration bugs”.

                      This unit is correct, that unit is correct, they are wired together wrong.

                      In which case it’s about adding precondition asserts, and it’s about adding contract tests (tests that both sides obey the contract of the interface).

                      TL;DR: Debuggers are just another code navigation tool, like grep or cscope. They have the cute property that you can tell them to trace along the code path until things are obviously wrong and then stop. You can then tell them to go back along the code path to see how you got there.
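
                      To make the assert step above concrete, a toy sketch (names and the invariant are made up):

                          /* Once the debugger shows the earliest point where a value
                           * is already wrong, pin that down with an assert so the
                           * test suite catches the regression without anyone stepping
                           * through code again. */
                          #include <assert.h>
                          #include <stdio.h>

                          static double safe_div(double num, double den)
                          {
                              assert(den != 0.0);   /* precondition found by walking back */
                              return num / den;
                          }

                          int main(void)
                          {
                              printf("%f\n", safe_div(10.0, 2.0));
                              return 0;
                          }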

                      1. 3

                        Most commonly, I’ll use gdb to figure out quickly where a C program is seg faulting. It isn’t always a quick task, but usually it is. From there, fprintf around the area gets me through most problems.

                        1. 3

                          Although the appeal to authority in this post isn’t satisfying, I think it’s a useful message. Working in environments without a debugger, particularly some functional languages, made me realize what a crutch the debugger had become for me. Having grown up with the Turbo C-style IDE debugger, I had made it the main way I interacted with my programs, like a crippled REPL.

                          For application-level problems (i.e., not problems like code generation bugs in your compiler or similar hairiness), I realized I was much better served by reasoning carefully about the code, adding assertions, forming hypotheses, and verifying them through tests.

                          My feeling now is that if you aren’t using the debugger in a targeted way to validate hypotheses you’ve formed outside it, you’re probably wasting your time. In languages at a higher level than C, you probably don’t need a debugger. (And alternate tools, like valgrind, really help in C, too.)

                          1. 2

                            A debugger isn’t a crutch if it’s a truly useful and productive tool. Certainly the Smalltalk debugger isn’t a “crippled REPL”.

                            1. 1

                              As impressive as the Smalltalk environment is, I guess I never “got it”, because the Smalltalk debugger is part of what I considered when I wrote the above. And so much clicking. (In Smalltalk, I preferred tests and interaction with the transcript. That probably reflects my inability to fully embrace the Smalltalk way of things.) It’s powerful, but one could argue the Visual Studio debugger is even more powerful, and they’re both the kind of tool that doesn’t scale, as the OP is talking about, IMHO.

                              Sorry, I’m probably missing something about it.

                              Also, I realize, re-reading some of these comments, that it might be more precise to say something like “stepping through your code line by line, inspecting arbitrary state, is usually a waste of time” (except for its pedagogical value to beginners, which is probably how people develop their debugger addiction).

                          2. 2

                            My ultimate goal when working on a difficult project is that when problems arise, as they always do, it should require almost no effort to pinpoint and fix the problem.

                            “I don’t maintain legacy software that was architected by anyone less wise than me,” he bragged.

                            1. 2

                              I never use debuggers; I never started using them and never felt the need for one. I wonder if I’m actually missing out.

                              1. 2

                                Hmm, I’d imagine even though the author and the big names dropped in this post don’t use a debugger (as in software purpose-built for reading generated debug information), they likely still do use the generated debug information in their print statements or other means of debugging (source code line numbers, stack traces, thread dumps, core dumps, etc).
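
                                For what it’s worth, even plain print debugging leans on toolchain metadata; the cheap compile-time version looks something like this (a sketch; the macro is mine, and ##__VA_ARGS__ is a GNU/Clang extension):

                                    #include <stdio.h>

                                    /* Tag every debug print with file, line, and function,
                                     * the same breadcrumbs a debugger reads from debug info. */
                                    #define TRACE(fmt, ...) \
                                        fprintf(stderr, "%s:%d %s: " fmt "\n", \
                                                __FILE__, __LINE__, __func__, ##__VA_ARGS__)

                                    int main(void)
                                    {
                                        TRACE("starting up");
                                        TRACE("answer=%d", 42);
                                        return 0;
                                    }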

                                As a side note, I’d like to know how these same people debug production software (that is, software whose debug information has likely been optimized away).

                                1. 1

                                  About production software: it’s worth noting that ptrace can be the kiss of death for production software under heavy load, so gdb is usually not so helpful there. Usually logging and unobtrusive measurement tools (like perf) are more useful.

                                  One related thing I didn’t see mentioned yet is reverse engineering: a good debugger is still essential for that purpose.

                                2. 1

                                  This article is sadly a regex match for “. is DEAD!”, where in the body of the article the author makes a fairly decent case for why . is in fact NOT dead; they/their team/company/whatever just chose not to use it.

                                  I don’t have a problem with writing up your experiences, but IMO being trolled by a largely disingenuous statement in the title is not the way to convince me of the quality of your argument.

                                  1. 1

                                    To add to this: with frameworks and interpreted languages (e.g. Python), using a debugger can be a scary and unproductive time.

                                    1. 1

                                      Then you’re really missing out. If you can’t use debuggers, using prints to output variable values IS debugging. Also, the article is a bit misleading on the Linus “I don’t use a debugger” part; he did not say that he doesn’t use a debugger at all.