  2. 47

    Some of them seem to have concluded that the superiority of print debugging is some kind of eternal natural law.

    This looks like a strawman to me. I think that everyone would prefer an interactive, powerful, time-machine debugger. I use print debugging whenever it is not worth the trouble of dealing with actually existing debuggers. I’m proficient enough in GDB, but not in PDB, so I print debug whenever I have to deal with Python.

    Saying “but debuggers could be better” is not that interesting. My reaction is “99% of everything in computing (hypertext, screen sharing, data transfer, etc.) could be better, but that is not the position from which I get to decide what to use.”

    1. 5

      This looks like a strawman to me. I think that everyone would prefer an interactive, powerful, time-machine debugger.

      I definitely know people who sneer at the mere concept of a debugger. Or even print debugging. They just want to read and understand the code directly.

      1. 15

        Beware of bugs in the above code; I have only proved it correct, not tried it.

        – Knuth

        1. 11

          The most effective debugging tool is still careful thought, coupled with judiciously placed print statements.

          – Kernighan

          1. 7

            Not to knock Kernighan, but he said that decades ago. We should have figured out something more effective while being just as easy to use and just as free by now.

            1. 7

              What has dramatically changed with debuggers over these decades? I’ve used the interactive debugger in Xcode recently: it might as well be Turbo Pascal.

              1. 7

                The problem is Unix debuggers haven’t really caught up to Turbo Pascal. My experiences with ’90s Visual C++ have been nicer than with gdb.

                1. 3

                  The author of the post is explicitly advertising reversible debugging with rr. I guess he didn’t do it that well :)

                  When you hit a crash or invalid state, you can go backwards and find what caused it.

                  As I understand it, GDB does have reversible debugging, but it’s inefficient. rr works on x86 and Linux and is efficient. It’s impressive, but so far I’m getting by with regular GDB and IntelliJ. I would like to get in the flow with rr in the future.
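
                  For reference, the basic rr workflow is short (a sketch; the program and variable names here are made up):

                  rr record ./myprog               # record one failing run
                  rr replay                        # replay it, deterministically, under gdb
                  (gdb) continue                   # run forward to the crash
                  (gdb) watch -l corrupted_field   # watchpoint on the bad value
                  (gdb) reverse-continue           # run backwards to whoever last wrote it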

                  1. 1

                    In my experience GDB’s reverse debugging is not inefficient, it just doesn’t work well. It cannot handle SSE instructions, doesn’t record the whole process tree, has issues with side effects, etc. The two things I like about it compared to rr are that you can choose to only “record” a small part of the program rather than the whole execution (it’s not actually a recording; as far as I understand it just forks the debuggee) and that it works on more architectures than just x86_64. But these two advantages do not outweigh all the disadvantages it has compared to rr, in my opinion.
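
                    For comparison, GDB’s built-in version of this is the process-record family of commands (a minimal sketch):

                    (gdb) break main
                    (gdb) run
                    (gdb) record full          # start recording execution from here
                    (gdb) continue             # run forward until things go wrong
                    (gdb) reverse-stepi        # single-step backwards
                    (gdb) reverse-continue     # or run backwards to a breakpoint/watchpoint
                    (gdb) record stop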

              2. 4

                Sometimes our thoughts lead to questions, and answering those questions leads to more thoughts and more questions. Being able to answer many questions without having to rerun the entire program each time leads to more thoughts with less effort, leading to more effective debugging.

                Debuggers are great, is what I’m saying.

                1. 3

                  Things like rr, which is a recent thing, don’t work on my AMD hardware. So for the rest of us, most debuggers give you the info in the reverse order… usually you need to work your way back to the problem, so they don’t save you from rerunning the program.

                  1. 3

                    Things like rr, which is a recent thing, don’t work on my AMD hardware.

                    Yeah, incompatibility with AMD hardware is a serious drawback. However, initial support for AMD CPUs has been merged in the past year, so things are improving on that front.

                  2. 1

                    I agree about debuggers. Kernighan and Pike’s writing on debugging (and more) has been informative to me. I use a debugger almost daily, but I think sometimes a well-placed print statement is all one needs.

              3. 2

                In a perfect (statically-analyzed) world debugging would be a last resort. I use print statements to contextualize myself far less often in a TypeScript or Haskell codebase than I do in say, a Python codebase where the input/output types of a function are not always documented, even at the library level.

                1. 1

                  Is this because they don’t understand debuggers or because they have “transcended” them?

                  1. 4

                    It’s because they find debuggers cause them to focus on the small problem of what is happening instead of the big-picture design problem that resulted in the bad behaviour, I think.

              4. 32

                In distributed systems printf debugging is your only option. “JuSt AtTaCh A dEbUgGeR” only works when you have the program running on your local development machine; frequently, reality and facts will have you diagnosing faults on a machine running on the other side of the planet. Debuggers don’t really work there.

                1. 13

                  Erlang (and thus Elixir) have some nice debuggers where you can attach to remote nodes.

                  If you’re able to recompile/redeploy with print statements to your remote machines, surely you could enable debugging on an open port (depending on the language and/or frameworks)?
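
                  In the C/C++ world, for instance, remote attach looks something like gdbserver (a sketch; the service name and port are made up):

                  # on the remote machine: attach to the running process
                  gdbserver --attach :2345 $(pidof myservice)

                  # on the local machine: connect to it
                  gdb ./myservice
                  (gdb) target remote remote-host:2345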

                  1. 8

                    The point is that in distributed systems, when something happens is more important than what happens.

                    If you step through the program manually, you add delays that the program will just crash on, which is equivalent to doing a single print in a whole run.

                    If you think that playback debugging will save you, then you’re only right in the case that the hidden state during execution is small enough to be practical to record. In data analysis the intermediate data is often many orders of magnitude larger than the input and the output, which makes playback debugging impossible on regular runs of the program.

                    The biggest advance in debugging in my career was when I started doing a s/print/log/g on the region I was working on once I found the bug, and then putting it under the debug log level.

                    1. 2

                      Distributed systems certainly introduce a lot of failure cases where a debugger won’t help you, but they don’t eliminate the cases where one could. I frequently find myself frustrated that the ability to attach a debugger remotely to a process seems to be considered an unnecessary extravagance in most environments, in light of the many cases I’ve experienced where it would have been a much faster way to solve the problem than using print statements. Obviously it requires a thoughtful security policy to be able to safely offer that as a feature, but I think it’s a worthwhile investment.

                    2. 3

                      BEAM debuggers are just dynamically added prints within a running system, mostly because there is no other way to do it, as step-debugging makes no sense in such an environment.

                      1. 3

                        Not when the machines belong to a customer and the problem only happens in production.

                        Or rather, the turnaround time and expense for getting that sort of access can be prohibitive compared to making them a custom build with extra logging.

                        Come to think of it, what’s the dividing line between printf-debugging and logging? I use the same API calls for both, and most of the printf-based debugging I do is by turning on extra layers of logging that are already in the code for this purpose.

                      2. 8

                        “JuSt AtTaCh A dEbUgGeR”

                        At the risk of going off-topic, what’s the meaning of that capitalization? It messed up the way my screen reader spoke that phrase.

                        1. 13

                          It communicates mocking sarcasm. Like you’re repeating what someone says in a goofy voice. It’s a somewhat-recent meme, I started seeing it around 2016.

                          1. 1

                            I often wonder what that “goofy voice” sounds like in other people’s heads. Use of that SpongeBob-case kinda smacks of 1997, when people used the “R” word indiscriminately and mocked neuroatypical behavior/speech patterns.

                        2. 4

                          To be fair, he literally says that:

                          There are many reasons why print debugging is still the best option for many developers. rr, Pernosco and similar tools can’t even be used at all in many contexts.

                          (And imho he doesn’t say it in a way that makes it sound like a superficial “why don’t you just…” comment at all.)

                        3. 23

                          Some of them seem to have concluded that the superiority of print debugging is some kind of eternal natural law.

                          I just don’t understand how someone could point out the flaw of being one-sided, but then turn around and be one-sided themselves. Like, isn’t it possible that maybe printf debugging shouldn’t go away, but also, we should improve debuggers?

                          1. 2

                            The author explicitly states that he doesn’t think print debugging should completely go away:

                            There are many reasons why print debugging is still the best option for many developers. rr, Pernosco and similar tools can’t even be used at all in many contexts.

                            […] invested accordingly we could make enormous strides, and not many people [emphasis mine] would feel the need to resort to print debugging.

                            It seems that his position is “we should improve debuggers such that they’re the standard tool and print debugging is relegated to edge cases”, which seems quite reasonable to me, and not very one-sided.

                            1. 1

                              The first quote is referring to the current state of the world. That doesn’t seem relevant to what the author is suggesting. It’s saying, “no, I understand printf is fine to use in a lot of circumstances today, but that shouldn’t be the case.”

                              I mean, the title is their central thesis: “print debugging should go away.” That’s about as one-sided as it gets. And the article reinforces that. Hedging to say “maybe there are edge cases where printf is the best” is also absolutely one-sided. They aren’t edge cases, and they vary from person to person and from workflow to workflow. Many of them were mentioned right in this very thread.

                              The article is also condescending:

                              The superiority of print debugging is contingent and, for most developers, it will end at some point (or it has already ended and they don’t know it.)

                              As if I use printf merely because I’m ignorant. Please. Give me a break.

                              1. 1

                                You’re right, the author was presenting their point in a one-sided manner - I was conflating “one-sided” with “incorrect”, which is on me.

                          2. 10

                            Print debugging can produce logs that I can view after the process in question is long dead and gone. Nothing wrong with richer debugging tools, but there’s no need for it to be an exclusive relationship.

                            1. 7

                              This is also true for the debugging tools he talks about (rr, pernosco). They create a recording that can be stepped through interactively.

                              1. 1

                                Print debugging also works in (pretty much) all programming languages while these discussions usually center on C(++), GDB and their ilk.

                              2. 6

                                Record-and-replay debuggers like rr (disclaimer: I initiated it and help maintain it)

                                Pernosco (disclaimer: also my baby) and other omniscient debuggers go much further.

                                Folks, everybody talks their book. It seems pretty natural that, whatever their other arguments, somebody selling debuggers is going to say that print debugging is obsolete. Like, this is pretty critical context when evaluating their claims.

                                Re: the actual article itself… good debuggers can be super handy. I loved and still love Visual Studio (at least the older versions) for its excellent debugging and analysis tooling. I still set conditional breakpoints and watches in frontend JS code. In some cases print debugging was simply out of the question, and tools like RenderMonkey gave me the power to solve problems I couldn’t have dreamed of. I support all of these uses.

                                But sometimes, you want to see what a program is doing without interrupting its flow: maybe you don’t want to pause threads, maybe you are looking for subtle timing behavior, maybe you have an issue with the tooling that explicitly breaks the debugger, maybe you’re helping a client debug things and can’t attach a debugger, or maybe you’re just lazy and it’s faster to throw in a print statement. All of these are valid reasons for using print debugging.

                                Different tools for different jobs, and anybody telling you otherwise is, like the author here, probably trying to sell you something.

                                1. 3

                                  Your point that he started a company that sells a debugger so readers should adjust accordingly is well taken, but:

                                  But sometimes, you want to see what a program is doing without interrupting its flow. (snip) All of these are valid reasons for using print debugging.

                                  The unique selling point of his debugger is precisely that it does not interrupt the flow of debugged programs! The argument here is that this is not a valid reason to use print debugging, because debuggers can and should be improved. As for the other points you raised, rr has a mode that tries to increase the frequency of subtle timing bugs, and it is often successful. So if you are looking for subtle timing bugs, you really should also try debuggers.
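
                                  (For reference, that mode is rr’s chaos scheduling; a sketch, with a made-up program name:)

                                  rr record --chaos ./myprog   # randomize scheduling to shake out timing bugs
                                  rr replay                    # any intermittent failure it catches replays deterministically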

                                2. 5

                                  If I want to debug what’s wrong with a specific piece of complicated code: debugger (if available).

                                  If I need to understand whether something is working as I want: printf.

                                  1. 5

                                    I actually agree with a lot of this article, and I say that as a heavy user of printf debugging myself. But there is one additional feature of print debugging that he doesn’t really cover, and it is part of what makes it so useful: low startup cost. The cost of adding a print statement while I’m reading the code is minimal and it doesn’t break my flow. Adding the appropriate breakpoints and stepping through code has a higher startup cost for me. I’ll probably only do it when the likely payoff is high, i.e. when I feel like I’ve zeroed in on the actual source of the bug and now need to really build up the state machine in my head. When I’m in debugging explore mode, though, the payoff is uncertain and the startup cost too high, so I just add a bunch of printfs and run the code.

                                    1. 5

                                      For better or worse, I’ve found myself moving towards print debugging in the past few years, though not entirely on purpose. For two main reasons (I think):

                                      • I used to write Lisp more often. You almost have to go out of your way not to use the debugger in Common Lisp, since on errors (incl. failed assertions) you get dumped into a debugger with interactive restarts by default. None of the other languages I regularly use do this (e.g. Python exits and prints a backtrace).
                                      • If I’ve isolated a bug using print debugging, it’s easier to share the results in a GitHub issue. I can just paste in code with stuff like print(f"Expected {foo} here, got {bar}") sprinkled in, and other people can both run it themselves and tend to understand what I’m trying to show. I don’t have an equally easy/reproducible/understandable way of sharing debugger transcripts with other people.

                                      The second feels like a solvable (but not trivially solvable) issue, though.

                                      1. 2

                                        Pernosco (the debugger the author is selling) pretty much solves the second problem. You can link to your debugging session with a transcript that is completely reproducible.

                                      2. 5

                                        I see a lot of talk about the clickbait title, but have you looked at the tools this guy is pushing? It is basically a hyperlinked printf of all state changes of all variables over the program execution, with a tree of the values that contributed to each state change. It’s like it printf’ed everything and has a nice viewer.

                                        1. 4

                                          I’m just astounded at how people try to take sides in a thing where it’s like… you have two tools! You can use debuggers sometimes and print debugging other times!

                                          The easiest-to-understand example where a debugger underperforms compared to print debugging: I have a list of data, and there’s some field in the items that is weird, but I don’t know which one.

                                          Printing out a nicely-formatted table of all the elements (with fields I care about) and then having a table to look at is going to take a hell of a lot less time than like… mousing over shit or the like.

                                          1. 1

                                            Printing out a nicely-formatted table of all the elements (with fields I care about) and then having a table to look at is going to take a hell of a lot less time than like… mousing over shit or the like.

                                            I don’t know what debugger you use but in gdb that’s just p nameofthevariable. With non-standard data layouts that would need a custom printer for print-debugging, you can do call the_pretty_printer(nameofthevariable).

                                            1. 1

                                              right, I could write that into the debugger N times (cuz in the process I’m calling this code path N times), or I can just write the code once and run some overall script.

                                              Full disclosure: I’m usually the “teach people the value of pdb and friends” person at work, so I value rich debugging a lot. It’s just that I also hit a lot of cases where dumping to stdout (and writing the debug statements once into the code instead of over and over at the debugger prompt; yes, I know about debugger history) is going to be easier. Just gotta make the call on the ground.

                                              (recent case for me: debugging some business logic, where one function reads like 4 different properties on an object. I had a list of them, so I spent a minute building out a clean table printer for those, so I could just quickly get a text file with a bunch of use cases and results. This worked well cuz I knew the properties that mattered, knew that I would want to look at a lot of cases quickly, and had an easy entrypoint, though.)

                                              1. 6

                                                I could write that into the debugger N times (cuz in the process I’m calling this code path N times), or I can just write the code once and run some overall script.

                                                You don’t have to do that though. GDB lets you run commands when breakpoints are hit, e.g.

                                                commands 5
                                                print myvar
                                                continue
                                                end
                                                

                                                Will make gdb print the content of myvar every time breakpoint 5 is hit and resume execution until the next breakpoint (which might be breakpoint 5, which will activate the command again, or another breakpoint, which will then stop execution).

                                                Debugger-debugging is a strict superset of print-debugging.
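
                                                As another example, GDB’s dprintf plants a printf-style statement without editing or rebuilding the code (a sketch; the file, line and variables are made up):

                                                (gdb) dprintf parser.c:120,"state=%d input=%s\n",state,input
                                                (gdb) run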

                                          2. 4

                                            I strongly agree.

                                            Print debugging is awful for me, and it feels like Stockholm Syndrome. The debugging tools often either aren’t good enough, or are a pain to use. That said, I think what may be more common is simply that programmers/engineers haven’t spent the time to learn them (so they may not be as hard as feared), and print is bearable/good enough for what they’re doing.

                                            A comparison may be a REPL development loop versus a slower compile-and-run loop.

                                            1. 3

                                              Places where print debugging still reigns supreme…

                                              • Post-fact debugging of production code failures (aka logging).

                                              • Real-time behaviour where sitting on a breakpoint destroys what you are trying to debug.

                                              • Refactoring. Refactoring should not change behaviour… diffing the logs can instantly show where it has.

                                              1. 1

                                                Record-and-replay debuggers can debug real-time behaviour just fine; they don’t sit on a breakpoint.

                                              2. 3

                                                Once you buy into omniscient debugging, a world of riches opens to you. For example, omniscient debuggers like Pernosco let you track dataflow backwards in time, a debugging superpower print debugging can’t touch.

                                                Pernosco can trace dataflow backwards in time, from where a variable has an incorrect value back to where that variable was set — instantly. Clicking on a displayed value opens a “dataflow” view showing where the value came from.

                                                Purely functional transformation languages such as XSLT have had this ability for a long time. I miss being able to click (in oXygen XML) on an element in an XSLT file and see the source data and the intermediate transformations it went through.

                                                I look forward to being able to do this again with a (FLOSS) Pernosco-for-Ruby.

                                                1. 3

                                                  Sometimes it just gives me precisely what I want. There is a lot of talk about which approach is somehow better, but to me it’s a decision more like which algorithm to use than which one to be a fan of. Printing is amazing when I have lists of something (actions, or the spreadsheet I was looking into parsing today), while other approaches emphasize state.

                                                  1. 3

                                                    Legit curiosity… do “they” even teach debugging?

                                                    I ask because, while I prefer printf, I recently used a data watchpoint (under a “real” debugger) to find a bug. When I explained it to colleagues, they asked “what’s a watchpoint?”

                                                    Just a datapoint for those wondering why I prefer printf… the answer is simply that it doesn’t have any of the friction involved with better tools (which in the embedded space can be flaky, unreliable or outright misleading).
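
                                                    For anyone in the same boat as those colleagues, a minimal GDB sketch (the variable name is made up):

                                                    (gdb) watch g_state.flags    # stop when the value changes (hardware watchpoint where possible)
                                                    (gdb) rwatch g_state.flags   # or stop on reads
                                                    (gdb) continue               # gdb reports the old and new values when it fires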

                                                    1. 1

                                                      I went to school for CS for 3 years and not once in that time was using a debugger ever taught. My CS101 professor honestly probably didn’t know how to use one, and most other professors considered squashing bugs an “implementation detail”.

                                                    2. 3

                                                      “There are many reasons why print debugging is still the best option for many developers. rr, Pernosco and similar tools can’t even be used at all in many contexts.”

                                                      So no. Print debugging should not go away.

                                                      1. 4

                                                        Print debugging should go away whenever a better option is available, and we should invest more resources so that a better option is available everywhere. It doesn’t seem that controversial.

                                                        1. 1

                                                          The article explicitly states that it shouldn’t, just a few sentences later:

                                                          invested accordingly we could make enormous strides, and not many people would feel the need to resort to print debugging.

                                                          …so it’s pretty clear that “should go away” means “should become much less popular”, not that it should disappear entirely.

                                                        2. 2

                                                          I look at print debugging and conventional debuggers as serving different purposes on different axes: printing shows you a cut across time at the same point(s) in the stack; step debugging shows you a cut down your stack at point(s) in time. (I once made this observation to Manishearth, who joked that they are Fourier inverses…)

                                                          The article shows some videos of how debuggers can be made clever enough to do a bit of both. The specific example feels contrived: rarely am I looking at a single line of code and wondering “how did the state vary each time I hit this line?” More often I have a bunch of related functions and I want to look at what order they were hit, with certain states, all in chronological order so I can scroll through and think. I can imagine a debugger that would let me do exactly what I just described, but the process of programming it would be nearly the same thing as inserting print statements in the original code.

                                                          1. 3

                                                            More often I have a bunch of related functions and I want to look at what order they were hit, with certain states, all in chronological order so I can scroll through and think.

                                                            Pernosco lets you do this: you can get a list of all the times a function was called and what state it was called with, bookmark the ones that are interesting, repeat for as many functions as you are interested in, and end up with a nice list of everything you need.

                                                          2. 1

                                                            I use a debugger to do printf debugging. It’s nice. You can start with an interactive breakpoint and upgrade to printing. You can print things like registers and stack traces that can be harder/more awkward to do in code. You can toggle particular prints on and off at runtime. You can trace calls to a function, etc. You can send the output to a separate log. I very rarely single-step interactively. You can do this with gdb or windbg; I haven’t used lldb in anger but imagine it is programmable as well.

                                                            Automated debugging is a powerful technique, and the automation means you can more safely use it in production, since it’s not responding at human scale.
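
                                                            A sketch of that workflow in gdb (the function name is made up; the set logging syntax varies a little between gdb versions):

                                                            (gdb) set logging file debug.log
                                                            (gdb) set logging on        # mirror debugger output to debug.log
                                                            (gdb) break handle_event
                                                            (gdb) commands
                                                            > silent
                                                            > bt 3                      # a short stack trace
                                                            > info registers            # register state, awkward to print from code
                                                            > continue
                                                            > end
                                                            (gdb) run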

                                                            1. 2

                                                              Do you consider logging to be a form of automated debugging? I’ve found that adding what many feel to be “too much” logging to production systems has made solving and repairing many bugs much easier.

                                                            2. 1

                                                              I don’t think there are that many people who think that printf debugging is always better than debugging with a debugger. It’s more that when I’m trying to debug and I set a breakpoint, and then the debugger doesn’t stop for some reason (relocatable code, a debugger bug, some weird security feature that prevents it), I can just fall back to printf debugging and work on the actual problem, instead of first having to fix whatever is wrong with the debugger.

                                                              Or, the problem I have pretty often is that lldb on macOS doesn’t always work when stepping over to another instruction (step instruction simply runs the program and doesn’t stop) when debugging code without debug info that was dynamically loaded by a custom code loader. In most cases I prefer using printf, and sometimes I just insert an ‘int 3’ to force a break into the debugger, because mostly the debugger simply doesn’t work as advertised, and in the cases where it actually works, I can’t use breakpoints because the offsets are different every time I execute the code.

                                                              From my experience, debuggers work only in pretty straightforward cases where the source code is available, debug info is available, and we’re running the program on the same machine we’re developing on. More complicated scenarios, starting with remote debugging (an old but still clunky concept), show that most debuggers are actually pretty limited in their functionality, and printf debugging is simply faster to set up and use.

                                                              1. 1

                                                                The use of “should” should go away; the presumption that one’s insights best those of others is the hallmark of collective stagnation.

                                                                1. 2

                                                                  It’s very … a certain kind of developer, isn’t it.