1. 33

  2. 29

    I think one of the biggest “secrets” is the debugging technique or method. It is kind of similar to the scientific method (hypothesis, test, evaluation of results), but it is never explicitly described; it is just something you have to pick up as you go. That is really what separates those who can program anything from those who can’t.

    1. 13

      I would agree with this, and specifically with the part about generating hypotheses about what might be wrong, testing and evaluating them specifically, and, if they’re wrong, rejecting them and coming up with new ones. It’s hard, without experience, to generate hypotheses when there are no hints and nobody who knows more than you about the problem. It also seems to be hard to be objective about your hypothesis, to seek to prove whether it’s right or wrong, and to reject it if it’s wrong. It’s these things that you can’t learn by reading about them.

      1. 8

        It is frustrating.

        We have an entire Internet or two of “programming tutorials” that frequently leave out all of the problems and mistakes the author made while writing them, perhaps believing this makes them seem like less of an expert. I’d like to see more things like this gem (see the mistakes at the bottom).

        We also have a computer science curriculum which still seems to pretend (at least at the beginning) that all instructions are equal and memory is fast, and which “teaches” binary trees and probed hash tables as “data structures”. But whatever. How do you know your tree hasn’t degenerated into a linked list? Debugging, whether by single-stepping, mental simulation of an algorithm, or printf, seems remarkably absent from any CS curriculum.
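
        To make the degenerate-tree point concrete: a textbook binary search tree fed already-sorted keys collapses into what is effectively a linked list, and a one-line printf of its depth is exactly the sort of cheap check that rarely gets taught. A minimal sketch of my own (not from any course material):

        /* Sketch: a naive BST degenerates into a linked list when fed
           sorted keys; printing the depth makes it obvious. */
        #include <stdio.h>
        #include <stdlib.h>

        struct node { int key; struct node *left, *right; };

        static struct node *insert(struct node *n, int key) {
            if (n == NULL) {
                n = calloc(1, sizeof *n);   /* error handling omitted for brevity */
                n->key = key;
            } else if (key < n->key) {
                n->left = insert(n->left, key);
            } else {
                n->right = insert(n->right, key);
            }
            return n;
        }

        static int depth(const struct node *n) {
            if (n == NULL) return 0;
            int l = depth(n->left), r = depth(n->right);
            return 1 + (l > r ? l : r);
        }

        int main(void) {
            struct node *root = NULL;
            for (int i = 0; i < 1000; i++)        /* sorted keys: the worst case */
                root = insert(root, i);
            printf("depth = %d\n", depth(root));  /* prints 1000, not the ~10 of a balanced tree */
            return 0;
        }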

        However, I don’t think the scientific method is quite right here. Peirce believed that too much rigour (or, as he put it, “stumbling ratiocination”) was inferior to sentiment, and that the scientific method was best suited to theoretical research. For more on this subject, see the Pragmatic theory of Truth, but a little background for my argument should be enough: Peirce outlined four methods of settling an opinion:

        1. Tenacity: sticking to one’s initial belief brings comfort and decisiveness, but ignores contrary information.
        2. Authority: which overcomes disagreements, but sometimes brutally.
        3. The a priori method: which promotes conformity less brutally, but fosters opinions as something like tastes. It depends on fashion in paradigms, and while it is more intellectual and respectable, it sustains accidental and capricious beliefs.
        4. The scientific method: which obviously excels the others by being deliberately designed to arrive, eventually, at the most secure beliefs, upon which the most successful practices can be based.

        Now we still see a lot of “programming wisdom and lore” which people follow because they always have, or because some blog said so. I’d argue syntax-highlighting and oh-my-zsh are fashionable, and I’d laugh at anyone who believed that the scientific method could demonstrate these tools are ideal.

        So what then? Well, it means we have pseudo-science in our programming.

        It’s for this reason that I maintain that we (as a society) don’t know how to program computers, let alone teach anyone how to program (and therefore how to debug). I predict this will mature over the next couple hundred years or so, but I don’t expect anyone in my lifetime to be able to teach programming itself the way, for example, we can teach bridge-building.

        1. 1

          “I don’t expect anyone in my lifetime to be able to teach programming itself, the way, for example, we can teach bridge-building.”

          We’ve been doing it for a while, if you keep the structure simple. The first link below is an iterative, low-cost method for doing that which combines pieces like Lego blocks; even students get a low defect rate on quite-maintainable code. The second adds formal specifications and verifiable code to drive predictability up and defects further down. The third, combined with error-handling techniques and automated testing, is pretty good at dealing with the stuff too complex for the rest. So, I’d say we can do quite a bit of what you describe; it’s mostly just not applied.

          http://infohost.nmt.edu/~al/cseet-paper.html

          http://www.anthonyhall.org/c_by_c_secure_system.pdf

          http://www.eiffel.com/developers/design_by_contract_in_detail.html
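
          For flavour, the contract idea from the third link can be approximated even in plain C with assertion-checked preconditions and postconditions. This is only a rough sketch of my own (the function and its contract are invented), not anything from those papers:

          /* Contract-style sketch: dst and src are non-NULL and dst_size is
             non-zero (preconditions); on return, dst holds a NUL-terminated
             copy that fits (postcondition). */
          #include <assert.h>
          #include <string.h>

          static size_t copy_string(char *dst, size_t dst_size, const char *src) {
              assert(dst != NULL && src != NULL);    /* preconditions */
              assert(dst_size > 0);

              size_t n = strlen(src);
              if (n >= dst_size)
                  n = dst_size - 1;                  /* truncate rather than overflow */
              memcpy(dst, src, n);
              dst[n] = '\0';

              assert(strlen(dst) < dst_size);        /* postcondition */
              return n;
          }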

          1. 2

            I’m not sure I understand what you’re saying. Maybe you don’t understand what I’m saying.

            Cleanroom doesn’t help a programmer understand what’s wrong with:

            memcpy(a, b, c*d);
            

            All of these examples advise avoiding bugs in the first place, which is sensible advice for the novice, for sure, but how exactly does that teach us how to debug programs?
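
            To be concrete about why that one line is hard: assuming c and d are sizes that came from somewhere the programmer doesn’t control (my assumption, for illustration), the product can silently wrap, the destination can be too small, or the two buffers can overlap, and nothing in the line itself says which suspect it is. A tiny sketch of the first suspect, with made-up values:

            /* Sketch: a 32-bit element count times element size wraps before
               memcpy ever sees it, so far less data moves than was "asked" for. */
            #include <stdint.h>
            #include <stdio.h>

            int main(void) {
                uint32_t c = 0x10000001u;   /* element count, say from a file header */
                uint32_t d = 16;            /* element size in bytes */
                uint32_t n = c * d;         /* wraps modulo 2^32 */
                printf("asked for %u * %u bytes, memcpy would see %u\n",
                       (unsigned)c, (unsigned)d, (unsigned)n);   /* ...would see 16 */
                return 0;
            }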

            What constitutes a bug in the first place? Eiffel sounds great not having any bugs in it, but what exactly does that mean?

            Is “DISK 0K” a bug? There’s a wonderful story of tech support getting a report along the lines of “I’m getting an error message about my disk being full, but the computer says it’s OK.”

            If you’re saving a big (multi-gigabyte) CAD drawing to disk and run out of space ten minutes in, should the system generate an error, telling you to quit and delete some files and try again later? We use multitasking systems, so why not pause and give the user the option to retry or fail? They can delete some files if they want to…

            What about an email server? What if it runs out of disk space? Should it reject the message? Could we still pause and let the client time out while we page the sysadmin/operator?
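
            The pause-and-retry idea isn’t exotic, either. A minimal sketch (mine, assuming a POSIX write loop) that refuses to treat a full disk as fatal:

            /* Sketch: keep the operation alive on a full disk instead of
               failing it, giving a human time to free some space. */
            #include <errno.h>
            #include <stdio.h>
            #include <unistd.h>

            static int write_all(int fd, const char *buf, size_t len) {
                while (len > 0) {
                    ssize_t n = write(fd, buf, len);
                    if (n < 0 && errno == EINTR)
                        continue;                     /* interrupted: just retry */
                    if (n < 0 && errno == ENOSPC) {   /* full disk: pause, don't die */
                        fprintf(stderr, "disk full; waiting for space...\n");
                        sleep(30);                    /* time to page an operator or delete files */
                        continue;
                    }
                    if (n <= 0)
                        return -1;                    /* genuinely fatal */
                    buf += n;
                    len -= (size_t)n;
                }
                return 0;
            }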

            Writing software is in part being able to say what you mean (implement), and in part being able to mean what the business means (specify), but it’s also clearly (still) a matter of taste, because we don’t have good science to point to that gives us the answers to these questions.

            In contrast, have you seen bridge engineering handbooks? Pretty much every consideration you might have when you need to build a bridge is documented and well researched in a way that makes software development professionals look like lego builders.

        2. 2

          Two great talks on the topic of debugging:

          Stu Halloway on Debugging with the Scientific Method

          Brian Cantrill on Debugging Under Fire: Keep your Head when Systems have Lost their Mind

          What I find interesting is that, despite their different backgrounds and presentation styles, their advice has a lot of similarity.

        3. 7

          Intuition about where problems might be, or about what code might cause them, is something I think comes from experience. After a lot of years you’ve just seen a lot of bugs and know the kind of thing that causes them.

          1. 5

            People in the high-assurance and embedded communities probably do. Maybe some in gaming, too. In these fields, a lot of important information is unpublished, siloed in places that might be hard to find, or yet to be discovered. Experienced practitioners pick up techniques that would surprise people who haven’t had much experience. It’s one of the reasons I like reading Ganssle’s The Embedded Muse even though I’m not an embedded developer. In the past, I learned about lots of optimizations from game programmers, too.

            http://www.ganssle.com/tem/tem333.html

            http://www.ganssle.com/tem-subunsub.html

            Here are two examples: one covers how they safely do updates, and the other uses an analogue multimeter to find where programs are getting stuck in loops on systems with little instrumentation. Being a software developer, I’d never have thought of the latter.

            http://www.ganssle.com/tem/tem288.html

            http://www.ganssle.com/tem/tem310.html

            1. 3

              “uses an analogue multimeter to find where programs are getting stuck in loops on systems with little instrumentation”

              Oscilloscopes are much more useful for that. They provide a screenful of history with zoom, good time resolution (nanoseconds, typically), multiple inputs (about 4 analogue and 8 to 16 digital), and triggers (stop scanning after a particular pin is pulled up/down). I’ve used scopes to debug timing issues, events, and CPU wake-up procedures, and to look at oscillator start-up times. A quick web search brings up this short intro.

              Another useful tool is a logic analyser. I’ve used Saleae Logic, which comes with software that understands wire protocols (e.g., UART, SPI, I²C), has more history than a scope and can save the data in various text-based formats. Other popular ones are Bus Pirate and GoodFET.

              One downside of these tools is the price. While a basic multimeter can cost around 20€, a Saleae Logic 8 is around 200€, and a reasonable scope will set you back several thousand.

              1. 1

                Thanks for the tips. Several people who do embedded work have told me I should look into a scope or logic analyzer for debugging. One said digital logic analyzers are easy to learn. Do you have a good intro or book recommendation for learning how to use these things?

                1. 2

                  I don’t have any particular recommendations, but Saleae Logic software is quite intuitive. You connect the ground pin to the ground on the board and other pins to the lines you want to scan, choose the bandwidth and start scanning. Then you can assign functions to specific pins (e.g., pin 0 SPI data, pin 1 SPI clock) and parse the protocol. It also helps to read up on the protocol in question beforehand; Wikipedia is usually a good start. The harder part is knowing what you want to scan and why.

                  And to debug software, just choose some unused GPIO pins (easier on development boards that have all pins exposed than on real hardware), set them as output and pull them high or low at specific points in your code. (In the link you provided, these are lines like “RA4 = 0;”, but APIs differ.)
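
                  As a concrete sketch of that trick (every name here is invented; substitute your vendor’s GPIO register or API):

                  /* Hypothetical sketch: PORTA and the helper functions stand in
                     for whatever your platform provides ("RA4 = 1;" on that PIC,
                     for example). Bracket the suspect code, then watch the pin. */
                  #include <stdint.h>

                  extern volatile uint8_t PORTA;        /* invented GPIO register      */
                  extern int  queue_not_empty(void);    /* stand-ins for your own code */
                  extern void handle_next_sample(void);

                  #define DEBUG_PIN_HIGH()  (PORTA |= (uint8_t)(1u << 4))
                  #define DEBUG_PIN_LOW()   (PORTA &= (uint8_t)~(1u << 4))

                  void process_sensor_queue(void) {
                      DEBUG_PIN_HIGH();                 /* pulse starts: entering the loop */
                      while (queue_not_empty())
                          handle_next_sample();
                      DEBUG_PIN_LOW();                  /* pulse ends: loop finished       */
                  }
                  /* On the scope or logic analyser: a pin stuck high means the loop
                     never exits, and the pulse width is the loop's execution time. */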

                  Using oscilloscopes is a bit more tricky. I learned how to use them from hardware people I work[ed] with, so I don’t have any pointers here either.

                  1. 1

                    I appreciate the tips.

            2. 5

              “Third, the best way to get better at writing code is to write code.”

              I would recommend reading over writing. Reviewing other people’s diffs can be enlightening. Good team players write great diffs (or make great commits), and I’ve rarely seen this covered in programming books.

              1. 2

                I’ll add that software spends most of its time in maintenance mode. Also, most of the most important software is already in maintenance mode when you get there. Being able to read and refactor code will be a valuable skill for those reasons. Plus, the techniques for writing new code well versus fixing old code are usually quite different, both in the thinking style behind them and in how they’re applied. More people reading code might lead to more people making the tools good for maintaining or fixing it. I think this has already happened with larger projects, which build a lot of that tooling as a side effect just to make the job easier.

              2. 3

                Another thing (at least for me) – besides debugging, which was already mentioned – is defensive programming. I’ve worked on a number of libraries written in C, and at some point you develop a sense for when you need to ask yourself “how could this go wrong?”.

                I know unit testing should solve most of these problems, but I think fuzzing has thoroughly shown this is not the case. Experience can help you predict at least some ways in which code could break. I guess to some extent that experience is related to that of debugging these same issues.
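
                A made-up but typical example of the kind of question that sense asks at a C library’s API edge (all names here are invented):

                /* Defensive sketch: don't trust the caller's pointers, counts,
                   or size arithmetic at the library boundary. */
                #include <stdint.h>
                #include <stdlib.h>

                struct buffer { unsigned char *data; size_t len; };

                /* Returns NULL instead of trusting the caller's arithmetic. */
                struct buffer *buffer_new(size_t count, size_t elem_size) {
                    if (count == 0 || elem_size == 0)
                        return NULL;                      /* nonsense request            */
                    if (count > SIZE_MAX / elem_size)
                        return NULL;                      /* count * elem_size would wrap */

                    struct buffer *b = malloc(sizeof *b);
                    if (b == NULL)
                        return NULL;
                    b->len  = count * elem_size;
                    b->data = calloc(count, elem_size);   /* calloc also checks overflow */
                    if (b->data == NULL) {
                        free(b);
                        return NULL;
                    }
                    return b;
                }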

                1. [Comment removed by author]

                  1. 2

                    At work you may only get to write Java server code, say, but at home you can write Haskell, C++, machine learning, games, Pi projects, a custom Yocto OS, etc. Even within the same language, you might only get to write C++98 at work, but at home you can rule the roost with C++17.

                    Working on projects at home isn’t a necessity, but not working at home can be an anti-pattern if you are also not doing anything exciting at work, not learning anything new, or stuck in the past.

                  2. 2

                    This is at least partially the “craft” part of programming. There’s a special sense and taste that you develop after years of programming that you cannot put into words and cannot learn from books.

                    It’s not at all unique to programming, either. It’s like becoming a kung fu master or a master chef, things like that.

                    1. 2

                      I do not think that it can’t be put into words, but it is devilishly hard to do so. It’s also very worthwhile. I often find that if I fight to get one of those hard-to-describe things onto paper, I learn something very valuable in the process, and I come to understand better what it is I subconsciously know.

                    2. 2

                      The balance between inflexible and overengineered.

                      Choosing a programming language, framework, or library.

                      When to use which error mechanism, like return code vs assert vs exception vs panic vs option type.

                      Knowing about the layers under/inside your platform. For example, knowledge about garbage collection if you work in Java or Linux knowledge if you work in C.
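
                      To make the error-mechanism example above concrete, here is a tiny sketch of my own (names invented): assert() for conditions that can only be a bug in the caller, and a return code for failures the caller can legitimately hit and handle.

                      /* Sketch: assert() for "this is a bug in the caller",
                         a return code for "this can happen in normal operation". */
                      #include <assert.h>
                      #include <errno.h>
                      #include <stdlib.h>

                      /* Invented example: parse a port number from a config string. */
                      int parse_port(const char *text, int *out) {
                          assert(text != NULL && out != NULL);  /* API misuse: fail loudly in debug builds */

                          char *end = NULL;
                          long  val = strtol(text, &end, 10);
                          if (end == text || *end != '\0' || val < 1 || val > 65535)
                              return -EINVAL;                   /* bad input: caller decides what to do */
                          *out = (int)val;
                          return 0;
                      }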

                      1. 2

                        Do you work in a team?

                        Do you end up cleaning up other programmers’ mistakes?

                        Then stop keeping secrets and give everybody a leg up…

                        …you will often find that when you do, someone will find (or already knows) a way to improve your “secret”.

                        1. 2

                          Of course, most old coders’ “secrets” are secret because… they’re embarrassing!

                          http://www.dodgycoder.net/2012/02/coding-tricks-of-game-developers.html

                          1. 1

                            Having a broad familiarity with concepts, protocols, and applications is something that can only come with experience. It makes a big difference when dealing with a problem if large parts of it are not new, versus everything being new and unknown.

                            An example would be when someone says they can’t get an IP address, and your mind can jump back to all the past times you have configured DHCP, debugged networks, and so on, to draw on past solutions.

                            1. 0

                              So I guess the answer is “no”.

                              1. 4

                                No to the ‘secret’ part, but yes to the [frequently] ‘learned only by experience’ part.