1. 23
    1. 20

      I agree with the thrust of this, but the choice of the troubleshooting example is pretty unfortunate, I think.

      > Two engineers separately attempt to identify the root cause of the issue, but are unable to find the source of the problem after a week of investigation. A third engineer is brought in and is able to identify and resolve the issue within a few days.
      >
      > […]
      >
      > It may make more sense to ask the third engineer to train the other engineers on their issue resolution methodology.

      I’ve been that third engineer on multiple occasions and the problem is, in a lot of cases I’m not doing anything I could train anyone else to do even if they asked. In some cases even I don’t understand how I figured a problem out.

      “Train someone else to resolve issues the way you do” presupposes that I have a well-defined checklist in my head that I methodically proceed through until I get to the step that resolves the problem. For simple problems, that’s kind of true, and I can tell people what I’m doing. But for the not-so-simple kinds of problems that would cause another team to ask for my help in the first place, it’s more like a series of intuitively-guided attempts to match against different patterns of problems that have puzzled me over the course of my career. My intuition is usually decent and before long, I have an “aha” moment. Then the person I’m working with asks, “How the heck did you know that was it?” and I have no answer.

      If a manager comes to me and says, “Hey, can you take a couple weeks to train our team to pattern-match against your 35 years of professional experience?” it’s not going to be a useful exercise.

      1. 8

        I think it can depend on the specifics. Sometimes when I find bugs, it’s something I can’t communicate to other people. In other cases, it’s because I knew a tool or approach that other people didn’t. Like in one case I solved something quickly because I was the only person who knew how to load old database snapshots. Showing other people how to do that meant they were able to solve similar issues quickly, too.

        People also gather information differently, and watching how a more senior person gathers information can teach you useful tricks. Think things like git bisect, writing testing scripts, using a debugger, even recording theories in a notebook.
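        The `git bisect` mention is a good example of a teachable trick, because `git bisect run` can automate the whole search if you hand it a script that exits 0 when the code is good and 1 when the bug is present. A minimal sketch in Python, with a made-up `parse_price` function standing in for the code under suspicion (in a real bisect the script would import it from the checked-out tree):

```python
import sys

def parse_price(text):
    """Stand-in for the function under suspicion. In a real bisect this
    script would import it from the checked-out working tree instead."""
    return round(float(text.strip("$")), 2)

def bug_is_present():
    # The regression we're hunting: the price no longer parses correctly.
    return parse_price("$19.99") != 19.99

if __name__ == "__main__":
    # git bisect interprets exit 0 as "good", 1-124 as "bad",
    # and 125 as "skip this commit". Driven by:
    #   git bisect start <bad-commit> <good-commit>
    #   git bisect run python check.py
    sys.exit(1 if bug_is_present() else 0)
```

        The payoff of teaching this shape is that the junior engineer's job reduces to writing the check; git does the searching.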

        1. 4

          > In other cases, it’s because I knew a tool or approach that other people didn’t.

          Over the years I’ve seen many cases of a bug taking a long time to solve because of a lack of knowledge of tools that would have made things easier. There are two main aspects to this that I’ve noticed:

          1. The person doesn’t know how to use the tool effectively, perhaps for lack of knowledge of a particular option or configuration parameter.
          2. The person does not know about the tool at all.

          In the second case, it is usually accompanied by an ignorance of the questions that the tool opens up to you. If you don’t know that you can attach to a running process to debug it, for example, then you tend not to ask questions about dynamic analysis or profiling. (Admittedly, this is a bit of a crude case, but the point stands.)

          Teaching the first point isn’t so hard. Teaching the second one can be a bit trickier.
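          On the “attach to a running process” point, one low-ceremony version of this lives in Python’s standard library and is easy to show someone: `faulthandler` can be armed so that a signal sent to a live process dumps every thread’s stack without stopping it. A sketch (Unix-only; the self-`kill` below just simulates running `kill -USR1 <pid>` from a shell):

```python
import faulthandler
import os
import signal
import tempfile

# Arm the handler: from now on, sending SIGUSR1 to this process makes it
# dump every thread's stack trace without stopping. In real use you'd
# register this once at service startup and then, from a shell, run:
#   kill -USR1 <pid>
log = tempfile.TemporaryFile(mode="w+")
faulthandler.register(signal.SIGUSR1, file=log)

os.kill(os.getpid(), signal.SIGUSR1)  # stand-in for the external `kill`
log.seek(0)
dump = log.read()  # contains "Current thread 0x... (most recent call first):"
```

          Once someone has seen that this is possible, they start asking “what is the process doing *right now*?” questions they never would have asked before.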

        2. 3

          I tend to use the technique that Raymond Chen refers to as ‘psychic debugging’: from what I know of the codebase, think about how I could introduce the bug, then come up with an experiment that tests whether this is actually how the bug was introduced. The second half of this is teachable because it’s just a set of recipes. The first bit is probably something that you can learn only through experience, because it’s the same skill as introducing desirable features (and if we had a reliable way of teaching that, the software world would be a much better place).
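          The “experiment” half often amounts to a minimal test that can only come out one way if the hypothesis is right. A hypothetical sketch, with both functions invented for illustration:

```python
# Symptom: order totals are occasionally off by a cent.
# Hypothesis: the bug was introduced by summing raw floats instead of cents.
def total_floats(prices):
    # Suspected buggy version: accumulates binary floating-point error.
    return sum(prices)

def total_cents(prices):
    # Proposed fix: sum in integer cents, convert back once at the end.
    return sum(round(p * 100) for p in prices) / 100

# The experiment: find an input where the two implementations disagree.
# If one exists, the hypothesis about *how* the bug was introduced holds up.
prices = [0.1, 0.2]
hypothesis_confirmed = total_floats(prices) != total_cents(prices)
```

          Writing the experiment is the recipe part; knowing to suspect float accumulation in the first place is the experience part.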

          To the original example, it’s very possible that two people using this exact process would take different (but not reproducibly better or worse) amounts of time to find the same bug. I’ve spent days chasing bugs in the wrong place before, and I’ve also found bugs in minutes that other people spent days failing to find. It would be easy to cherry-pick one of those examples and say that I am amazing or terrible at debugging and should teach / learn from other people, but often they were following exactly the same process. In some cases, the slow and useless person was a tired me, and the fast person was a well-rested me.

      2. 3

        Thanks for the feedback.

        I used the example because it’s something that actually happened - the third engineer in the example actually asked me if it was okay for them to apply “go slow to move fast” and if we could accept slower resolution times in the short term to give them an opportunity to train others, and it turned out to be a great success for us.

        I can certainly agree that it’s not something to be applied in all situations, and I wouldn’t ask someone to do this if they thought it wouldn’t be productive!

      3. 2

        You can’t teach a mental model, you can only develop one yourself through years of experience and learning.

        1. 2

          You can absolutely accelerate the development of a mental model with good examples, exercises, drills, and targeted training! It’s just that that’s, uh, really hard to do and even harder to do at scale.

      4. 2

        > But for the not-so-simple kinds of problems that would cause another team to ask for my help in the first place, it’s more like a series of intuitively-guided attempts to match against different patterns of problems that have puzzled me over the course of my career. My intuition is usually decent and before long, I have an “aha” moment. Then the person I’m working with asks, “How the heck did you know that was it?” and I have no answer.

        This has happened to me more than a few times. My only advice to my team members is to read a lot and read broadly. Just because something doesn’t seem like it is relevant doesn’t mean it won’t come up. Additionally, it’s important to understand the underlying architecture of projects, the fundamentals of environments, or the supporting theory behind constructs. If you only understand the API then your debugging tools will be limited.

    2. 4

      I’ve found the idea of going slow to move fast to be true in software generally: for example, by fully learning an API and building isolated prototypes before writing production code that uses it, by learning how to use a debugger really well and setting it up in your environment instead of debugging via print statements, or by taking the time to learn the features of your editor (an ongoing process, with vim).

      I also feel that daily standup as commonly implemented in corporate settings is anathema to this, although I struggle to articulate why. I suppose it’s because it takes a static view of developers and how long it should take them to do some work, when maybe they could take twice as long on one piece of work while acquiring a skill, and then take 0.9 times as long on several subsequent pieces of work, with less burnout along the way. People will say this is an individual manager problem, and sure it is, but professional learning still tends to be treated as extracurricular in most of the places I’ve worked.

    3. 3

      Regarding the tech debt metaphor, while it’s true that tech debt ought to be intentional, I think it’s not helpful to reduce the usage of this term to only the intentional kind. Martin Fowler has a nice post on the Technical Debt Quadrant which includes “inadvertent” debt etc.

      In reality, tech debt can appear without you realising it very easily, and you are in the same situation as if you had deliberately chosen it. If you discover a new, far superior way of implementing something, you are suddenly in a tech debt situation. You have to decide “do I pay the upfront cost of a rewrite now to pay down the capital, or do I continue to pay the interest (i.e. unnecessary costs associated with the old way)?”. The debt metaphor can be used with any code that is less than ideal from the perspective of future changes.

      The same is actually true financially - you can own a building, which you suddenly discover has dry rot. Or, someone invents a new insulation technique that is being used on all new buildings, and could save you a lot of money in heating, but you’d have to invest to switch to it. Though not intentional, you do have a liability, and you have the same kinds of decisions to make about how to deal with it.
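      At its crudest, the “pay the capital now or keep paying interest” decision is a break-even calculation. A sketch with invented numbers:

```python
# All numbers are invented for illustration.
rewrite_cost = 10.0        # engineer-days to pay off the "capital" now
interest_per_month = 2.0   # extra engineer-days/month the old design costs

# The rewrite pays for itself once the interest you'd have kept paying
# exceeds what the rewrite cost.
breakeven_months = rewrite_cost / interest_per_month

def net_savings(months):
    """Engineer-days saved (negative: lost) by rewriting now vs. never."""
    return interest_per_month * months - rewrite_cost
```

      The hard part in practice is estimating the interest rate, not doing the division; that’s why the metaphor is useful even for debt you discover rather than choose.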

      1. 1

        I think this is where it goes off the rails: if someone sees a choice made by someone else as suboptimal (and it may well and truly be! But it might also just be a preference for a different tool, or a lack of deep understanding of the complexity of the problem the code is trying to solve), it counts as debt under this expanded definition.