1. 31
  1.  

  2. 18

    the answer is easy[1]: delete from issuetracker.tickets.

    No matter the reason - packrat PMs, slow developers, business demands coming in without pause - the data in the issue tracker is worthless as soon as it has aged three months without anyone caring.

    The best course of action seems to be to clear the deck and start anew. At least that way the indecision paralysis from having an infinite pile of issues won’t bite, and it’ll be easier to refocus the business on solving the big / immediate issues.


    [1] but may not be necessary or sufficient to solve the problem

    1. 7

      I’m the author of the essay. You’re spot on, and that’s both what I did in the real project, and the topic of the next one I’ll be writing.

      The tricky part is getting buy-in, of course. But understanding what the benefits are of taking this approach definitely helps, and knowing how to execute it carefully and make good use of the feedback it gives you… is pretty important. (Although executing it poorly is probably better than not doing it at all)

      1. 9

        Looking forward to your next one!

        Yeah, I can imagine that “easy” undersells the difficulty a bit. I’ve never managed to convince anyone to ditch the death tracker, but I’ve tried. Nothing makes me want to use the ticket system less than that graph.

        At my last job I was lucky enough to be in a position outside of the standard product slog (release toolchain/platform development), and our team decided to go #NoBacklog - we figured that, rather than bury tasks in a bucket never to be seen again, we’d do one of three things:

        • solve the problem immediately, even if it meant popping something else off our stack
        • reject the idea right away
        • postpone it until enough momentum lined up behind it

        The last one was the nicest - if the devs wanted something bad enough, we figured telling them “put a bug in the tracker” was counterproductive; instead, saying “flesh the feature out with your team lead & a few senior devs and bring it back to us” was more likely to see us work on useful ideas. And it worked very well - the senior devs were a great filter, and the ideas they had were much easier to implement.

        This was definitely an artifact of having a small team work on fairly stable tooling and having only indirect business pressure, so I’d be crazy to claim that it’s some universally workable approach, but it sure was liberating to… just not worry about the constant feature request noise.

      2. 2

        What problem is deleting the information solving? The graph shows that issues are outpacing fixes; deleting the information doesn’t change that.

        1. 4

          It saves lots of time in the “triage” discussions that often happen as a result of having thousands of open issues. I remember spending many, many hours going through bugs that were still valid, only to defer them to a later release, and then defer them again the next time around. Admitting that they’re simply not going to be fixed is difficult, but definitely results in a weight being removed from one’s shoulders.

          1. 2

            That isn’t what the graph is showing, though; the graph shows that the slope of bugs is higher than the slope of fixes. Resetting it doesn’t mean the slopes are now the same. Deleting that information doesn’t fix the problem.

            1. 4

              I didn’t say it fixed the underlying problem. But in my experience, when there’s already an overwhelming number of issues, getting rid of the ones that history has shown will simply not get fixed is a good first step, a kind of “issue bankruptcy” that can allow folks to focus on what’s going on right now. Otherwise, getting 100 new issues filed in a week simply becomes more of the same.

              There’s still the underlying problem of why there are so many new problems being discovered. It could be that they’re not really new, but that the issue-creation process is part of the problem (not enough vetting to ensure they’re actually new).

          2. 1

            I will be writing about this in the follow-up essay, but briefly…

            If you mark every ticket as stale and stop attempting to prioritize them or even review them, you can focus just on the new incoming requests and figure out what is happening.

            You take a look at the output (close rate) and the composition of the closed tickets (are they bug fixes? features? chores?) and use this as a guide to how much new work can reasonably be done in a given time period.
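
            A minimal sketch of that bookkeeping, assuming the tickets have already been exported as dicts with hypothetical closed_at (a datetime) and type fields - not any particular tracker’s schema:

                from collections import Counter
                from datetime import datetime, timedelta

                def close_stats(tickets, weeks=4):
                    """Close rate (tickets per week) and mix of ticket types over the last few weeks."""
                    cutoff = datetime.now() - timedelta(weeks=weeks)
                    closed = [t for t in tickets
                              if t.get("closed_at") and t["closed_at"] >= cutoff]
                    rate = len(closed) / weeks                 # output: tickets closed per week
                    mix = Counter(t["type"] for t in closed)   # e.g. Counter({"bug": 19, "feature": 4, "chore": 2})
                    return rate, mix

            Whatever the numbers turn out to be, that observed rate - not the size of the backlog - becomes the budget for new work.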

            You begin to prioritize new work based on that information, and if you’re working solely with new reports, you know by definition the stuff you’re working on is still relevant.

            In practice, you’ll see bugs clustering around areas with high defect density, and also feature requests that move unreasonably slowly because they’re being implemented on top of specific components that are very difficult to work with.
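
            One way to surface those hotspots, assuming each new report carries a hypothetical component field:

                from collections import Counter

                def hotspots(new_reports, top=5):
                    """Tally recent bug reports per component to find the high-defect areas."""
                    bugs = (r["component"] for r in new_reports if r["type"] == "bug")
                    return Counter(bugs).most_common(top)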

            As you attack these areas, the close rate will go up, and because scheduling has been shifted to be limited by the outputs and not the inputs, the two lines will get closer to parallel.

            The most important effect here is to prevent the accumulation of things that have been discussed, planned, attempted, etc. but not shipped. In the actual team I was working with, there was a lot of work-in-progress code that had time sunk into it but then got put down every time there was a new emergency. Knowledge decays every time you do this, creating more defects when things do ship, and then resulting in a graph like the one in the essay.

            1. 2

              How often have you actually seen teams follow this through to the end? While I agree that at some point you have to make a clean break, what you’re proposing just seems complicated and unlikely to be followed through with, especially in an organization that has already let things get so bad. IME, the most effective way to bring up a team’s productivity is to get some fresh, experienced blood into it.

              1. 3

                As an outside consultant, I’ve pulled this off a couple times before. And as an open source maintainer of a project with tons of users and a hundred contributors, I’ve done it as well (PrawnPDF).

                But this is a really tricky problem for sure. You need trust, you need experience, and you need to be able to move very quickly at an organizational level to make this sort of change. And though this is the first step, there are a hundred things you need to do before it’ll work, and a hundred things you need to do after.

                I specialize in working with organizations that have, for some reason or another, gotten themselves into deep trouble. I’m hoping to be able to help others figure out how to do that, because even an improvement from a 1% success rate in dealing with this sort of crisis to a 10% success rate industry-wide would be a huge win.

                Expect to see more specific details in the next essay, and more generally in all the essays I’ll be writing over the coming months. In my experience many developers know what’s wrong and even how to fix it, but don’t have the necessary experience to justify things in clear business terms – and that’s what is needed to make big changes.

                I want to see if I can help with that.

          3. 2

            In a previous job I used to periodically close all tickets with no activity in 3 months. This was at a bank and “first to market” was very important to the traders we supported. When they made a feature request they often wanted it right now or not at all. The things they did want they would continually pester us about, but they never bothered telling us if something on the backlog was no longer important.
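
            Scripted, that policy might look roughly like this - a sketch against a made-up REST tracker, where the endpoint and field names are assumptions rather than any real product’s API:

                import requests
                from datetime import datetime, timedelta, timezone

                TRACKER = "https://tracker.example.com/api"        # hypothetical endpoint
                cutoff = datetime.now(timezone.utc) - timedelta(days=90)

                # Close every open ticket with no activity in the last 90 days.
                # Assumes updated_at is an ISO-8601 timestamp with an offset, e.g. "2024-01-31T12:00:00+00:00".
                for ticket in requests.get(f"{TRACKER}/tickets", params={"state": "open"}).json():
                    if datetime.fromisoformat(ticket["updated_at"]) < cutoff:
                        requests.post(f"{TRACKER}/tickets/{ticket['id']}/close",
                                      json={"comment": "No activity in 3 months; reopen if still needed."})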