1. 23

I’ve noticed that for most popular projects, public trackers degrade into graveyards with a rather low signal-to-noise ratio. As an example, I, as a maintainer of rust-analyzer, don’t find its issue tracker terribly useful for my day-to-day work. I use it mostly for logging new issues, not for looking at the existing ones :)

I also feel that both my personal knowledge and “common” knowledge about issue tracking are confused. We have a lot of tools in this area; they are all different and suggest various solutions, but, at the same time, I find it hard to understand what works and what doesn’t.

So, what should I read if my intuition and experience fail me? Two contexts I am primarily interested in are:

  • 2-10 people team working on a popular OSS project where the issue tracker gets a lot of requests from (thousands of) users.
  • 2-10 people team working on a niche project, where most issues are raised by team members themselves.
  2. 8

    I’ve struggled with an issue tracker in a company large enough to… have similar problems, and I think that the normal books about software development in companies ought to help. Just think of “the public” as a really large collection of teams that use your service, and file bugs for it.

    As I saw it, the main problems were related to the dual nature of the issue tracker. On the one hand it was where people outside the team registered things; on the other hand it was one of the team’s internal tools. So it was part of the interface and also part of the implementation, and the two roles posed different needs.

    For example, you may say that the implementation tracker should track only issues for which there will be something to track, i.e., things you won’t act on should be closed, no matter the reason. For example, our funding came from a few countries, and we had bugs that related only to other countries. But if the tracker is part of your interface towards the rest of the company (or the world), then it may be politically difficult to say “wontfix”, or worse.

    So on the day that you consider a new bug, closing or rejecting it may be politically inadvisable, or risk getting you into an extended discussion. If it’s in public, you may risk a shitstorm. But if you don’t close it, your issue tracker grows a tiny little bit worse in its capacity as an internal tool.

    I imagine this is worse for open source things, for which there is no lower limit to the quality of bug reports. I have seen “bugs” that were really design principles.

    1. 4

      > As I saw it, the main problems were related to the dual nature of the issue trackers.

      Backstory for the thread: I had exactly this epiphany a couple of days ago: https://github.com/rust-analyzer/rust-analyzer/issues/10593#issuecomment-947537009. And, given how obvious this is in retrospect, I was very surprised that I hadn’t already known it as a part of common wisdom.

    2. 5

      Never used it in anger, but I really enjoyed this: https://apenwarr.ca/log/20171213.

      1. 3

        For the “2-10 people team working on a popular OSS project where the issue tracker gets a lot of requests from (thousands of) users” context, I believe it’s common to have a rotating role on the team where a chunk of time is spent triaging the issues as they come in to keep things manageable.

        As one example that I know of personally, Element Web (a Matrix client) currently has about 10 new issues per day. They use various labelling and automation techniques to wrangle them as they come in.
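To give a flavour of what that kind of automation can look like (this is a minimal sketch of keyword-based auto-labelling, not Element Web’s actual setup — the keyword-to-label mapping and label names here are made up):

```python
# Hypothetical keyword -> label rules; real projects tune these over time.
TRIAGE_RULES = {
    "crash": "severity:crash",
    "panic": "severity:crash",
    "slow": "performance",
    "feature": "enhancement",
}

def suggest_labels(title: str, body: str = "") -> set[str]:
    """Return the set of labels whose keywords appear in the issue text."""
    text = f"{title} {body}".lower()
    return {label for kw, label in TRIAGE_RULES.items() if kw in text}

# A triage bot would apply these via the tracker's API (e.g. GitHub's
# "add labels to an issue" endpoint) and leave the unmatched issues
# for the humans on triage rotation.
print(suggest_labels("Editor crashes on startup"))  # {'severity:crash'}
```

The point of automation like this isn’t to triage perfectly; it’s to shrink the pile the rotating human triager has to look at.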

        As another related example from a larger team, VS Code’s issue tracking and triage processes are worth a look as well.

        One caveat is that the above are both examples of popular OSS projects where a team is working on them as a full time job as part of their employment at the main company managing the project. That may or may not be the meaning of “team” you have in mind. A team made up of part time contributors may need very different processes.