1. 3

    My favourite way to assess/compute priority, which is far simpler than what is proposed here, is to let anybody pledge any amount of money they want, with an expiry date, to whoever fixes a bug. When somebody claims the bug, the money is collected from all pledgers and kept in escrow until either the expiry date arrives, at which point the money is returned to the pledger (though it may be repledged with an extended expiry date at the pledger’s behest), or the bug is fixed and verified, at which point all pledges are released to the person who fixed it.

    This provides a very simple way to sort open bugs by total bounty, or to render a graph showing when the pledged bounty for a given bug will expire. It also means that anybody has the power to increase the likelihood of a bug being fixed by increasing their pledge amount. Kinda like a reverse Kickstarter!
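
    Roughly, the lifecycle looks like this (a minimal sketch in Python; the `Pledge`/`Bug` names and the settlement rules are just my rendering of the scheme above, with the real payment plumbing left out):

    ```python
    from dataclasses import dataclass, field
    from datetime import date

    @dataclass
    class Pledge:
        pledger: str
        amount: int  # cents, to avoid float money
        expires: date

    @dataclass
    class Bug:
        title: str
        pledges: list[Pledge] = field(default_factory=list)

        def total_bounty(self, today: date) -> int:
            # Only unexpired pledges count toward the advertised bounty.
            return sum(p.amount for p in self.pledges if p.expires >= today)

        def settle(self, today: date, fixed_and_verified: bool) -> dict[str, int]:
            """Payouts from escrow: everything goes to the fixer once the fix
            is verified; otherwise expired pledges are returned to their
            pledgers (who may repledge with an extended expiry date)."""
            payouts: dict[str, int] = {}
            if fixed_and_verified:
                payouts["fixer"] = sum(p.amount for p in self.pledges)
                self.pledges.clear()
            else:
                for p in [p for p in self.pledges if p.expires < today]:
                    payouts[p.pledger] = payouts.get(p.pledger, 0) + p.amount
                    self.pledges.remove(p)
            return payouts
    ```

    Sorting open bugs by total bounty is then just `sorted(bugs, key=lambda b: b.total_bounty(date.today()), reverse=True)`.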

    1. 1

      This sounds cool, I just wonder if it works in the closed-source environments many companies work in. Also, are there mechanisms to prevent those with the most money from calling all the shots? I guess it kind of mimics real life :P, but some kind of pledge threshold seems like it would make this a bit more fair if an unscrupulous actor were introduced.

      1. 1

        No reason it couldn’t. Pledges would just have to be made by product managers. Not sure how one could be unscrupulous if real money is involved…

      2. 1

        That sounds pretty cool. What projects is this implemented on?

        1. 3

          Many projects use https://www.bountysource.com, which seems to essentially be what was explained above.

          1. 1

            I actually didn’t know about BountySource but it is in fact pretty much what I described :-)

      3. 1

        Back in the 1970s, it was realized that the most important bugs are those the user will see. The perception of a product’s quality is more important than its actual quality. Next come errors that can lead to serious availability or integrity issues. So that dictates priority for me: I’d fix what the user sees that’s severe, then what’s uncommon but severe, then what’s visible and aggravating, then what’s uncommon and aggravating. This, combined with a low-defect development process, will result in software that users perceive as greater than it actually is. Brings in more money/use.
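
        As a sketch of that ordering (field names invented for illustration, not taken from the 1970s literature), severity dominates and user visibility breaks ties:

        ```python
        # Hypothetical triage key: severe beats aggravating, and within
        # each severity, user-visible beats uncommon/invisible.
        SEVERITY_RANK = {"severe": 0, "aggravating": 1}

        def triage_key(bug: dict) -> tuple[int, int]:
            return (SEVERITY_RANK[bug["severity"]],
                    0 if bug["user_visible"] else 1)

        bugs = [
            {"id": 1, "severity": "aggravating", "user_visible": True},
            {"id": 2, "severity": "severe", "user_visible": False},
            {"id": 3, "severity": "severe", "user_visible": True},
        ]
        print([b["id"] for b in sorted(bugs, key=triage_key)])  # [3, 2, 1]
        ```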

        A form of that, which inspired my work, was Mills' Cleanroom methodology from the 1980s. Key things worth imitating:

        1. Precise way to specify requirements or high-level design.

        2. Ability to refine or compose in a simple, easy-to-analyze way.

        3. Simple constructs for implementation that can be verified by eye. These days often automated, too.

        4. Usage-centric testing that eliminated user-visible defects by generating test cases from models of how people used the software (see the sketch after this list). Better keep those models updated as use cases increase.
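
        For point 4, the usual realization is a Markov model of user behaviour; here is a toy sketch (the model and its states are invented for illustration, not taken from Cleanroom itself):

        ```python
        import random

        # Invented usage model: states are user actions, edges carry the
        # observed probability that a user takes that step next.
        USAGE_MODEL = {
            "start": [("search", 0.7), ("browse", 0.3)],
            "search": [("open_item", 0.8), ("search", 0.2)],
            "browse": [("open_item", 0.5), ("start", 0.5)],
            "open_item": [("checkout", 0.4), ("start", 0.6)],
            "checkout": [],  # terminal state
        }

        def generate_test_case(rng: random.Random, max_steps: int = 20) -> list[str]:
            """Random walk over the model, so the paths users actually
            take most often are also the paths tested most often."""
            state, path = "start", ["start"]
            while USAGE_MODEL[state] and len(path) < max_steps:
                steps, weights = zip(*USAGE_MODEL[state])
                state = rng.choices(steps, weights=weights)[0]
                path.append(state)
            return path

        rng = random.Random(42)
        for _ in range(3):
            print(" -> ".join(generate_test_case(rng)))
        ```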

        This methodology managed to deliver low-defect code at about the same cost and time as conventional approaches, largely because fewer bugs were introduced and so fewer had to be removed. A modern version, with languages designed for productive development of high-quality software, would improve on it further. Design-by-Contract, if key invariants are present, can further help you prioritize by looking at which defects break which invariants, with some invariants more important than others. Static and dynamic analyses also give lists of bugs whose type and module location can help determine priority.
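
        A toy illustration of that idea (the `require` helper and the severity scale are my own invention, not a standard Design-by-Contract library):

        ```python
        # Contract checks tagged with the importance of the invariant
        # they guard, so a violation report doubles as a triage signal.
        class ContractViolation(Exception):
            def __init__(self, invariant: str, severity: int):
                super().__init__(f"broken invariant: {invariant} (severity {severity})")
                self.invariant = invariant
                self.severity = severity  # 1 = most important

        def require(condition: bool, invariant: str, severity: int) -> None:
            if not condition:
                raise ContractViolation(invariant, severity)

        def withdraw(balance: int, amount: int) -> int:
            require(amount > 0, "withdrawal amount is positive", severity=2)
            require(amount <= balance, "balance never goes negative", severity=1)
            return balance - amount
        ```

        Violations collected from testing or production then arrive pre-sorted by how important the broken invariant is.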

        Point being, these kinds of development methods introduce few bugs and help you find the rest easily. Makes dealing with bug priorities easier since it takes up so little time. ;)

        1. 1

          An apocryphal story from a friend who used to work at a decent-sized game studio kinda underlines the problem with priorities.

          They had a custom variant of Bugzilla, known internally as Hydra, presumably because of the many different facets and features it had accumulated over the years. Naturally, this system had a priority queue for bugs.

          Now, the problem with such queues is that they start at 1 (most important) and add levels for decreasingly important things (up to like 5 or 9 or something). This seems innocent, but very quickly you realize that the only way to get your issue prioritized is to give it a very low number. This would be fine in and of itself, but soon somebody files a priority 0 to jump the queue, and then what happens when there are so many fires that even the 0-level queue is completely full?

          Obviously, you need to go negative. By the time the project finished, they were supporting priorities down to -4.

          ~

          In my own experience, using post-its with a client and a board helped. The physical act of shuffling the notes around seemed to impart to them the idea that we only have so much manpower available, and that if you want to add features, something else has to give.

          I will also note that I have never once seen any of the documentation or tooling work that can act as a force multiplier put anywhere other than lowest priority, unless I made it my personal mission to push it higher. We can all get lost shaving yaks, but more often than not something that helps everyone else do their jobs better is a good investment; unfortunately, it’s never an urgent investment.

          Watching the inevitable failures and fuckups due to ignoring such things is cathartic when you can point back to having mentioned them before. It won’t help anybody but yourself, but it has helped me suffer through bad management.