I didn’t enjoy this. In no small part because it reminds me of unfinished work, piling up behind me.
I think it’s important to drop those things you are least likely to do off the back of the cart, and let them lie. This piece didn’t seem to touch on strategies or other practices for improving the work, only pointed out that backlog is bad.
I recommend reading The Phoenix Project. It seemed to cover all this ground in a more compelling package.
This piece didn’t seem to touch on strategies or other practices for improving the work, only pointed out that backlog is bad.
That wasn’t my reading at all–the size of the backlog is a different issue than work-in-progress.
It’s totally fine to have a big backlog and be chunking it up into little bits and chewing through it–this is in fact one of the keys to getting a backlog back under control if you can’t just axe things entirely.
It’s similarly terrible to have a “backlog” of zero with a thousand things in-progress.
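To make that distinction concrete, Little’s Law says average lead time is work in progress divided by throughput. Here’s a minimal sketch with made-up numbers – a big backlog with little in flight still churns through quickly, while “zero backlog, a thousand in progress” means every started item lingers:

```python
# A minimal sketch of Little's Law: lead time = WIP / throughput.
# All numbers below are invented purely for illustration.

def lead_time(wip_items: float, throughput_per_week: float) -> float:
    """Average time an item spends in progress, per Little's Law."""
    return wip_items / throughput_per_week

# Big backlog, small WIP: 500 items queued but only 5 in progress,
# finishing 5 per week -> a started item takes about a week to finish.
print(lead_time(wip_items=5, throughput_per_week=5))     # 1.0

# "Backlog of zero" with 1000 things in progress at the same throughput
# -> every item lingers roughly 200 weeks before it is done.
print(lead_time(wip_items=1000, throughput_per_week=5))  # 200.0
```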
I really enjoyed the previous article in… what I’m guessing is a series, the one about long-term plans. This one is interesting but I think it’s missing a clear statement of the assumptions behind the model that it uses.
Specifically, some of the ones I could infer, at least, are:
The tasks being performed in parallel are, if not identical, at least close enough to be considered similar. (The article is effectively built on an analogy with a burger shop; if the burger shop also makes kebab platters, salads, cotton candy, and milkshakes, the math doesn’t work out as well, unless you structure it as running independent burger shops, kebab shops, salad shops etc. – but this has a further problem:)
The tasks being performed in parallel are essentially independent and the way they are to be performed is known in detail, so the choice of whether to do them in parallel or not is solely one of scheduling, without any technical or human constraints (i.e. you never have to figure out burger - salad integration while working on the burger or the salad).
Tasks are “conclusively” finished, with negligible integration effort (i.e. you make a burger and you sell it – say, there’s rarely a need to take back a half-eaten burger and adjust some of its ingredients so as to fit it onto a seafood platter).
I think these are… pretty bold assumptions to make about software development in general. Maybe there are some specific aspects of software development where this is true – well-scripted procedures with little development effort involved (deployment, various migrations), or even some development activities (some types of refactoring). But I’d be wary of taking this analogy too far.
The old-fashioned software developer in me is also compelled to point out that, in a professional setting, “finishing what you start” is productivity. If you have ten tasks that you started and none of them are finished, your team’s effective productivity is zero – even adjusting for the “move fast and break things” mentality, you can’t usually sell unfinished tasks. The only kind of unfinished software you can sell is the one that has some finished features, but not all of the announced features. You can start ten hobby projects at once and never finish them and it’s still a win, but when you have users, unfinished code – just like uncooked burgers – doesn’t count towards productivity.
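A minimal sketch of that arithmetic, with invented numbers – ten started tasks, a fixed effort budget, and only completed tasks counting:

```python
# All numbers invented for illustration; only finished tasks count
# toward what you can actually ship.
TASKS = 10        # tasks started
EFFORT_EACH = 10  # units of effort a task needs to be finished
BUDGET = 90       # total units of effort available

# Finish-what-you-start: complete one task before touching the next.
serial_done = BUDGET // EFFORT_EACH                     # 9 tasks shipped

# Everything-in-flight: spread the same effort evenly across all ten.
progress = [BUDGET // TASKS] * TASKS                    # 9 units each, none complete
parallel_done = sum(p >= EFFORT_EACH for p in progress)

print(serial_done, parallel_done)                       # 9 0
```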
Yeah, I mean just on the burger analogy, there is time when the meat is just sitting on the grill and you’re waiting for it to cook through. Any normal chef will try to arrange things so that the cooking time can happen in parallel with prep for the other ingredients. So the first thing to do is to get a patty and put it on the grill, then you find the bun and start toasting that as well, etc. If you watch the chef at a diner like a Waffle House, there are pretty big gains to be made from concurrency, although obviously you reach a point where you have to limit work in progress. Even there you can do things like have the tickets arranged physically so that the waiters can add to the queue independently from the chef pulling out of the queue. It can actually be a very deep analogy if you think about how that would apply to software management or program execution.
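As a rough sketch of how the ticket rail maps onto program execution (the waiter/chef names are just the analogy carried over, not anything from the post): a bounded queue lets producers add work independently of the consumer while still capping work in progress.

```python
# A bounded producer/consumer queue as the ticket rail; the waiter and
# chef names are only the analogy carried over, not anything real.
import queue
import threading

ticket_rail = queue.Queue(maxsize=5)       # the WIP limit: at most 5 tickets hanging

def chef() -> None:
    while True:
        ticket = ticket_rail.get()
        if ticket is None:                 # sentinel: service is over
            break
        print("cooking", ticket)           # grill time here could overlap with
                                           # prepping the next ticket's ingredients

def waiter(table: int) -> None:
    # Blocks if the rail is full -- back-pressure instead of unbounded WIP.
    ticket_rail.put(f"burger for table {table}")

chef_thread = threading.Thread(target=chef)
chef_thread.start()
for table in range(1, 9):                  # waiters add to the queue independently
    waiter(table)
ticket_rail.put(None)                      # hang up the "closed" sign
chef_thread.join()
```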
The most subtle reason teams switch tasks, in my experience, is a sub-optimal definition of “task”. Back in the bad ol’ days of manual testing and certification you could have shown this blog post to anyone and people would have nodded furiously along at everything in it. “Of course we understand basic queuing theory, we’re engineers. We minimize context switches and batch size.” But they were still spending weeks in their release process because:
The programmer’s definition of “task” was “get it into trunk”, and
The release engineer’s definition of “task” was “get trunk into production”.
The big shift in mindset that happened in the 2000s was to realize it’s all the same task, to stitch together these workflows and vertically integrate them.
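A hedged sketch of what that vertical integration looks like – every stage name below is a hypothetical stand-in, not any real tool’s API:

```python
# Hypothetical stages only; the point is that the programmer's old finish
# line and the release engineer's old finish line become stages of one
# pipeline, and the task only counts as done at the end.

def run_tests(change):
    print(f"testing {change}")

def merge_to_trunk(change):
    print(f"merging {change} into trunk")       # the programmer's old finish line

def deploy_to_production(change):
    print(f"deploying {change}")                # the release engineer's old finish line

def verify_in_production(change):
    print(f"verifying {change} in production")  # only here is the task finished

PIPELINE = [run_tests, merge_to_trunk, deploy_to_production, verify_in_production]

def ship(change):
    for stage in PIPELINE:
        stage(change)   # one task, end to end; a failure anywhere leaves it unfinished

ship("fix-login-timeout")
```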
My lesson from this story is to constantly ask myself, “where else is division of labor causing muda?”