Really well written and jibes with my experience. I think where it falls down in reality is at the resourcing level, i.e. which people do you need for a project and how long do you need them. Few managers or contractors are willing to say, “Well, it depends on how valid the project turns out to be, so give me as many resources as I want for an indefinite period of time,” because it undercuts the argument for getting any resources in the first place.
You are lucky if you work in an org or team where you have earned enough trust to operate in a genuinely Agile way, but most teams fill out big Gantt charts in great detail without saying what they already know: that such charts are at best half true and will be a drag on quality in proportion to how wrong they turn out to be.
Also, a small correction:
Instead, I’m telling you to either scrape your long-term plan altogether or avoid adding detail to longer-term goals.

Should be “scrap” instead of “scrape.”
Thank you so much for your input and the kind words, Adam.
I think you’re absolutely right in your comment about how hard it is to earn the trust necessary to operate in an Agile fashion. That said, I must say that most companies that spend a significant amount of money with these types of agencies tend to listen to them a bit more than they usually would listen to internal resources.
I also just updated the post to fix the mistake with the word “scrape”.

I really appreciate your thoughtful comment.
There are two sub-genres of long-term plans: the “imaginative” and the “naive” ones. The “imaginative” sub-genre assumes nothing unplanned will happen. The “naive” one presumes that you can plan for unexpected events by adding a long enough buffer to the plan (fillers).
I know of a third genre for which I don’t have a good name, sadly, but which I think you’ll find entertaining.
At one point I worked on a pretty large codebase (think big operating system & userland suite for industrial machines, sort of). This thing had a module that was very much central to the entire edifice – a module which enumerated hardware and peripherals, signalled capabilities, aggregated sensor input, stuff like that. When a new product was added to our portfolio, this module had to be taught about some of the new gadgets on it.
This module had been written by a single person over several years. It was, by far, the worst piece of code I have ever read. Think macros called DO_SOME_THING(x, y) which consisted of a call to a function called do_some_thing(y, x) (yep, arguments reversed) which called two other functions called _some_thing_do(x) and _do_some_thing(y). Several of my colleagues and I strongly suspected that it had been written that way specifically in order to make the person who wrote it irreplaceable, because it was an absolute jungle, and nobody wanted to get anywhere near that thing.
The caveat was that it took something like three months of work to add support for a new piece of hardware because, unsurprisingly, this module was littered with every kind of bug you can imagine, and every time you tried to teach it new tricks, you ran into some of the bug nests. At first, that wasn’t a big problem, as there was a lot of stuff involved in getting new hardware going, so adding support for the Next Big Thing in our portfolio took about a year anyway. But eventually a bunch of things about how that was done changed, and it suddenly became pretty much impossible to go to management with a Gantt chart that had a three-month task called “Airdrop someone into the Amazon jungle and hope they come out with a working program”. (It wasn’t called exactly that, but it might as well have been.)
So people came up with a simple solution: that task was not on any plan.
A small subset of it was left in so as not to look like there were any “unallocated resources”, but it was only a small, one-week task. Every plan was eventually amended and prolonged by two months and three weeks, plus or minus the usual buffer. Everyone knew this was going to happen, to the point where we could reliably plan our summer vacations around it.

Brilliant! I should start using this technique.
For my job it helps that most of our management and PMs are ex-engineers, so when you have a big chunk in your Gantt chart for “unknown unknowns” they just nod and say “yep, looks about right”.
Every system that works long-term has some kind of closed-loop control with feedback. Imagine driving your car without being able to see the road, or trying to get your house to a good temperature using a furnace with no thermostat. This is more or less the goal of agile with story points: the story points are arbitrary units, but after some time you should be able to calibrate how many story points you can realistically accomplish. Of course that’s much easier said than done.
The agency part at the end matches my experience. From both sides it makes more sense to pay per hour and keep in sync. Some of my non-technical clients have previously been burned by doing it the other way. It’s actually a lot easier to explain these things than I thought it would be. If it has to be a fixed budget and timeline, the only sensible thing one can do is to over-budget by a lot. For some situations that’s required, so it’s completely fine to do so.
It’s not too different from having spare failover database servers that you technically don’t need unless something goes wrong. Of course they cost money, so one needs to budget for them.
Whether these things are understood can be a good sign for how well a project is managed or how experienced programmers (and everyone involved) really are.
I also want to add another exception, though: repeated work. There are scenarios where one develops very similar things over and over, so what can go wrong is well known and limited. The backup budget and time can then be very slim, but this is very context-dependent, of course.