1. 10
  1.  

  2. 3
    • Probability estimates are a good idea. More on the topic in general: Use Normal Predictions (a small sketch follows this list).
    • Date estimates assume a certain process. Sequential execution, to be exact. It’s a very rare process to find in practice. Change of priorities? All dates are off. Emergency? All dates are off. With any externality the estimate has to be redone and the dates shift. This makes the predictions appear more volatile than they actually are. A week’s worth of effort is a week’s worth of effort even if you take a sick day in the middle. The delivery date still shifts, but now it’s a communication issue and not an estimation issue.
    • “Business cares about dates” is a process flaw. Dates are implicit dependencies. For example, sales is prepared to launch a campaign advertising a new feature on date X. Date X is the estimated delivery date for the feature. It is backwards to rush the feature because of the ad campaign. Business should make the dependencies explicit and care about mitigating bottlenecks instead. That is, of course, if it cares about the quality of its product more than about meeting arbitrary deadlines.
    • Value estimation. Now we’re stepping into prioritisation territory. This is usually the business’ responsibility, not a development concern. These kinds of decisions are usually made by completely different people in larger organisations. Even in a small startup there are people whose job it is to make these kinds of decisions. The org has to be very small, or one’s role has to sit precisely at the intersection of business and engineering, for this to be the thing one does. Like, you’re a CTO, an engineering manager, or one of the two engineers (and 25% of the staff). Otherwise concerns of business value/priority and task estimation rapidly separate.
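
    To make the first point slightly more concrete, here is a rough sketch of the normal-predictions idea as I understand it. The numbers are invented, and Python’s NormalDist is just a convenient way to write it down:

      from statistics import NormalDist

      p50, p90 = 5.0, 10.0                  # invented "even odds" and "pretty sure" estimates, in days
      sigma = (p90 - p50) / 1.2816          # the 90th percentile of a normal sits at mu + 1.2816 * sigma
      estimate = NormalDist(mu=p50, sigma=sigma)

      print(f"chance of fitting in 8 days: {estimate.cdf(8):.0%}")
      print(f"days to promise for 99% confidence: {estimate.inv_cdf(0.99):.1f}")
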
    1. 2

      Date estimates assume a certain process.

      Not if you bake the probability of change of process (shifting priorities, sick days, etc.) into the estimation, which is something a skilled forecaster does, in my opinion.

      Delivery date still shifts but now it’s a communication issue and not an estimation issue.

      Sure, delivery dates shift. Wouldn’t it be nice to know the probability of this up front, rather than wait to be surprised when it happens? Wouldn’t it be nice to control the delivery date to hit a particular risk level of it shifting?

      Caring about dates is not equivalent to “rushing the feature because of an ad campaign”. It is simply recognising the economic cost of missing deadlines and accounting for it up front instead of when it’s already happened.
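
      As a concrete illustration (every number below is invented): given any forecast of how long the work might take, each candidate commit date has a readable miss risk, and that risk has a price you can account for up front.

        # All numbers are made up for illustration.
        forecast_days = [6, 7, 7, 8, 9, 10, 12, 15, 18, 25]   # samples of possible duration, in days
        cost_of_missing = 50_000                               # assumed cost of slipping a public commitment

        def miss_risk(samples, committed_days):
            return sum(1 for d in samples if d > committed_days) / len(samples)

        for committed in (8, 12, 18, 25):
            risk = miss_risk(forecast_days, committed)
            print(f"commit to {committed:>2} days: {risk:.0%} slip risk, "
                  f"expected miss cost ~ {risk * cost_of_missing:,.0f}")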

      These kinds of decisions are usually made by completely different people in larger organisations.

      Regrettably! I think good developers have a lot to bring to the table in these discussions. And deserve to, at the very least, receive information about the value estimations of the tasks they’re asked to do, and the ones they aren’t.

      One of the strengths of quantified estimation is that it forces a person to zoom out and take a whole-org view on a topic – a good exercise for everyone!

      1. 2

        Not if you bake the probability of change of process (shifting priorities, sick days, etc.) into the estimation

        Well, that’s the thing. An effort estimate is relatively simple. You recall a bunch of similar tasks you did, recall how much time they took, and from that you can get the 50%, or the 90%, or whatever. It’s so simple that most people can do it entirely in their heads.
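
        (Written down, it is only a handful of lines; the numbers here are invented:)

          ordered = sorted([3, 4, 4, 5, 6, 8, 9, 14])     # how long comparable past tasks took, in days
          p50 = ordered[len(ordered) // 2]                # rough median: "even odds"
          p90 = ordered[int(0.9 * (len(ordered) - 1))]    # rough 90th percentile: "pretty sure"
          print(f"50% estimate: ~{p50} days, 90% estimate: ~{p90} days")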

        Baking uncertainties into the estimate makes it much more complex. You still start with an effort estimate. That’s your no-interruption baseline. But then it becomes very complicated very fast. What is the probability of a delay? One sick day is a constant probability (in the limit; in reality it probably varies considerably throughout the year). It just spreads out your estimate distribution. But what is the probability of a second sick day after the first? A third? What is the probability of an emergency? What is the probability of a change of priority? To bake all these parameters in you have to combine many different distributions, which is hard to do correctly even with tools, and even in a not-quite-rigorous way. I don’t have hard numbers to back this up, but I feel like in the end you’d get confidence intervals so wide that they wouldn’t be useful.
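
        To make that concrete: even a crude simulation that treats every interruption as an independent, constant probability (all numbers below are invented) already needs something like the sketch that follows, and it leaves out exactly the hard parts: a second sick day after the first, clustered emergencies, and so on.

          # Crude Monte Carlo: layer assumed interruptions on top of an effort baseline.
          import random
          from statistics import median, quantiles

          def one_run(effort_days=5):
              elapsed, done = 0, 0
              while done < effort_days:
                  elapsed += 1
                  if random.random() < 0.05:    # assumed 5% chance of a sick day
                      continue
                  if random.random() < 0.03:    # assumed 3% chance of an emergency eating the day
                      continue
                  if random.random() < 0.02:    # assumed 2% chance of a priority change costing 3 days
                      elapsed += 3
                      continue
                  done += 1
              return elapsed

          runs = [one_run() for _ in range(10_000)]
          print("p50:", median(runs), "days   p90:", quantiles(runs, n=10)[-1], "days")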

        I’m having a hard time believing even the best estimators do it this way. What’s probably happening is a combination of the following things: good calibration on effort estimation, an added buffer that happens to match the probability of a delay, and scope cutting.

        • Good calibration on effort estimation comes from personal experience. The more things like this one they did in the past, the better they will be at estimating effort.
        • Added buffer. This is specific to the organisation, so the longer they work at a place the better they’d be at it. But it doesn’t transfer to the next org. Like, at the last place Thursday was a day of meetings and could effectively be written off, but the current place doesn’t have that, so it doesn’t go into the buffer.
        • Scope cutting. Basically, work takes all the time allocated for it. If there isn’t enough time, some corners can probably be cut. This does not reflect the accuracy of the estimates, but if estimate accuracy is a tracked metric, it will be gamed.

        Either way, uncertainty is shifted onto the developers (or marketers, or sales, whatever). They have to meet arbitrary dates instead of the org acknowledging uncertainties and dependencies and working with them.