We are evaluating Jira Agile now. In the documentation on estimation you elaborate a lot on not reestimating issues. Because this runs a bit against common sense (the team is asked to ignore information it has, and new issues will be estimated with that information in mind anyway) and against official Scrum practice, I wonder if you have any data confirming it? Even in a simple statistical model of estimation, not reestimating is in theory counterproductive (although in practice it should not matter much whether one reestimates or not).
When people estimate a backlog, they should be using triangulation: comparing one story with another to determine whether it is smaller, bigger, twice as big, and so on. They should use relative estimates.
Let's say, for example, that they have estimated many 3-point user stories in the backlog. The first one takes 30 hrs to complete, the second 54 hrs, and the third 17 hrs. There is a big variance in the number of hours used to complete a 3-point story. The important thing is that when you estimated all these issues, you assumed them to be roughly the same size. Even if you gain new information, it doesn't matter as long as the issues are still deemed the same size.
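To make that concrete, here is a minimal sketch (using the hour figures from the example above) showing that although individual 3-point stories vary a lot in actual hours, planning only relies on the average cost of a story in that bucket:

```python
# Actual hours for three stories that were all estimated at 3 story points
# (numbers taken from the example above).
hours = [30, 54, 17]

# Per-story effort varies wildly...
spread = max(hours) - min(hours)            # 37 hours between best and worst case

# ...but velocity-based planning only depends on the average cost
# of a "3-pointer", which is what the bucket represents.
mean_hours = sum(hours) / len(hours)

print(f"mean hours per 3-point story: {mean_hours:.1f}")  # 33.7
print(f"spread between stories: {spread} hours")          # 37
```

The design point is that the bucket's average, not any single story's actuals, is what feeds into velocity.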
Maybe this article can also help you understand some of the problems that can arise with estimation.
Thanks for the reply. I know the triangulation technique, but it does not answer the question. In your example, if we put a story in the 3-point bucket and, after gaining new information, it still fits in that bucket, fine: there is no question of reestimating (reestimating would give the same result). The problem is when we now see that we mistakenly assigned the story to the 3-point bucket and it should be in the 8-point bucket. I cannot see any rational reason not to reestimate it. If we get a new, similar story, should we put it in the 3-point bucket or the 8-point bucket? Or take a story of ~6 points: it is bigger than the stories in the 5-point bucket but smaller than the mislabeled story sitting in the 3-point bucket, so should we put it in the 5-, 3-, or 2-point bucket?
If you build a simple statistical model of estimation (as in the quoted article), the only result of this practice (besides making triangulation more difficult) is increased variance of velocity (although with 10+ stories per sprint and 10-20% of stories needing reestimation, the effect should be small enough to neglect).
OK, I see two reasons why this practice could be useful (though they are not mentioned in the documentation):
1. It prevents skewed re-estimation (for example, due to pressure from management).
2. Since it does not matter as long as you prevent such stories from being used in triangulation, not reestimating saves time.
I like how Atlassian explains it here: https://confluence.atlassian.com/display/AGILE/Estimating+an+Issue
But what about when teams realise they've gotten it wrong?
Consider the following scenario:
- Issue X has an Original Estimate of 5 days.
- The estimate was too optimistic: before the next sprint is planned, the team realises the issue is actually 15 days of work.
Some people would argue that using the Original Estimate will endanger the sprint's success, because the team will take what they think is 5 days of work into the next sprint when it's actually 15 days of work.
However, the inaccurate estimate of 5 days is unlikely to be an isolated occurrence; in fact, estimates are always going to be wrong (some very little, some wildly so). Often this will be discovered after the sprint has started rather than before. As long as the team estimates the same way across the whole backlog, this will work itself out over time. For example, if they always underestimate, they may find that in a 10-day sprint with 4 team members they can only really commit to 20 days of their estimation unit. If they have established a stable velocity then this has no effect, because from a planning perspective we can still reliably estimate how much work we'll get done in upcoming sprints.
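The arithmetic in that example can be sketched quickly. Assuming (as the quote does, roughly) a consistent 2x underestimation across the backlog:

```python
# Hedged sketch of the "stable velocity despite underestimation" point.
# A 10-day sprint with 4 team members gives 40 person-days of real capacity.
capacity_person_days = 10 * 4

# Assumption for illustration: every story really takes ~2x its estimate.
underestimation_factor = 2

# Estimated days the team can reliably commit to per sprint:
committable = capacity_person_days / underestimation_factor
print(committable)  # 20.0 "estimation-unit days" per sprint
```

As long as the bias is consistent, the committed figure (20 units here) stays stable from sprint to sprint, which is all planning needs.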
Take a simplified model: suppose the sprint capacity is 15 person-days.
- Sprint 1: the team takes three 5-day stories, all estimated roughly correctly, and completes them; velocity is 15.
- Sprint 2: the team takes three stories estimated at 5 days each, but one is really a 15-day story. Only the underestimated story gets finished, so velocity is 5.
- Sprint 3: three correctly estimated 5-day stories again; velocity is 15.
- Sprint 4: another 15-day story estimated at 5 days, but this time we already know it is underestimated. If we do not reestimate, velocity is 5 again; if we reestimate it to 15, the team takes just that story and velocity is 15. And so on.

So without reestimating, the velocities would be 15, 5, 15, 5, ..., 15, 5, 15, whereas with reestimating they would be 15, 5, 15, 15, 15, ....

So in the first case, all we get is greater variation of single-sprint velocity (although the mean over the last few sprints will be stable in both cases).
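As a sanity check on those two sequences, a minimal sketch (the numbers are the hypothetical velocities from the scenario above, extended to eight sprints):

```python
from statistics import mean, pstdev

# Hypothetical velocity sequences from the simplified model above;
# every second sprint contains the mis-sized 15-day story.
no_reestimate = [15, 5, 15, 5, 15, 5, 15, 5]

# With reestimation the mistake is only "paid for" once (sprint 2);
# afterwards the 15-day story enters planning at its true size.
reestimate = [15, 5, 15, 15, 15, 15, 15, 15]

print("no reestimate:", mean(no_reestimate), pstdev(no_reestimate))  # mean 10, stdev 5.0
print("reestimate:   ", mean(reestimate), pstdev(reestimate))        # mean 13.75, stdev ~3.31
```

Each sequence has a stable long-run average, but the non-reestimating one swings more from sprint to sprint; the two averages differ only because reestimated stories carry their true point size.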
Or take an extreme case where, at the beginning, the team has almost no knowledge and estimates each story with the same value, say 5 story points. Should the team reestimate the stories as they get more information?
Regarding the quotation:
"As long as the team estimates the same way across the whole backlog, this will work itself out over time." Sure, but the reasoning above is about the situation when that is not the case.