Easier, More Accurate Feature Timeline Estimations

Feature Estimations are a Train Wreck

Zachary Keeton
Nov 30, 2020

Estimations are usually wrong: they're time-consuming to produce and inaccurate anyway. There are plenty of estimation strategies like t-shirt sizes, Fibonacci numbers, and other weighting systems. Then comes “planning poker.” The idea is that humans sit around a table, look at a piece of work, and vote or come to some kind of consensus on the level of effort required for each item through the wisdom of the group. That sounds okay on paper, but in practice it's so time-consuming that people seldom run a full-fledged planning poker session. In reality, tech leads normally split up the work ahead of time. Then, during sprint planning, the lead just asks the developer how much they can get done in two weeks. The developer shrugs and usually takes on more than they can handle.

A Better Way

If you have a team that's worked together for several sprints, you have a history of closed issues. In a typical webdev team, you create issues, eventually assign them to a sprint, code them, close them, and forget about them. Once closed, those issues are practically banished to the darkness. However, as we'll see, they are an underutilized source of historical data that can help us create much more accurate, data-driven estimations of future work. The best predictor of the future is the past, and nearly every issue you see now will be very similar to an issue you've already closed.

All you have to do is go to your past issues, find ones similar to your current issue, and calculate a process time for each. By process time I mean how long, in whole days, it took from when coding started on an issue until it was merged into develop. This includes development, peer review, and QA time. Once you have the process times for all the similar issues, a simple calculation gives you an estimated process time for the issue at hand.
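
Here is a minimal sketch of that calculation in Python, assuming your tracker can export closed issues with the date coding started and the date the work was merged into develop. The ClosedIssue shape, the field names, and the label-overlap similarity check are assumptions for illustration, not any particular tracker's API.

```python
from dataclasses import dataclass
from datetime import date
from statistics import median


@dataclass
class ClosedIssue:
    title: str
    labels: set[str]
    coding_started: date  # assumed: date of first commit / moved to "In Progress"
    merged_at: date       # assumed: date merged into develop, after review and QA


def process_time_days(issue: ClosedIssue) -> int:
    """Whole days from the start of coding to the merge into develop."""
    return max((issue.merged_at - issue.coding_started).days, 1)


def estimate_days(current_labels: set[str], history: list[ClosedIssue]) -> int:
    """Estimate a new issue from the process times of similar past issues."""
    similar = [i for i in history if i.labels & current_labels]
    if not similar:
        similar = history  # no label overlap: fall back to the whole history
    return round(median(process_time_days(i) for i in similar))
```

Using the median rather than the mean here is just one reasonable choice; it keeps a single unusually painful past issue from skewing the estimate.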

Do this for all current issues and you'll have a data-driven estimate for each one that is both more accurate and cheaper to obtain than a manual planning poker session.
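
Continuing the sketch above, that is just a loop (or comprehension) over the open issues. The history and open-issue records below are made up purely for illustration.

```python
# Assumed sample history; in practice this comes from your issue tracker's export.
history: list[ClosedIssue] = [
    ClosedIssue("Add password reset", {"auth", "frontend"},
                date(2020, 10, 5), date(2020, 10, 9)),
    ClosedIssue("Fix cart rounding bug", {"bug", "checkout"},
                date(2020, 10, 12), date(2020, 10, 14)),
]

# Assumed shape for not-yet-estimated issues: a title plus a set of labels.
current_issues = [
    {"title": "Add OAuth login", "labels": {"auth", "frontend"}},
    {"title": "Show cart tax breakdown", "labels": {"checkout"}},
]

estimates = {
    issue["title"]: estimate_days(issue["labels"], history)
    for issue in current_issues
}
print(estimates)  # {'Add OAuth login': 4, 'Show cart tax breakdown': 2}
```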

Finally, you can data mine previous releases to either 1) estimate the completion date of a given set of issues or 2) estimate the number of issues the team can complete in a given time interval. This is done by comparing the total number of development days it took to complete the features in a past release with the number of calendar days that release took to ship. That gives a proportion you can apply to the current feature bundle or release time box, akin to a velocity metric.
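
A rough sketch of that proportion, again with assumed inputs: the per-issue process times of a past release and the number of calendar days that release took.

```python
def dev_days_per_calendar_day(past_dev_days: list[int],
                              past_release_days: int) -> float:
    """Development days the team historically completed per calendar day."""
    return sum(past_dev_days) / past_release_days


def estimate_release_length(current_estimates: list[int],
                            past_dev_days: list[int],
                            past_release_days: int) -> int:
    """Option 1: calendar days needed to finish the current set of issues."""
    ratio = dev_days_per_calendar_day(past_dev_days, past_release_days)
    return round(sum(current_estimates) / ratio)


def estimate_capacity(time_box_days: int,
                      past_dev_days: list[int],
                      past_release_days: int) -> int:
    """Option 2: development days of work that fit into a given time box."""
    ratio = dev_days_per_calendar_day(past_dev_days, past_release_days)
    return round(time_box_days * ratio)
```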

My team is going to run some experiments with this approach, and I will update this post with the findings. We are interested in estimation accuracy, estimation cost, and the potential for process automation.


Zachary Keeton

A 15th-year Web Dev/Engineering Manager. Formerly building products and leading teams at Plus One Robotics in San Antonio, Texas, USA.