Thanks to Steve Wells, who attended the NoEstimates session at the 2018 Agile Cambridge conference, we now have an online version of the game, available to all. For more information, please see the Facilitator Guide.
To promote the exchange of ideas and advance the state of the practice, I have created a LinkedIn group dedicated to NoEstimates. It’s an open group, so feel free to join.
I’ll be partnering with O’Reilly Online to present the NoEstimates workshop online. Tentative dates are:
- Thursday, November 14, 2019, 11:00am – 1:00pm CT
- Tuesday, January 7, 2020, 11:00am – 1:00pm CT
Check back here for more info as it becomes available.
One of the core underpinnings of the practice of estimating effort is the assumption that effort strongly correlates with time. But does it? That’s a question that I always ask teams who are grappling with the concept of NoEstimates.
Mattias Skarin, in his helpful book Real-World Kanban, shared his own research about the correlation of upfront estimates and delivery times. Here’s what he writes:
The interesting question here is how the actual delivery times correlate to developers’ upfront estimates. To help answer this, we had developers estimate sizes using the following buckets: small (two to three days), medium (one to three weeks), and large (longer than one month). We then correlated the initial sizing estimates with lead-time output. Take a look. Is the initial sizing a good predictor of when you can expect to get your stuff? In our case, the surprising truth was a resounding “no!”
He includes this supporting chart:
I’ve since replicated his research with my own teams and found the same result: weak correlation between upfront estimates and delivery times. One “high-performing” team even generated a negative correlation between their feature estimates and actual delivery times. Think about that for a second: The smaller they estimated their features to be, the longer it took them to deliver, and vice versa! And again, this wasn’t a bad team; quite the contrary.
A note about definitions: I use Delivery Time to refer to the time from commitment date to delivery date (however that is defined). A couple of important considerations:
- The span you estimate must match the span you measure. For instance, if you make your upfront estimates based on the time from when a developer starts a story to when it is available in a QA environment, then that same span needs to be understood as the delivery time you’re correlating against.
- However, since no one requesting software really cares how long it takes to go from Dev to QA, the best definition of delivery time is from commitment to production. Most business people want to know when something will be “done,” and to them “done” means it’s in production.
With those considerations in mind, I invite you to do your own correlation analysis. It doesn’t require you to change anything about your process but to simply observe and record what you’re already doing: The commitment and delivery dates, along with the upfront estimates. Feel free to add to my public data set. Then you can see for yourself whether the assumption of correlation holds true for your team. If not, you may want to consider a different approach!
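If you want to run the numbers yourself, the analysis takes only a few lines. Here is a minimal sketch in Python, assuming you encode the estimate buckets as ordinal ranks (1 = small, 2 = medium, 3 = large) and record delivery time in calendar days from commitment to production; the sample data below is entirely hypothetical, not from my data set:

```python
def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = sum((x - mean_x) ** 2 for x in xs) ** 0.5
    sd_y = sum((y - mean_y) ** 2 for y in ys) ** 0.5
    return cov / (sd_x * sd_y)

# Upfront estimates as bucket ranks: 1 = small, 2 = medium, 3 = large.
# (Hypothetical illustration data -- substitute your team's records.)
estimates = [1, 1, 2, 2, 2, 3, 3, 1, 2, 3]

# Delivery times in calendar days, commitment to production (hypothetical).
delivery_days = [12, 3, 20, 5, 30, 14, 45, 25, 8, 10]

r = pearson(estimates, delivery_days)
print(f"correlation between estimates and delivery times: {r:.2f}")
```

A value near +1 would mean the estimates predict delivery time well; a value near 0 (or negative, as with the team mentioned above) means they don’t. Because the buckets are ordinal rather than truly numeric, a rank-based measure such as Spearman’s correlation is arguably a better fit, but with only three buckets the conclusion rarely changes.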