#195: Why Forecasting Works (and Estimating Fails)

The Humanizing Work Show

Aug 25 2025 | 00:13:29


Hosted By

Richard Lawrence, Peter Green, Angie Ham

Show Notes

Estimating feels responsible and concrete, but decades of research and real-world examples show it’s systematically wrong. In this episode, we share Kahneman’s work on the planning fallacy, Flyvbjerg’s analysis of megaprojects, and our own stories of estimation gone sideways. Then we show how reference class forecasting—using past outcomes instead of guesses—creates better plans, restores trust, and helps leaders place smarter bets. Forecasting is also a central move in our Complexity Aware Planning, Estimation, and Delivery (CAPED) framework, giving organizations a reliable way to plan while managing complexity.

Show notes, links, and transcript: https://www.humanizingwork.com/estimating-bad-forecasting-good-episode/
Email us with your thoughts: [email protected]
Connect on LinkedIn: https://www.linkedin.com/company/humanizingwork


Episode Transcript


 Peter Green: What if you could make reliable forecasts without endless breakdown and estimation sessions, or the false confidence they create? Forecasting is important and necessary for a lot of reasons, but in today's episode we'll show the research on why traditional estimation approaches are the wrong way to create an accurate forecast. 
 Or to put it in the most simple terms, forecasting, good. Estimation, bad. 
 Richard Lawrence: Several years ago, we hired a company to finish our basement. My family are all musicians, and our three sons were living at home practicing music during the day as part of homeschool, including drums and bass, with my office right above the basement. So we wanted to build a sound-isolated music room into the basement. 
 So we did all the research to find a basement contractor with good results in our area. They actually marketed based on coming in on time. Their sign that they put in your yard said five weeks to complete. Now, I asked them if they could do this kind of sound isolated room in a basement, and the salesperson said, we've never done it before, but let me go off and give you an estimate. 
 So he went back to their crew and they did a breakdown of everything it would involve, like the double drywall, the extra fasteners to attach it to the frame but float it from it so sound wouldn't go through, and the extra work to isolate the baseboards and doors. They came back with a spreadsheet of all the extra work they were going to have to do for this room in the basement, and said it wouldn't be too bad and they could still fit it into their schedule. It's gonna be great. 
 We were excited. We agreed. 11 weeks later, I was putting post-it notes on their sign in my yard updating it to fit when it would be complete. 
 Peter: That's the snarkiest awesome response ever. 
 Richard: I know. I joked about it for a couple weeks, and my wife dissuaded me from doing it. But finally I was so frustrated that it was like, no, you're not gonna market based on being on time with my project. Finally, week 17, they took the sign away. It didn't even work as well as it was supposed to, 'cause they didn't really understand what they were doing. It was complex and they'd never done it before. 
 They didn't make any money on it because they had to give us credits for some of the overruns. It was a disaster of a project despite a really good looking breakdown and estimate at the beginning. 
 Peter: I don't think, Richard, that your basement is unique among projects. In fact, I think most projects have this characteristic. And what I've seen is that the next time that company is asked to do some kind of custom thing, they're gonna do that spreadsheet, then just double everything and say that's the estimate. 
 Richard: They actually repainted all their trucks to say seven weeks instead of five. So they built a buffer into everything after that project. 
 Peter: Yeah. There's this old saying, right? We should underpromise and overdeliver. 
 Richard: Yeah. But leaders and customers figured it out. I remember a manager once saying he believed his teams were always sandbagging their estimates, so he would assume they're conservative and cut them in half to produce a deadline that would be more realistic and light a fire under their teams. 
 And it worked often enough (teams would actually hit that deadline) that it reinforced the belief for him, and he kept doing it. 
 Peter: So what we have created in the project management world is a situation where nobody trusts anybody's word. We spend a whole bunch of time and energy making up stories about how long things will take, and then other people make up different stories about what we meant by that story. 
 The result is nobody's really happy about it. 
 Richard: Right, and there are some corners of industries like software development, like the #NoEstimates push a few years ago, where people have said, nobody's good at this. We can't do it. Let's just make things small and focus on the most important things. And if you've got the right context for that, I actually think it's fantastic. 
 If you've got a stable team and you can always feed the most important work into an input queue, and you know they're gonna be maximizing value, they're gonna be working responsibly, I actually quite like that no estimates kind of approach. 
 But most of us keep trying to get better at estimating because it feels like the responsible, concrete, tangible thing to do. So we're in a situation where leaders keep asking teams for estimates. Teams keep trying to give them, and hoping this time it will be more accurate. 
 Peter: All of this plays right into what Daniel Kahneman called the Inside View. It's a description of how most of us learned how to estimate. When we're taking an inside view, we break things down into the components, just like Richard's construction crew did. Then for each piece of it, we assume the best case and ignore the broader context, and then we fixate on the unique details of the project. 
 Kahneman, when he wrote about this, told a really fascinating story about how this really is a heuristic. In other words, it's a decision-making shortcut that our brains use, and it's not something that's easy to overcome. He talked about how he and a group of colleagues were writing a textbook on judgment and decision making. When they tried to estimate how long it would take, each person thought through how long it would take to draft a chapter and edit it, and how long publishing would take, and they came up with an average of about two years. 
 Then one of the colleagues said, maybe we should look at the outside view of this (we talk about this in the book): how long have similar textbook projects taken for other groups? So they came up with what they called a reference class. And the reference class of other textbooks of similar length for a similar audience showed that most of those textbooks took seven to ten years. 
 Richard: Oh no. 
 Peter: And about 40% of them never got finished at all. Despite knowing this, Kahneman admitted that the team ignored the outside view and pushed ahead. The book ultimately took eight years to complete. So this, I think, summarizes how hard it is to switch to what Kahneman called that outside view. When we rely on that inside view, we just systematically underestimate costs and timelines, even when all of the evidence to the contrary is right in front of us. 
 Richard: I love it. And it's people that know better, they still pushed ahead with the inside view estimate. And this is really common because when you do those reference class comparisons, they're often way longer than you want it to be. 
 Another example of this, building on Kahneman's work, is Bent Flyvbjerg's work on megaprojects and how those tend to go. Even with lots of experts in the room, high stakes, and lots of money involved, they're still wrong in systematic and predictable ways. Flyvbjerg analyzed hundreds of megaprojects: bridges, tunnels, rail lines, IT systems. He wrote about this in his book, Megaprojects and Risk, which was actually the first textbook I had in civil engineering school at Cal Poly, which I think was great. 
 And Flyvbjerg found cost overruns of 28% for roads and 45% for rail, with outliers like California high-speed rail, and 90% for IT projects. One example he gives is the Channel Tunnel between England and France, which ended up costing 80% more than estimated and brought in less than half the expected revenue. 
 And these weren't rookie mistakes. This is governments, engineers, financiers with billions on the line. They still fall into that same optimism and planning-fallacy trap. Expertise and lots of experience don't protect us from the inside-view problem. So if experts with billions on the line can't estimate accurately, our average software projects and team initiatives are not gonna magically do better. 
 Peter: But this doesn't mean we're doomed to always be wrong. Thankfully, we can do what Kahneman taught instead of what he did in that first project and what he ended up learning. And also what Bent Flyvbjerg recommends now when he consults on big projects. When somebody comes to you with a project and they say, Hey, how long will this take? 
 Instead of breaking it down into the components and figuring out, what are the epics, what are the stories, what are the tasks, or however you do it. Before you start, take an outside view, use what he calls Reference Class Forecasting. So you look at similar projects, similar efforts. Go find out how long those took, about how much they cost, and use that as a forecast. 
 Just ask. What is this like and how long do those usually take? 
 Richard: Now, this does feel less satisfying. When you have the spreadsheet that breaks it all down and adds it back up, you can feel confident, and the total is usually shorter than what you'd get from actual historical experience. 
 Not to mention, this approach doesn't give a single number. Usually it gives ranges and probabilities: you can say things like, these tend to take this long. 
 While we're talking about probability, it's worth mentioning that the probabilities around these things aren't a normal distribution. This is a mistake we sometimes make when we're beginning to apply statistics to estimating or forecasting. 
 Some things are gonna be faster, some things are gonna be slower, and it'll all work out in the end. In fact, that came up in the conversation with that contractor. Could some of these things be underestimated, since they'd never done it before? And they said, oh, we'll make it up elsewhere. It'll be fine. 
 A common example that illustrates why the risk isn't shaped like that: think about flights you've taken, and the latest a flight has ever arrived relative to its scheduled arrival time. I've had some that were 24 hours late, or that never arrived at all because they got canceled. 
 I have never been on a flight in all my years of travel for work all over the world that arrived 24 hours early. It's not possible. The longest commercial flight is something like 18 or 19 hours. Longest one I've taken is probably 10. So I've never had a flight arrive 24 hours early, 'cause they can't. 
 It's the same thing with our tasks and features and projects and whatever we're doing. They can be a little bit faster than planned, and they can take dramatically longer than planned. So the risks are really unevenly distributed, and I think this makes it even more important to look at real past examples and at ranges and probabilities, where most outcomes land inside the range. 
 And be aware that when we're wrong, when we're outside the range, it can be really long in the other direction. So we may wanna do some other things to protect ourselves from that, too. 
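The ranges-and-probabilities idea can be sketched in a few lines of code. This is a minimal illustration of Reference Class Forecasting, not anything from the episode or from CAPED: the duration data is invented, and the nearest-rank percentile function is just one simple way to turn a reference class into a probabilistic forecast.

```python
import math

# Hypothetical reference class: actual durations (in weeks) of
# similar past projects. Note the long right tail and no left tail,
# matching the "flights are never 24 hours early" point.
reference_class = [5, 6, 6, 7, 7, 8, 9, 11, 14, 17]

def percentile(data, p):
    """Nearest-rank percentile: the smallest value such that at
    least p% of the data is at or below it."""
    s = sorted(data)
    k = max(0, min(len(s) - 1, math.ceil(p / 100 * len(s)) - 1))
    return s[k]

# Express the forecast as a range with probabilities, not a point:
p50 = percentile(reference_class, 50)  # half of similar projects done by here
p85 = percentile(reference_class, 85)  # a more conservative commitment date
print(f"50% of similar projects finished within {p50} weeks")
print(f"85% of similar projects finished within {p85} weeks")
```

With this sample data, the median is 7 weeks but the 85th percentile is double that, which is exactly the skew Richard describes: quoting only the median invites the basement-sign problem.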
 Peter: We need forecasts in order to figure out how to prioritize and allocate our money. You can think of prioritization as "where are we gonna place our bets?" Using accurate forecasting, like through Reference Class Forecasting, allows leaders to make better bets. 
 But as Kahneman experienced and as we have experienced, it requires us to shift how we think about forecasting, away from sort of that false sense of certainty that comes from a really precise work breakdown and estimating of all the pieces, and towards that more probabilistic thinking. 
 Richard: So you may actually need to prove that this works by running a Reference Class Forecasting approach in parallel with the traditional inside view over time, to see which is more accurate, kind of like Kahneman and his group of textbook authors experienced. Having both side by side can really paint a picture of how it works. 
 This is why we say forecasting is good, estimating is bad. Forecasting is based on previous data. Estimating tends to be a flawed guess about the future. 
 Peter: So the next time someone asks you for an estimate, instead of starting to break things down into its pieces, the inside view, go outside view and ask, what's the reference class? What usually happens? 
 But you may need to earn permission by trying an experiment. You could do that this week: replace an estimate with a forecast, maybe running both in parallel. 
 Richard: By the way, Reference Class Forecasting at a couple levels of detail is a key move in our Complexity Aware Planning, Estimation and Delivery approach, CAPED. 
 Reference Class Forecasting in CAPED serves an important purpose: giving leaders a good date for budget and planning decisions about an initiative while buying time to address core complexity. So even when you don't have all the details, you can still get a good forecast. But it is only one part. 
 The components of CAPED form a holistic system that helps organizations get the best of both worlds, agility and useful planning. For example, bringing complexity forward is a great way to reduce the risk of being wrong and not have that late or canceled flight issue that I mentioned earlier. 
 So if that's something that your organization needs, join us in October in Boston at our Certified CAPED Consultant Workshop, or send someone from your organization. 
 The level of complexity and the fierceness of competition around organizations is only increasing, and CAPED may be the framework that helps you and your team break through and stand out. 
 Peter: And if you get value from the show and you wanna support it, the best thing you can do if you're watching on YouTube is subscribe, like the episode, and click the bell icon to get notified of new episodes. And drop us a comment with your thoughts on how you do estimation today and how you might want to do it in the future. 
 If you're listening on the podcast, a five star review makes a huge difference in whether other people who'd benefit from the show find it or not. 
 Thanks for tuning in to this episode of The Humanizing Work Show and we'll see you next time.
