Overconfidence and underestimating risk
Recently, a study by McKinsey & Company evaluated the demand forecasts of pharmaceutical companies.1 Here are two key findings from the study:
- For drugs that have been on the market for two to five years, the average error in demand estimates exceeded 40% of the actual revenues.
- One to two years prior to launching a new drug, the average error in estimating demand exceeded 70% of the actual demand.
The last statistic is troublesome for PPM's current "best" practices. FDA approval takes two years, so an average forecasting error of 70% applies to drugs that have already finished clinical trials. The demand estimates for drugs being considered for phase I clinical trials, made roughly ten years before launch, may be close to meaningless. Project evaluation and selection based on these forecasts may be more random than right. (For an example and more results from the McKinsey study, see my discussion, "Revenue forecasting errors dominate project evaluations.")
If errors in demand forecasts produce unreliable project evaluations, harming project selection and destroying value, why do PPM managers use such forecasts? In a word: overconfidence. Overconfidence is the technical term for having too much confidence in one's estimates. The examples above concern revenue forecasts, but the bias applies equally to the costs, probabilities, project attributes and other values that managers estimate when performing PPM. Research shows that managers are almost universally overconfident in their estimates.
Before presenting some research results, let's see how scientists measure overconfidence. A scientist asks a manager to estimate a variable. Instead of soliciting a point estimate, however, the scientist asks the manager to specify a range, a low value and a high value, such that the manager is X% confident the correct answer lies within it. For example, at a confidence level of 90%, the manager specifies a low value and a high value so that he or she is 90% confident that the variable's true value falls within that range. The scientist repeats this, asking the manager to specify a 90% confidence interval for many variables.
If the manager correctly assesses his or her knowledge, only 10% of the variables' true values will fall outside the specified ranges. If more of the true values fall outside the ranges, the manager made the ranges too narrow: he or she has too much confidence in his or her ability to estimate, believing the estimates are closer to the true values than they actually are. The manager is overconfident.
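To make the scoring concrete, here is a minimal sketch in Python of how such a calibration test can be tallied. The intervals and true values are hypothetical placeholders, not data from any study.

```python
# Minimal sketch: scoring a confidence-interval calibration test.
# The intervals and true values below are hypothetical placeholders.

def calibration_miss_rate(intervals, true_values):
    """Fraction of true values that fall outside the stated intervals."""
    misses = sum(1 for (low, high), truth in zip(intervals, true_values)
                 if not (low <= truth <= high))
    return misses / len(true_values)

# A manager's ten 90% confidence intervals (hypothetical numbers).
intervals = [(100, 200), (5, 20), (1_000, 4_000), (50, 80), (0.1, 0.5),
             (10, 30), (200, 600), (2, 8), (40, 90), (300, 900)]
# The true values of the ten quantities (also hypothetical).
true_values = [350, 12, 6_500, 65, 0.9, 45, 150, 5, 130, 1_200]

target_miss_rate = 0.10  # a 90% interval should miss only 10% of the time
actual_miss_rate = calibration_miss_rate(intervals, true_values)

print(f"Target miss rate: {target_miss_rate:.0%}")  # 10%
print(f"Actual miss rate: {actual_miss_rate:.0%}")  # 70% here: overconfident
```

A well-calibrated manager's 90% intervals would miss only about 10% of the time; the hypothetical answers above miss 70% of the time, which is the signature of overconfidence.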
Here is an example of one such test. There are ten questions, and the test asks for 90% confidence intervals. Fill in the table below and, when done, check the answers at the end of this discussion. If fewer than 90% of the correct answers fall within your ranges, you are overconfident about your knowledge.
| Question | 90% Interval: Lower | 90% Interval: Upper | Correct Answer |
|---|---|---|---|
| 1. How many patents did the U.S. Patent and Trademark Office issue in 2013? | | | |
| 2. How many of Fortune's 2010 "Global 500," the world's biggest industrial corporations (in sales), were Chinese? | | | |
| 3. How many passenger arrivals and departures were there at Atlanta's Hartsfield-Jackson Airport in 2013? | | | |
| 4. What was the weekly circulation, both print and digital, of the Wall Street Journal as of March 2013? | | | |
| 5. How many personal computers (desktop and laptop) did Dell sell in 2013? | | | |
| 6. How many master's degrees in business or management were conferred in the United States for the 2009-2010 academic year? | | | |
| 7. How many fatalities occurred worldwide in commercial airline accidents from 2001-2010? | | | |
| 8. What is the shortest navigable distance (in statute miles) between New York City and Istanbul? | | | |
| 9. What were Toyota's worldwide sales of automobiles in 2013? | | | |
| 10. What was the total US trade deficit with Greece in 2013 (enter as a positive number)? | | | |
Now that you've taken the above test, let's see how managers performed on similar tests. The table in Figure 3 presents the results from testing thousands of managers in seven industries. Unlike the prior test, the studies tested managers on knowledge of their own industries and firms. In the table, the target error rate reveals the confidence interval of the tests. If asked to make 90% confidence intervals around their estimates, the target error rate is 10%. Notice that the actual error rates are much greater than the target rates, showing that managers were overconfident.
| Industry of Managers | Subject Matter of Tests | Target Error Rate (%) | Actual Error Rate (%) | Sample Size |
|---|---|---|---|---|
| Advertising | Industry | 10 | 61 | 750 |
| Advertising | Industry | 50 | 78 | 750 |
| Computers | Industry | 5 | 80 | 1,290 |
| Computers | Firm | 5 | 58 | 1,290 |
| Data processing | Industry | 10 | 42 | 252 |
| Money management | Industry | 10 | 50 | 480 |
| Petroleum | Industry & Firm | 10 | 50 | 850 |
| Petroleum | Industry & Firm | 50 | 79 | 850 |
| Pharmaceutical | Firm | 10 | 49 | 390 |
| Security analysis | Industry | 10 | 64 | 497 |
As the table shows, managers were consistently and extremely overconfident. In the worst case, managers in computer companies were asked to specify 95% confidence intervals, so the true values should fall outside of the managers' specified ranges only 5% of the time. Yet the true values were outside of the managers' ranges 80% of the time.
Here are some consequences of overconfidence in PPM:
- Managers have too much confidence in their estimates of revenues, costs, probabilities, project attributes and other qualities of projects. Their project evaluations have much greater errors and their projects have much greater risks than managers estimate.
- Monte Carlo analysis does not solve this problem. Rather, overconfidence causes Monte Carlo analysis to underestimate project and portfolio risk (see the simulation sketch after this list).
- Managers underestimate the number and frequency of project selection errors: canceling valuable projects (false-negatives) and funding poor projects (false-positives). As a result, project failure rates are too high and successes are too few. (See my discussion, "How erroneous data causes project selection errors.")
- Project selection techniques have varying sensitivity to project evaluation errors and project risk. Generally, more complex techniques are more sensitive to projects' risks and evaluation errors. By underestimating risks and errors, managers may choose project selection techniques that are too sophisticated, causing avoidable project selection errors and destroying value. (See the section, "Reliability and project selection," in my discussion, "Where's the feedback?")
- Overconfidence affects scenario planning as well (as do other cognitive biases). Managers place too much confidence in scenario planning and too easily dismiss the possibility of experiencing unanticipated events.
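To illustrate the Monte Carlo point above, here is a small simulation sketch. It assumes a single project whose value depends on a revenue forecast; all distributions and numbers are illustrative, not taken from any study.

```python
# Minimal sketch: how overconfident (too-narrow) input ranges make a
# Monte Carlo analysis understate risk. All numbers are illustrative.
import random
import statistics

random.seed(1)

N = 100_000
COST = 80.0  # fixed development cost (illustrative units)

def simulate_npv(revenue_sd):
    """Draw project NPVs with revenue ~ Normal(100, revenue_sd)."""
    return [random.gauss(100.0, revenue_sd) - COST for _ in range(N)]

# The manager's stated 90% interval implies a narrow revenue spread;
# the forecasting-error data suggest the real spread is much wider.
npv_overconfident = simulate_npv(revenue_sd=10.0)  # manager's assumption
npv_realistic     = simulate_npv(revenue_sd=40.0)  # closer to observed errors

for label, npvs in [("Overconfident inputs", npv_overconfident),
                    ("Realistic inputs", npv_realistic)]:
    loss_prob = sum(1 for v in npvs if v < 0) / N
    print(f"{label}: NPV std dev = {statistics.pstdev(npvs):5.1f}, "
          f"probability of loss = {loss_prob:.0%}")
```

With the manager's narrow revenue spread, the model reports roughly a 2% chance of losing money; with a spread consistent with the observed forecasting errors, the chance is closer to 30%. The simulation is only as honest as the ranges fed into it.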
An additional consequence is fundamental, but it requires some explanation. Current PPM practices provide only one criterion for selecting projects: maximizing portfolio value. However, there is another criterion, one that is well known and important in decision analysis: investing to resolve uncertainty. The value created by resolving uncertainty can greatly exceed the value created by selecting projects to maximize portfolio value, and when such situations occur, portfolio optimization is suboptimal.
One can ask, "When should one invest to resolve uncertainty instead of selecting projects to maximize portfolio value?" As uncertainty and risk increase, investing to resolve uncertainty becomes more valuable while selecting projects to maximize value becomes less valuable. Overconfidence, which leads managers to underestimate uncertainty, causes managers to favor portfolio optimization, but that is often the wrong choice.
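A stylized numerical example may help. Suppose a project is worth +50 if demand turns out high and -40 if it turns out low, each with probability 0.5; the payoffs and probabilities below are purely illustrative.

```python
# Stylized sketch: comparing "fund on expected value" with "resolve the
# uncertainty first." All payoffs and probabilities are illustrative.

p_success = 0.5
payoff_success = 50.0   # project value if demand turns out high
payoff_failure = -40.0  # project value if demand turns out low

# Strategy 1: optimize on the expected value (fund if it is positive).
ev_fund_now = p_success * payoff_success + (1 - p_success) * payoff_failure

# Strategy 2: invest to resolve the uncertainty first, then fund only in
# the favorable state (the classic expected value of perfect information).
ev_with_information = p_success * payoff_success + (1 - p_success) * 0.0
value_of_resolving_uncertainty = ev_with_information - max(ev_fund_now, 0.0)

print(f"Fund now on expected value:       {ev_fund_now:6.1f}")
print(f"Resolve uncertainty, then decide: {ev_with_information:6.1f}")
print(f"Value of resolving uncertainty:   {value_of_resolving_uncertainty:6.1f}")
```

Funding on the expected value yields 5, while resolving the uncertainty first and funding only in the favorable state yields 25, so resolving the uncertainty is worth up to 20 in this example. The narrower a manager believes the uncertainty to be, the smaller this value appears, which is why overconfidence tilts the choice toward straight portfolio optimization.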
1 Cha, M., B. Rifai and R. Sarraf (2013), "Pharmaceutical forecasting: throwing darts?" Nature Reviews Drug Discovery, 12(10), pp. 737-738.
The answers to the questions are: (1) 302,962; (2) 95; (3) 92,389,023; (4) 2,378,827; (5) 103,890; (6) 4,712; (7) 5,757; (8) 9,980,000; (9) 17; (10) 215,440,000.
After reading my discussions, many managers wish to share their experiences, thoughts and critiques of my ideas. I always welcome and reply to their comments.
© 2014 Pipeline Physics. All rights reserved.