Original Post Date: Tuesday, June 15, 2010 

I recently read a great paper by Glenn Butts and Kent Linton, NASA's Joint Confidence Level Paradox – A History of Denial.  In it, the authors present a very detailed analysis of many failed NASA projects along with some compelling theories on why so many projects fail and what can be done going forward.  I'm not here to summarize their findings (interested parties can hit the link above and learn for themselves), but there was one extremely interesting jewel in this paper that I felt the need to share.

The reason I think it's important to share this is that so many of us in the cost estimating community rely heavily on expert judgment as a means to perform or validate estimates.  On page 48 of the paper, a section entitled "Experts have a High Opinion of Their Own Opinions" begins.  In this section the authors describe an experiment in which researchers took a group of smart people (Harvard Business School students) and asked each to give high/low range answers to several numerical questions, choosing ranges wide enough that they had a 98 percent chance of being correct and only a 2 percent chance of the correct answer falling outside the range they selected.  For example: "I am 98% confident that tomorrow's temperature will be between 50 and 120° F."  There were no limitations on the ranges they could select, and yet the students' failure rate was close to 45%.  Similar studies have had similarly lackluster results.  To paraphrase the authors' conclusion:

“We overestimate what we really know while underestimating the possibility of our being wrong.”
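To make the arithmetic of that experiment concrete, here is a minimal sketch (my own illustration, not code from the paper) of how such a calibration test can be scored.  A respondent who is as well calibrated as claimed should see the true answer fall outside their ranges only about 2% of the time; the hypothetical data below misses far more often, much like the students in the study.

```python
def miss_rate(responses, truths):
    """Fraction of stated ranges that fail to contain the true answer."""
    misses = sum(
        1 for (low, high), truth in zip(responses, truths)
        if not (low <= truth <= high)
    )
    return misses / len(truths)

# Hypothetical data: (low, high) ranges from one respondent and the true values.
responses = [(50, 120), (1000, 5000), (10, 30), (200, 400), (5, 15)]
truths    = [75, 7500, 22, 390, 45]

print("Stated miss rate:   2%")
print(f"Observed miss rate: {miss_rate(responses, truths):.0%}")  # 40% here; the study saw roughly 45%
```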

The authors are quick to point out, and I completely agree, that this is not evidence that expert judgment is invalid, just a warning to those who depend exclusively or heavily on it.  No estimate should be developed in a vacuum.  The more methods (parametric cost estimation included) used to arrive at an estimate, the more credible that estimate and the higher the confidence level we can place in it.
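As a rough illustration of that last point (my own sketch, with hypothetical numbers, not anything prescribed in the paper), even a simple cross-check of an expert's range against one independent estimate, such as a parametric one, is a cheap way to catch the kind of overconfidence described above.

```python
def cross_check(expert_low, expert_high, independent_estimate):
    """Return True if the independent estimate falls inside the expert's range."""
    return expert_low <= independent_estimate <= expert_high

expert_range = (80.0, 95.0)    # expert judgment, $M (hypothetical)
parametric_estimate = 112.0    # parametric model output, $M (hypothetical)

if not cross_check(*expert_range, parametric_estimate):
    print("Estimates disagree -- revisit assumptions before committing to a number.")
```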