by Arlene Minkiewicz
| September 26, 2014
Proposal estimates based on grassroots engineering judgment are necessary to achieve company buy-in, but they are often unconvincing or out of sync with the price-to-win. This tension can be resolved by comparing the grassroots estimate to an estimate developed using data-driven parametric techniques. Parametric estimates apply statistical relationships to historical project data to determine likely costs for a new project. Of course, for a parametric model to properly support this cross-check of the grassroots estimate, the right data must be fed into the model, which most likely requires the estimator to reach out to various subject matter experts.
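To make the idea of a statistical relationship concrete, here is a minimal sketch of a parametric cost model: a power law of the form cost = a × size^b fitted to historical project data by least squares on the log-transformed values. The project sizes and costs below are invented for illustration; real parametric models (and their calibration data) are far richer than this.

```python
import math

# Hypothetical historical projects: (size in KSLOC, actual cost in person-months).
# These numbers are illustrative placeholders, not from any real dataset.
history = [(10, 26), (25, 70), (50, 150), (100, 320), (200, 700)]

# Fit cost = a * size^b by ordinary least squares on the log-transformed data:
# log(cost) = log(a) + b * log(size).
xs = [math.log(size) for size, _ in history]
ys = [math.log(cost) for _, cost in history]
n = len(history)
mx, my = sum(xs) / n, sum(ys) / n
b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
a = math.exp(my - b * mx)

def estimate_cost(size_ksloc):
    """Parametric estimate for a new project of the given size."""
    return a * size_ksloc ** b

print(f"fitted model: cost = {a:.2f} * size^{b:.2f}")
print(f"estimate for a 75 KSLOC project: {estimate_cost(75):.0f} person-months")
```

A cross-check then amounts to comparing `estimate_cost(size)` for the proposed project against the grassroots number and investigating any large gap.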
Before reaching out to those subject matter experts, first read this blog post by Robert Stoddard of the Software Engineering Institute (SEI) on the SEI’s research effort, Quantifying Uncertainty in Early Life Cycle Cost Estimation (QUELCE). The methodology the team is developing relies heavily on domain expert judgment, so one of the first challenges they took on was improving the accuracy and reliability of that judgment. As a jumping-off point they relied on the work of Douglas Hubbard, author of “How to Measure Anything”.
The technique they adapted is referred to as “calibrating your judgment”. Experts are given a set of questions and asked to provide upper and lower bounds such that they are 90 percent certain the answer falls within them. Feedback then shows whether they are too conservative (always right because they set the bounds too wide) or overconfident (frequently wrong because they set them too narrow). Hubbard’s research indicates that most people start off highly overconfident but, through repeated rounds of feedback, become better at applying their expertise realistically.
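The scoring behind such a calibration exercise is simple to sketch: count how often the true value actually lands inside the expert's stated bounds and compare that hit rate to the 90 percent target. The questions and numbers below are made up for illustration; they are not from the SEI study or Hubbard's materials.

```python
TARGET = 0.90  # the expert claims 90% of true values fall inside their bounds

def hit_rate(answers):
    """Fraction of true values that fell inside the stated (low, high) bounds."""
    hits = sum(1 for low, high, truth in answers if low <= truth <= high)
    return hits / len(answers)

# Ten (lower bound, upper bound, true answer) triples from a practice round.
answers = [
    (1900, 1915, 1912), (200, 400, 450), (5, 15, 12), (50, 80, 95),
    (1000, 2000, 1500), (30, 40, 33), (2, 4, 7), (600, 900, 750),
    (10, 20, 14), (100, 150, 180),
]

rate = hit_rate(answers)
print(f"hit rate: {rate:.0%} (target {TARGET:.0%})")
if rate < TARGET:
    print("likely overconfident: widen your bounds")
elif rate > TARGET:
    print("likely too conservative: tighten your bounds")
```

Here only 6 of the 10 true values land inside the bounds, so the feedback tells the expert to widen their intervals; over repeated rounds the hit rate should converge toward the stated 90 percent.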
This is a very interesting study, and I think anyone relying on experts to guide important bid and proposal decisions should think about the confidence (or overconfidence) of their subject matter experts.