by Melissa Winter
| August 27, 2015
Original Post Date: June 2, 2015
At PRICE Systems, we always tout calibration against past actual cost and schedule data as the best way to get to an accurate, data-driven estimate. I received a call recently asking whether we have a set of best practices for calibration. In reality, the calibration step itself is fairly simple; it's the data collection and normalization that happens beforehand that really matters. This is partly because the assumptions you make when collecting the data and building the calibration file should drive how you use the data in future models. The more consistency you can maintain, the more accurate you should be.
Probably the simplest and most common form of calibration is Product Level Calibration to Unit Production Cost. Here you know the Unit Cost of an item (and its weight), and you calibrate Manufacturing Complexity to reproduce the appropriate Unit output. You can also use the total number of units produced to set a range on the learning curve, and apply similar assumptions as you go forward with the future estimate. Of course, if you can perform unit production cost calibrations on many related systems, you can apply relationships among the data points to fine-tune your future estimates as well.
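To make the idea concrete, here is a minimal sketch of back-solving a complexity factor from a known unit cost. This is a toy cost-estimating relationship invented for illustration only, not the PRICE model: the weight exponent, the 90% learning slope, and the function names are all assumptions.

```python
import math

def toy_cer(weight_lbs: float, complexity: float, unit_number: int,
            learning_slope: float = 0.90, weight_exponent: float = 0.7) -> float:
    """Toy unit production cost: complexity * weight^exponent, adjusted
    down a unit-theory learning curve (T_n = T_1 * n^b)."""
    b = math.log(learning_slope) / math.log(2)  # learning-curve exponent
    return complexity * weight_lbs ** weight_exponent * unit_number ** b

def calibrate_complexity(observed_unit_cost: float, weight_lbs: float,
                         unit_number: int, **kwargs) -> float:
    """Back-solve the complexity factor that reproduces a known unit cost."""
    baseline = toy_cer(weight_lbs, complexity=1.0, unit_number=unit_number, **kwargs)
    return observed_unit_cost / baseline

# Example: calibrate against a 120 lb item whose 10th unit cost $250k,
# then reuse the calibrated complexity on a similar 150 lb future item.
cx = calibrate_complexity(250_000.0, weight_lbs=120.0, unit_number=10)
future_unit_1 = toy_cer(weight_lbs=150.0, complexity=cx, unit_number=1)
```

The point is the workflow, not the formula: hold the observable inputs (weight, unit number) fixed, solve for the unobservable driver (complexity), and carry that driver forward into the new estimate.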
A number of estimators have also asked me specifically about calibrating space projects, where it is common to model the space flight units as prototype units. If you model a project as all development costs, you must calibrate to development activities. You may only be able to collect total development cost by component or subsystem, which will likely include Development Engineering, Development Manufacturing, and Development Tooling and Test. This case requires a slightly more complex approach than calibration to Unit Production Cost, and I would like to share it with you here.
In this example, you would gather and normalize historical cost data from various projects in a comparable format, and model the historical projects using as detailed a description as you can (similar to any other type of calibration). If you have to calibrate to total development cost (which would include Development Engineering, Development Manufacturing, and Development Tooling and Test activities), make sure to address all of the major development cost drivers: obviously Weight of Structure and Electronics, Manufacturing Complexities, and Operating Specifications, but also Percent of New Design and Engineering Complexity, which have a significant impact on Development Engineering costs (ideally you will have some data or knowledge to support inputs for these parameters on the past projects). A common metric for Percent of New Design may be based on the number of drawings that need to be modified for the project, or, if you don't have such detailed information, you may use a guideline scale: major modification to an existing design = 70% new, minor modification to an existing design = 30% new, a "copy" of an existing design = 10% new, and so on. You could create a similar scale for Engineering Complexity to reflect the experience of the team or organization with building similar products. These scales help you create commonality across calibrations. And once you have completed the historical calibrations, make sure to consider the same types of details when going forward with your future estimates.
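A guideline scale like the one above is easy to encode so every analyst applies it the same way. In this sketch the percentages come from the example in the post, but the category names and the drawing-count fallback function are assumptions for illustration.

```python
# Guideline scale for Percent of New Design (values from the example above;
# category names are assumptions).
NEW_DESIGN_SCALE = {
    "major_modification": 0.70,  # major modification to an existing design
    "minor_modification": 0.30,  # minor modification to an existing design
    "copy": 0.10,                # a "copy" of an existing design
}

def percent_new_design(category: str) -> float:
    """Look up Percent of New Design from the guideline scale."""
    try:
        return NEW_DESIGN_SCALE[category]
    except KeyError:
        raise ValueError(f"Unknown category {category!r}; "
                         f"expected one of {sorted(NEW_DESIGN_SCALE)}")

def percent_new_from_drawings(modified: int, total: int) -> float:
    """Drawing-count metric: fraction of drawings needing modification."""
    if total <= 0 or not 0 <= modified <= total:
        raise ValueError("invalid drawing counts")
    return modified / total
```

Capturing the scale in one shared place is what creates the commonality across calibrations that the post recommends: the same inputs yield the same Percent of New Design every time.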
Always consider the base year of the calibrations and the escalation tables, especially if the data points span a wide time frame. If you have the data in terms of labor hours, that removes inflation as a variable, which can simplify the process significantly.
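When you do have to work in dollars, the base-year adjustment is a simple index ratio. The sketch below assumes a hypothetical escalation index table with 2015 as the base year; the index values are made up for illustration, and in practice you would use your organization's published tables.

```python
# Hypothetical escalation index, base year 2015 = 1.000 (illustrative values).
ESCALATION_INDEX = {
    2010: 0.905,
    2012: 0.942,
    2015: 1.000,
}

def to_base_year(cost: float, cost_year: int, base_year: int = 2015) -> float:
    """Convert a then-year cost to base-year dollars via the index ratio."""
    return cost * ESCALATION_INDEX[base_year] / ESCALATION_INDEX[cost_year]

# A $1.0M actual from 2010 expressed in 2015 dollars:
# to_base_year(1_000_000, 2010)  ->  about $1.10M
```

Normalizing every historical data point to one base year before calibrating is what makes the resulting complexity values comparable across projects collected years apart.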
The moral of the story is that the best practice for a given case depends on what data is available and what you want to do with it going forward. What standards do you consider best practices for calibration?