Predictive Analytics for Improved Cost Management
Original Post Date: Thursday, October 7, 2010 Ahhhh, the 80s… a challenging (but often confusing) time in an evolving computing world.  Working in 1985 as a software estimator and SQA engineer in a quality assurance department that “audited” real-time projects using new concepts like OOD & OOP… well, you get the picture.  It was a great time to get immersed in great work.  And the good news: that company’s process, as well as its developers, was bullish on young estimation/quality types asking plenty of questions… as long as they were of the Yes-No variety.  And ...
Original Post Date: Monday, December 6, 2010  In his August blog entry here, Zach Jasnoff outlined typical client perspectives on the different types of analyses that TruePlanning can accommodate. Working on a large project, we’ve experienced situations where the initial intent and model structuring later have the boundaries of model appropriateness stretched. An Analysis of Alternatives (AoA), for example, is meant to measure deltas between a baseline and its alternatives. If common costs “wash,” they can be excluded… which becomes an issue when the result is later treated as a Rough Order of Magnitude for customer budgeting.  Likewise, if a ROM or Independent Cost Estimate ...
Original Post Date: Thursday, August 12, 2010 The late Philip Crosby’s “Quality is Free” taught us that an investment in quality is more than offset by the prevention of defects, grounded in an understanding of requirements. Only with the latter can lack of conformance (and its subsequent costs) be captured and hence quality quantified. So how then is Parametrics relevant?  Parametric estimating is more than cost modeling. Our craft represents an initial consulting function into the accuracy and completeness of program planning concepts. Our customers trust us to know when to ask and when to supplement. Yes, we are mathematical and financial modelers too. But I’d suggest that “Parametrics is ...
Am I correct in understanding that in order to determine Payload PM, SE, MA [e.g., WBS 5.1, 5.2, 5.3], Payload I&T/GSE [e.g., WBS 5.5, 5.6], Spacecraft PM, SE, MA [e.g., WBS 6.1, 6.2, 6.3], and Spacecraft I&T/GSE [e.g., WBS 6.6] costs, each user must export TP results into an Excel spreadsheet, then apply their own factor to obtain the aforementioned WBS costs because TP lumps all of those costs into 1 PM bucket, 1 SE bucket, 1 MA bucket and 1 I&T bucket? And we have to do this every time we make a change to determine those costs? Gosh ...
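The export-and-factor step described in this question can be scripted so it doesn’t have to be redone by hand after every model change. A minimal sketch of that allocation logic (the WBS labels, bucket value, and factor values below are illustrative assumptions, not TruePlanning outputs or API calls):

```python
def allocate_bucket(bucket_cost, factors):
    """Split one lumped cost bucket (e.g., the single PM bucket)
    across WBS elements using user-supplied allocation factors.
    The factors must sum to 1.0 so no cost is lost or double-counted."""
    total = sum(factors.values())
    if abs(total - 1.0) > 1e-9:
        raise ValueError(f"allocation factors must sum to 1.0, got {total}")
    return {wbs: bucket_cost * f for wbs, f in factors.items()}

# Hypothetical example: split a lumped $500K PM bucket between
# Payload PM (WBS 5.1) and Spacecraft PM (WBS 6.1).
pm = allocate_bucket(500.0, {"5.1 Payload PM": 0.4, "6.1 Spacecraft PM": 0.6})
# → {"5.1 Payload PM": 200.0, "6.1 Spacecraft PM": 300.0}
```

The same function applies to the SE, MA, and I&T buckets; only the factor tables change, so a rerun after a model update is a single script execution rather than a fresh spreadsheet exercise.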
From the presentation, it looked like the TP risk area included the capability for percent inputs around the point inputs, rather than having to enter values? Yes, we’ve added Percent (as well as Offset) to the FRISK input method to set pessimistic and optimistic values off the point values.  In the 1st example below I’ve used +20% and -10%, respectively, around the Weight of Structure’s point value.  Per the 2nd example below, we can do likewise in our Monte Carlo companion applications, where our new custom logic satisfies NASA’s typical approach for mass growth-risk with Optimistic=CBE {i.e., your point value}, ...
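The percent-around-the-point idea above can be illustrated with a small Monte Carlo sketch using a triangular distribution, with optimistic and pessimistic bounds set as percentages off the point value (the -10%/+20% figures echo the example in the answer; the function name and structure are assumptions, not the FRISK implementation itself):

```python
import random

def triangular_samples(point, opt_pct, pess_pct, n=10_000, seed=42):
    """Draw samples from a triangular distribution whose optimistic and
    pessimistic bounds are expressed as percentages around the point value.
    E.g., opt_pct=-0.10 puts the optimistic bound at 90% of the point."""
    low = point * (1 + opt_pct)    # optimistic bound
    high = point * (1 + pess_pct)  # pessimistic bound
    rng = random.Random(seed)      # fixed seed for a repeatable sketch
    return [rng.triangular(low, high, point) for _ in range(n)]

# -10% optimistic / +20% pessimistic around a point value of 100.
samples = triangular_samples(100.0, opt_pct=-0.10, pess_pct=0.20)
mean = sum(samples) / len(samples)
# Analytic triangular mean = (low + mode + high) / 3 = (90 + 100 + 120) / 3
```

Note that the sample mean sits above the point value: skewing the pessimistic bound further out (+20% vs. -10%) shifts the expected cost upward, which is exactly why risk-adjusted estimates exceed the point estimate.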
Do any of the Space catalogs (objects) cover the launch vehicle and stage 1 & 2 engines?  Do the defaults cover manned space? No, not soon at least.  HOWEVER, we and multiple NASA Centers have used the True-Hardware/Software catalogs for many years to estimate launch vehicles and manned vehicles.  Currently the Space Missions Catalog objects (i.e., estimating models) support robotic and unmanned missions.  But again, in the TruePlanning catalogs (HW, SW, Systems, etc.), there has always been an Operating Specification choice for manned space per below— To watch the "Best Practices using the TruePlanning Space Missions" Webinar, click here.
Will other recent programs be added (to basis/analysis/CERs)? Yes, typically within one year of launch. We don’t often need to make changes to the component estimates but we test them as new data comes in. The support functions get minor tweaks each time we add another data point. To watch the "Best Practices using the TruePlanning Space Missions" Webinar, click here.
Do you plan to add MAVEN any time soon? Just learned that yes, Maven will be added to the basis & analysis in the near future. To watch the "Best Practices using the TruePlanning Space Missions" Webinar, click here.
In your webinar presentation: Slide 21 (NASA WBS mapping for WBS 4.0) seems to indicate that this WBS is estimated by TP; Slide 35 (Space Systems Object) seems to indicate the same; and Slide 48 (Mapping Rules) also seems to indicate this, but the Notes column on that very slide contradicts it by saying "typically passed-thru." What would be the right approach: should we have a pass-thru number for this WBS (4.0), or should we assume that the TP estimate includes it? Good catch!  ...
If you look at slide 68 of the presentation (Reference Mission Set Used in the Chicago Development Cost Model), it says that the data set includes “actual costs for completed projects and projected costs for projects in development and near-term mission candidates”. Does this mean that historical actuals were used and there was no normalization of the data?  Do the actual costs include contractor fee and any subcontractor burdens, or were those stripped out?  If contractor fees or burdens were removed, how do we go about adding them to the model estimate?  Is there an input parameter for that ...