Original Post Date: Wednesday, October 17, 2012

A frequent question from students and consulting clients is how to estimate software size when either:

  1. detailed functional requirements descriptions are not yet documented, or, even when they do exist,
  2. the resources (in cost and time) necessary for detailed function point (“FP”) counting are prohibitive.

If appropriate analogies or detailed use cases are not available, and there is not even a nominal understanding of the pre-design software transactions and data functions, fast function point counting can be a non-starter.  Hence, the challenge is to find a basis for estimating functional measure (i.e., size) ahead of the fidelity required by the standard FP counting methods defined by the International Function Point Users Group (“IFPUG”).

I was pleasantly surprised to learn from a colleague in our European office that a solution exists to this frequent issue… and we’ve since put it into practice with a complex project, assisting a customer with sizing that fit immediately as function points into TruePlanning® software component objects, for new and adapted code.  The take-away:  uncertainty in requirements definition and classification is not a show-stopper.  IFPUG has, in fact, been working on a supplemental approach since 1997, when it first introduced its “Early & Extended FP” method. 

Re-released in 2006 as the Early & Quick Function Point (“E&QFP”) method, the concept extends the IFPUG standard approach of counting and classifying, for each software component’s set of explicit requirements, the number of logical transaction processes and data storage groups required.  {These two sets of base functional components (“BFCs”) are well-documented by IFPUG as five function types:  External Input (EI), External Output (EO) and External Inquiry (EQ) for transactions;  Internal Logical File (ILF) and External Interface File (EIF) for data storage.}

In the E&QFP approach, the counting of unadjusted function points (for later scaling by the Value Adjustment Factor) considers different “aggregation” levels of potential BFC identification for each set of transaction and data requirements that constitutes the current understanding of the software components.  In other words, the FP process can now accommodate varying levels of clarity about transaction and data needs.
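For readers who haven’t seen that last scaling step, the Value Adjustment Factor comes from the 14 IFPUG General System Characteristics, each rated 0 to 5.  A minimal sketch (the ratings below are invented):

```python
def value_adjustment_factor(gsc_ratings):
    """gsc_ratings: the 14 General System Characteristic scores, each 0-5 (IFPUG)."""
    assert len(gsc_ratings) == 14 and all(0 <= r <= 5 for r in gsc_ratings)
    return 0.65 + 0.01 * sum(gsc_ratings)

def adjusted_fp(unadjusted, gsc_ratings):
    """Scale an unadjusted FP count by the Value Adjustment Factor."""
    return unadjusted * value_adjustment_factor(gsc_ratings)

# Hypothetical ratings for the 14 GSCs; VAF = 0.65 + 0.01*42 = 1.07.
ratings = [3] * 14
print(adjusted_fp(46, ratings))  # 46 * 1.07 = 49.22
```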

For reference, consult the “E&QFP Reference Manual 1.2 for IFPUG method, Release 3.0” published in September 2009 by the Data Processing Organization (DPO) in Italy.  {I was very happy to learn that this Release 3.0 was based on analysis of ISBSG data, which PRICE is currently using in its calibrations of our core TruePlanning® estimation algorithms.}  The manual provides excellent definitions of four aggregation levels, each with its own taxonomy of function types (based on the possible degrees of identification/classification of software component requirements into BFCs), as well as tables of unadjusted FP values for the respective transaction and data function types, each measured at minimum, most likely and maximum uncertainty points. 

Rather than elaborate on each level, I’ll just provide an overview here:

  1. The 1st aggregation level maps identically to the IFPUG functional component types above.
  2. The 2nd aggregation level supports the cases where insufficient detail prevents classifying requirements into identifiable IFPUG BFCs, or where the function type can be identified but an IFPUG complexity cannot be assigned.  At this level, generic type assignments are made for transaction and data requirements.
  3. At the 3rd aggregation level, when even that is not possible for a proposed software component, transactions are characterized as groups of undefined BFCs, either “typical” Create-Read-Update-Delete processes or “general” unclassified elementary processes.  For data, the 3rd level counts groups of unclassified logical files, rather than the individual generic ILFs/EIFs of the 2nd level.
  4. The 4th level assumes none of the previous classifications is possible, taking a macro-level count of general process groupings from the 3rd level;  data is assumed inherent.
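To illustrate how counts captured at mixed aggregation levels might roll up into a three-point (minimum / most likely / maximum) unadjusted-FP estimate, here is a rough Python sketch.  The per-item values are placeholders, not the published E&QFP tables; consult the DPO manual for the real figures.

```python
# Each requirement is recorded at whatever aggregation level its detail supports,
# with a (min, likely, max) unadjusted-FP contribution from a lookup table.
# NOTE: these values are illustrative placeholders, NOT the DPO-published E&QFP tables.
EQFP_TABLE = {
    ("L1", "EI_avg"):          (4, 4, 4),      # level 1: fully classified IFPUG BFC
    ("L2", "generic_process"): (4, 5, 7),      # level 2: type known, complexity not
    ("L3", "typical_CRUD"):    (14, 16, 19),   # level 3: group of undefined BFCs
    ("L4", "macro_process"):   (60, 80, 105),  # level 4: macro process grouping
}

def three_point_ufp(items):
    """items: list of (level, function_type, count). Returns (min, likely, max) UFP."""
    totals = [0, 0, 0]
    for level, ftype, count in items:
        lo, ml, hi = EQFP_TABLE[(level, ftype)]
        totals = [t + v * count for t, v in zip(totals, (lo, ml, hi))]
    return tuple(totals)

# A component understood at mixed levels of detail (counts are hypothetical).
component = [("L1", "EI_avg", 6), ("L3", "typical_CRUD", 2), ("L4", "macro_process", 1)]
print(three_point_ufp(component))  # -> (112, 136, 167)
```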

In my first implementation, our project utilized an initial Software Requirements Specification (SRS) and a separate Interface Design Document (IDD).  The latter’s interface definitions allowed us to count threads and schemas, and to map them to use cases.  For the SRS core CSCI components, however, we used E&QFP to individually size each of the many “the system shall” functional requirements.  Using the structured approach provided in the case study examples of DPO’s Workbook, we were able to count modeled processes and data groups, at the 1st, 2nd and 3rd aggregation levels, for the following categories:  business objects, translation repository, data object access, message access, persisted access, rules/scenarios, SOA architecture and data transfer services.  Our master workbook became a reusable cost-impact design tool, where (thanks to the magic of VLOOKUPs and SUMIFs) we could interact with engineering/management to quickly model sizing impacts and, via an Excel interface with TruePlanning®, cost/schedule implications. 
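For those curious about the workbook mechanics, the SUMIF-style rollup is easy to mirror in a few lines of Python; the category names below echo the list above, and the per-requirement sizes are invented:

```python
from collections import defaultdict

# Hypothetical per-requirement sizes, as (category, unadjusted_fp) pairs;
# categories mirror those used in the master workbook.
requirements = [
    ("business objects", 12),
    ("business objects", 7),
    ("data object access", 16),
    ("SOA architecture", 22),
    ("data transfer services", 9),
]

# SUMIF-style rollup: total unadjusted FP per category, ready to feed a
# TruePlanning software component object.
totals = defaultdict(int)
for category, ufp in requirements:
    totals[category] += ufp

for category, ufp in sorted(totals.items()):
    print(f"{category}: {ufp} FP")
```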

PRICE’s Zac Jasnoff wrote a blog post in October 2010 regarding the use of PRICE tools in developing dynamic models for ICEs, as an example of addressing the September 2010 memorandum by Ash Carter that challenged our community to increase efficiency in Defense acquisition by, amongst other actions, “targeting affordability and control cost growth.”  I like to think that our use of the relatively new E&QFP approach, combined with the TruePlanning® unified framework (where we also modeled each CSCI’s functional complexity), was a great example of answering that challenge!