Because software measurement is a topic of great interest to me, I occasionally look to Google for inspiration and trends. Often these searches return several of my own papers, and it's fun to revisit them and reflect on how much, as things change, they remain the same. Recently, I stumbled across a paper I had written many years ago – it was in fact the second or third article I had published in a magazine (see the article here). I read through it briefly and was struck by how relevant it still is today. The paper is entitled “Software Measurement – What’s in it for me?” and its focus is on the things that make it hard to succeed with a measurement program intended to create a better understanding of productivity in software development teams.

I am, in fact, currently involved in several consulting engagements focused on data collection (cost, effort, and technical data) for software projects, and it's uncanny how little things have changed. The issues we come up against as we launch measurement programs include:

  • Gaining the trust and support of the software development teams we expect to do the data collection.
  • Convincing the data collectors that there are benefits to be had from a successful measurement program.
  • The need to create automation to ease the data collection headaches and ensure that data collection is consistent and repeatable.
  • The need for effective communication to ensure that definitions of data elements are clear and consistent across the team.

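The last two points – automation and clear, shared definitions – often reinforce each other. One lightweight approach is to encode the agreed-upon data-element definitions once and check every submitted record against them automatically. The sketch below illustrates the idea; the field names, units, and phase list are hypothetical examples, not anything prescribed by the article:

```python
# A minimal sketch of automated consistency checking for collected project
# data. All field names, units, and the phase list are hypothetical; the
# point is that shared definitions live in one place and are enforced by
# code rather than by memory.

REQUIRED_FIELDS = {
    "project_id": str,      # unique project identifier
    "effort_hours": float,  # effort in person-hours (never days or weeks)
    "cost_usd": float,      # fully loaded cost, US dollars
    "phase": str,           # lifecycle phase, from the controlled list below
}

ALLOWED_PHASES = {"requirements", "design", "build", "test", "deploy"}

def validate_record(record: dict) -> list[str]:
    """Return a list of problems; an empty list means the record is consistent."""
    problems = []
    for field, expected_type in REQUIRED_FIELDS.items():
        if field not in record:
            problems.append(f"missing field: {field}")
        elif not isinstance(record[field], expected_type):
            problems.append(f"{field} should be {expected_type.__name__}")
    if "phase" in record and record.get("phase") not in ALLOWED_PHASES:
        problems.append(f"unknown phase: {record.get('phase')!r}")
    return problems
```

Running every incoming record through a check like this makes collection repeatable and surfaces definitional disagreements (is effort in hours or days? what counts as "test"?) early, while the conversation with the team is still about definitions rather than blame.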
The obvious, and not surprising, conclusion is that the biggest obstacles to a successful software measurement program are human issues, not technical ones. And since we continue to rely on the human element in our projects for much of the data understanding, collection, and analysis, we will continue to work to find a good balance between which measures are needed for prediction, which are practical to count, and how to make sure these measurements are used for good (improved understanding of software development productivity) and not for evil (unfairly using measurements out of context against individuals or teams).