How Organizational Structure Affects Software Quality: Learnings From Windows Vista

Monday, December 8, 2008 – 9:28 AM

When Nachiappan Nagappan from Microsoft Research (MSR) showed some of the results from his paper at the p&p Summit, I pretty much had to check it out.

– Nachiappan Nagappan, Brendan Murphy & Victor R. Basili

The MSR team examined the failures in Windows Vista and correlated them with the structure of the organization. It's a pretty dense read, but some of the conclusions are interesting.

Our organizational measures predict failure-proneness in Windows Vista with significant precision, recall and sensitivity. Our study also compares the prediction models built using organizational metrics against traditional code churn, code complexity, code coverage, code dependencies and pre-release defect measures to show that organizational metrics are better predictors of failure-proneness than the traditional metrics used so far. 

This doesn’t mean that the other metrics don’t have value but for large projects organizational structure may actually be a more important predictor than, say, code coverage or complexity.

This research isn't the sort of thing you probably have time to do in your organization, as it involved correlating organizational data with information from the source code control and bug tracking systems. That doesn't mean you can't do something similar in a much lighter-weight fashion.

In addition to the metrics you may already be gathering, like code churn, complexity, and coverage, the following questions might be worth pondering if you work on a medium- or large-sized software project. For each feature/component or area of the code base, ask yourself:

  • How many engineers do you have working on it? The more engineers, the more chance there is for miscommunication and errors.
  • How many engineers worked on that code but have now left the project? People leaving the project equates to knowledge leaving the project. Newer people being assigned to work on the code may not have the same level of understanding and inadvertently introduce defects.
  • How many times was the code edited? Large numbers of edits may be an indicator of stability issues. Are the people who made most of those edits still working on the project?
  • How cohesive are contributions to the code base? Are contributors to the code base working together, or are they scattered across the organization?

These questions are based on the metrics used by the team. Further details, more questions and reasoning behind why these are important can be found in the paper (section 4).
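If you want rough numbers rather than gut feel, the first three questions can be approximated from version control history. Here's a minimal sketch in Python; it assumes you can export commit records as (component, author) pairs from your source control system (the input format and the `org_metrics` function name are my own, not from the paper):

```python
from collections import defaultdict

def org_metrics(commits, active_authors):
    """Per-component counts of edits, distinct engineers, and
    engineers who have since left the project.

    commits: list of (component, author) pairs, one per edit
             (hypothetical export format -- adapt to your VCS).
    active_authors: set of people still on the project.
    """
    edits = defaultdict(int)
    authors = defaultdict(set)
    for component, author in commits:
        edits[component] += 1
        authors[component].add(author)

    report = {}
    for component in edits:
        departed = authors[component] - active_authors
        report[component] = {
            "edits": edits[component],
            "engineers": len(authors[component]),
            "departed": len(departed),
        }
    return report

# Example: three edits to "parser", one author ("bob") has left.
commits = [("parser", "ann"), ("parser", "bob"),
           ("parser", "ann"), ("ui", "cid")]
print(org_metrics(commits, active_authors={"ann", "cid"}))
```

A high edit count combined with a high departed count for a component is exactly the kind of red flag the questions above are probing for.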
