Your software development effort has been outsourced to a CMM Level 5 vendor. You think you have strict project management controls and disciplined measurement and reporting of metrics. After all, you have plenty of Gantt charts, graphs and dashboards to prove it. Yet your software development is still in deep trouble, and your users are unhappy with the software releases you’re getting. This scenario is more common than you might imagine. The question to ask isn’t whether you’re measuring a lot of things right, but whether you’re measuring the right things.
Managing an outsourced software development effort is different from managing an in-house effort. You’re dealing with another company whose objectives are different from yours. Even if the work goes to your own captive facility in another country, its objectives and goals may not be exactly the same as yours. Defining a proper set of metrics is often a good place to start in making your outsourced software development succeed. Once you’ve determined what the metrics ought to be, you can use them as a roadmap for measurement, analysis and improvement of weak aspects of the effort in a systematic way.
This month I outline a framework of metrics that can be useful in the modeling, measurement, analysis and improvement of outsourced software development efforts. Depending upon the nature of the software being developed, the different metrics may take on different weights. For example, if the software is highly technical, the design metrics may take on more weight than efficiency measures. If it’s web site development, the communication metrics may be more important than modularity of code metrics.
In future columns, I’ll use this framework as a roadmap for discussing how agile methods address each aspect of outsourced software development to head off potential problems.
A Metrics Framework for Outsourced Software Development
In my research into metrics, I’ve found a variety of approaches. From this research and our own software development experiences, I’ve derived a framework for measuring outsourced software development and many different types of software projects. The broad classification that works for any type of organization is people, technology and process metrics. You’ll also see that many metrics are interrelated: good or bad performance on one often drives good or bad performance on others.
Personnel turnover isn’t usually much of a concern with in-house software development teams or teams with domestic outsourcing partners. But it rises to the top of the pile when the software development effort goes offshore. In some countries – particularly India – personnel turnover is mentioned as one of the most troublesome issues in outsourcing. Within that, project leadership turnover is much more harmful to the success of the effort than engineer turnover. Good project leadership skills are rarer than software programming skills in countries like India. Consequently, losing a project leader more severely affects the project than losing software engineers. Leadership turnover and engineer turnover are good people metrics to track.
Because of concerns over turnover, new recruitment and training metrics can be useful in tracking development of both new project leadership and software engineers. Having a constant pipeline of recruits for both project leadership and software engineering roles is critical in making sure turnover doesn’t impair development efforts.
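As a sketch of how the turnover metrics above might be tracked separately for project leadership and engineers, annualized turnover can be computed as separations divided by average headcount (the role names and counts below are hypothetical):

```python
def turnover_rate(separations, avg_headcount):
    """Annualized turnover as a percentage of average headcount."""
    return 100.0 * separations / avg_headcount

# Hypothetical year-end figures for an offshore team
roles = {
    "project_lead": {"separations": 2, "avg_headcount": 5},
    "engineer":     {"separations": 12, "avg_headcount": 60},
}

for role, r in roles.items():
    rate = turnover_rate(r["separations"], r["avg_headcount"])
    print(f"{role}: {rate:.0f}% turnover")
```

Note how even a small number of project-leader departures can produce a much higher turnover rate than a larger number of engineer departures, reflecting the greater scarcity of leadership.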
Technology metrics are also crucial ingredients of good outsourced software development management. Integration of a company’s issue-tracking, bug-tracking and code management systems with those of the service provider is almost always involved. Service providers may use the client’s internal software management systems through a browser or a dedicated network link interface, or they may have their own systems, networks and applications – all of which may need to interoperate.
Technology metrics such as system uptime, network uptime and support applications uptime need to be measured so that software development can proceed without interruptions. In some cases, responsibility for technology metrics may lie with the IT organization within the client firm. If this is the case, the metrics are still worth tracking, since almost all software development these days depends upon delivery over a network, whether it’s a proprietary global network or the public Internet, with or without dedicated high speed lines.
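The uptime metrics above reduce to a simple availability percentage over each reporting period. A minimal sketch, assuming a monthly reporting window and hypothetical downtime figures:

```python
def uptime_percent(period_hours, downtime_hours):
    """Availability over a reporting period as a percentage."""
    return 100.0 * (period_hours - downtime_hours) / period_hours

# Hypothetical 30-day month with 2 hours of network outage
month_hours = 30 * 24
print(f"Network uptime: {uptime_percent(month_hours, 2):.2f}%")
```

The same calculation applies per system (code repository, bug tracker, network link), so each can be reported on its own line of a dashboard.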
Process metrics are, by far, the most comprehensive ones to be used, since they cover all aspects of software development, including overall project management, design and quality assurance. A broad classification divides process metrics into qualitative and quantitative measures.
Qualitative metrics are rarely collected at all, but if they are, they typically come through customer satisfaction surveys. Agile methodologies naturally accommodate this, albeit informally, with frequent releases and feedback cycles. But the numbers are also worth formally collecting and analyzing. A customer satisfaction score can be used multiple times at various stages of the project to make sure that users are happy with the way the software development work is progressing.
Quantitative metrics deal broadly with efficiency and effectiveness. Efficiency measures whether you’re doing things right; effectiveness measures whether you’re doing the right things. Of the two, efficiency metrics are usually the only ones measured, but effectiveness metrics are the ones that determine the real success of software development efforts.
Efficiency measures deal with time-related (project schedule) or numbers-related (programmer productivity like number of lines of code) aspects of software development.
Effectiveness metrics deal with outcomes of software development efforts such as communication and overall project management, as well as effectiveness of individual software development phases like requirements gathering, design, development and testing/quality assurance. Communication effectiveness can be gauged by the number of times there were miscommunications and course corrections in the project. Overall project management effectiveness can also be measured by how many times you have project schedule changes, delays and extensions.
Design effectiveness can be measured by design extensibility, or the degree to which the current design can be built upon in the future without radical changes. Development effectiveness measures things like coding guidelines adherence, maintainability, reliability, security, generation or use of reusable code, use of third-party controls and modularity of code. Depending upon the nature of the software development effort, these can be prioritized. Quality assurance effectiveness measures how well the QA team has been able to identify defects and keep them out of successive releases, as well as the number of defects reported by users and the number of regression defects (defects that were fixed but have crept back into the code).
Efficiency metrics are the most commonly used in software development. Requirements efficiency can be measured by requirements coverage (how many requirements were generated and what percentage of them is covered by each successive release). Design efficiency can be measured by design time adherence.
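The requirements coverage measure is just the fraction of identified requirements a given release covers. A minimal sketch, with hypothetical requirement IDs and release contents:

```python
def requirements_coverage(all_reqs, covered_reqs):
    """Percentage of identified requirements covered by a release."""
    return 100.0 * len(set(covered_reqs) & set(all_reqs)) / len(all_reqs)

# Hypothetical requirement IDs and what each successive release covers
all_reqs = ["R1", "R2", "R3", "R4", "R5"]
release_1 = ["R1", "R2"]
release_2 = ["R1", "R2", "R3", "R5"]

print(requirements_coverage(all_reqs, release_1))  # 40.0
print(requirements_coverage(all_reqs, release_2))  # 80.0
```

Tracking this percentage release over release shows whether the effort is converging on the full requirements set.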
Development efficiency measures gauge team and individual programmer productivity (such as number of lines of code or function points).
Quality assurance efficiency can be sized up by measures like defect identification and defect flow (initial number of defects, new defects identified, number identified as fixed and verified, and ending number of defects). These measures can indicate whether there is a consistent flow of defects being identified, fixed and verified as fixed. Finer quality measures in testing, such as unit testing, stress testing, regression testing and load testing, may also be appropriate for the software project.
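The defect flow measure above is a running balance: each period’s ending count is the starting count plus new defects found, minus defects fixed and verified. A sketch with hypothetical weekly numbers:

```python
def defect_flow(initial, new_found, fixed_verified):
    """Ending open-defect count for a reporting period."""
    return initial + new_found - fixed_verified

# Hypothetical weekly snapshots: (new defects found, defects fixed & verified)
weeks = [(25, 10), (18, 22), (12, 20), (6, 15)]

open_defects = 40  # hypothetical starting backlog
for found, fixed in weeks:
    open_defects = defect_flow(open_defects, found, fixed)
    print(f"open defects: {open_defects}")
```

A steadily falling ending count, as in this example, suggests a healthy flow; a count that climbs week after week signals that QA is falling behind development.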
Implementing Metrics for Outsourced Software Development
The framework identifies all possible metrics for outsourced software development. However, depending upon the nature of the software being developed, the composition of the teams and where the work is done, you can choose appropriate metrics from this comprehensive list.
To develop a roadmap for your project, follow these steps.
Identify and Document Metrics
For each software development project outsourced, discuss and identify key process, people and technology metrics using the framework discussed in the previous sections. The framework will ensure that the set of metrics identified is comprehensive and addresses all appropriate aspects of the software development project in question.
Prioritize Process, People and Technology Metrics
Prioritizing process, people and technology metrics helps strike a balance between the utility of the metrics monitored and the effort, time and money expended on instituting the monitoring and analysis.
Determine Monitoring and Analysis Periodicity
Prioritizing the metrics also helps in setting how often to report on each metric. For example, high priority ones may be monitored weekly or every 15 days, while lower priority ones may be monitored monthly.
Assign Responsibility and Implement Reporting
Responsibility for measuring and reporting the metrics may lie with the buyer or the vendor. This should be part of your initial discussions with service providers and should be spelled out in an appendix within your outsourcing contracts.
Outsourcing and offshoring are risky ventures that promise a lot of benefits to organizations. These benefits may be quickly lost if the right metrics aren’t used diligently. The cost of outsourcing failure is high for both the organizations and the individuals who make the decisions to outsource or offshore their software development. The framework presented here offers a disciplined approach to reducing these risks.