What happens when users are dissatisfied with the results of outsourcing efforts? Whether the outsourced work is software development or other IT functions, dissatisfied users mount "shadow efforts" to recreate the functionality the outsourced projects failed to deliver, sometimes with solutions as simple as spreadsheets.
An article in the May 2005 issue of the online magazine Enterprise Systems captures this well. It describes what are usually called "skunkworks" projects that surreptitiously reach out to laid-off or former employees, now working for outsourcers such as IBM or EDS, to get a temporary solution put in place!
End user satisfaction is crucial to the success of outsourced software development. You can get every other aspect right, but if users aren't satisfied, the whole effort is wasted. Outsourcing decisions may have been made without much consultation with users, but those users still need to get their jobs done every day, with or without the help of the software the service provider delivers. Sometimes companies reverse their decisions completely, as in JPMorgan Chase's 2004 decision to abandon a $5 billion contract with IBM and bring all main IT functions back in-house. Paying attention to user satisfaction benefits both the service provider and the client.
In my February 2007 column, “Metrics for Outsourced Software Development,” I outlined three main areas of attention: people, technology and process metrics. End user satisfaction is a key qualitative process metric. This month, I elaborate upon the different criteria that can be applied in designing meaningful user satisfaction surveys. Not all criteria may be applicable in all types of outsourced software development projects. Depending upon the particular software development project, appropriate ones can be chosen from this “menu.”
End User Satisfaction Surveys
End user satisfaction surveys for outsourced software development projects have three important components to them:
- Evaluation criteria
- Measurement scales
- Open-ended feedback
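As a sketch, the three components above can be captured in a simple data model. The class and field names here are illustrative, not from any particular survey tool:

```python
from dataclasses import dataclass, field

@dataclass
class SurveyQuestion:
    # One evaluation criterion, rated on a Likert-style measurement scale.
    criterion: str           # e.g. "Ease of Workflow"
    statement: str           # statement the respondent agrees/disagrees with
    scale_points: int = 5    # 5-, 7-, or 9-point scale

@dataclass
class Survey:
    # Evaluation criteria plus their scales, and a prompt for open-ended feedback.
    questions: list = field(default_factory=list)
    open_ended_prompt: str = "Any other comments about the software?"

survey = Survey(questions=[
    SurveyQuestion("User Interface Ease of Use",
                   "The interface is easy to understand and navigate."),
    SurveyQuestion("Ease of Workflow",
                   "I can accomplish my day-to-day tasks easily."),
])
```

The point of structuring the survey this way is that criteria can be added or dropped per project, matching the "menu" approach described earlier.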
Outsourced software development projects can be quite different from each other: a software application vs. outsourced software product development; client/server applications vs. server-based applications with browser interfaces; business-oriented vs. consumer-oriented (Web 2.0-like) software. Depending upon the nature of the application, one or more of the following criteria may be chosen for user feedback through a survey:
User Interface Ease of Use
How easy or difficult is the software to use for those who weren't directly involved in its development? Do users find the interface easy to understand and navigate? Are its elements intuitive?
Ease of Workflow
Do users find it easy to accomplish their work using the software? This is especially important for browser-based interfaces. Browser interfaces are inherently harder to design and implement than client-based interfaces, which have a richer variety of components such as dialog boxes and pull-down menus. This usually makes complex workflows harder to design and implement in a browser.
Consistency
Does the application behave consistently from the users' point of view? It's not uncommon to hear feedback from users about inconsistencies in how they accomplish different tasks. This can be discovered only by users, and only after repeated use of the software.
Responsiveness
This is particularly applicable for applications with browser-based interfaces. Unlike client-only applications, those that involve clients and servers (especially with browser client interfaces) may need to be evaluated by users with respect to responsiveness. This enables the service provider to separate network bandwidth issues that contribute to lack of responsiveness from application-related latencies. The 80/20 rule may apply here: only 20 percent of any application is used 80 percent of the time. Isolating responsiveness problems to these parts lets the service provider tune only those parts of the application. Obtaining feedback, especially open-ended comments, about which parts of the application seem to lack responsiveness will help in this.
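One way to apply the 80/20 rule to such feedback is simply to tally which parts of the application users call out as slow. A minimal sketch, using hypothetical open-ended responses tagged by application area:

```python
from collections import Counter

# Hypothetical survey comments, each tagged with the part of the
# application the user said was slow.
complaints = [
    "order entry", "reports", "order entry", "search",
    "order entry", "reports", "order entry",
]

# Tally complaints per application area; the most-cited areas are the
# heavily used ~20 percent worth tuning first.
tally = Counter(complaints)
hotspots = [area for area, count in tally.most_common(2)]
print(hotspots)  # → ['order entry', 'reports']
```

The hotspot list tells the service provider where to look first when separating network latency from application-level latency.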
Decision Support
How helpful do users feel the application is for the decision support needs of their work? This kind of question, combined with an accompanying area for open-ended feedback, can unearth requirements that users may not have thought of the first time around.
Completeness
When key users haven't been fully involved or engaged at requirements-gathering time, software requirements may not have been fully unearthed. Asking questions about completeness, along with an accompanying area for open-ended feedback, midway or earlier in the effort can ensure that missing requirements are addressed during the course of development.
A typical end user satisfaction survey question may use a Likert scale to quantify the level of satisfaction or dissatisfaction along different criteria. The question poses a statement and asks whether the respondent Strongly Agrees, Agrees, is Undecided, Disagrees or Strongly Disagrees: a 5-point Likert scale. Sometimes a 7- or 9-point scale is used instead. Traditional statistical measures like the mean, median and mode are computed for each criterion, and inferences are drawn about how users feel about different aspects of the software.
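Computing those measures is straightforward with the standard library. A minimal sketch, using hypothetical 5-point responses (1 = Strongly Disagree through 5 = Strongly Agree) for two of the criteria above:

```python
from statistics import mean, median, mode

# Hypothetical Likert responses per criterion, one score per respondent.
responses = {
    "User Interface Ease of Use": [4, 5, 4, 3, 4, 2, 4],
    "Ease of Workflow":           [2, 3, 2, 1, 3, 2, 4],
}

for criterion, scores in responses.items():
    print(f"{criterion}: mean={mean(scores):.2f}, "
          f"median={median(scores)}, mode={mode(scores)}")
```

Here the low median and mode for "Ease of Workflow" would flag it as the criterion needing a course correction, even before reading any open-ended comments.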
Open-ended feedback can be much more useful than quantitative measures such as Likert scale scores, especially across a varied user base. Open-ended feedback lets requirements for (or frustrations with) specific parts of the delivered software be unearthed early and fixed. In addition, allowing users to complete surveys anonymously enhances the quality of the feedback obtained, especially on open-ended questions.
Frequency of End User Surveys and Agile Methods
In practice it may not be possible to conduct user satisfaction surveys more than a couple of times during a software development project. A survey makes sense only if the feedback obtained can be used to make course corrections addressing the specific criteria that scored poorly. Agile methods provide a natural way to accomplish this without much extra effort: multiple releases involving users right from the beginning ensure that feedback is obtained early and often, even if in a form less formal than a survey. Releasing versions to users frequently, obtaining their feedback and using it to make course corrections may work better than it does in non-agile software development. However, given the difficulty of structuring outsourced software development contracts around agile methodologies, satisfaction surveys, if administered at least a couple of times, can still contribute to the success of the effort.
Users are the reason for any software development effort. Whether you're a provider of outsourced software development services or a buyer of such services, hearing a flood of negative feedback from users is as bad as hearing nothing at all! If users think the delivered software does not help them do their jobs easily, they will design and implement their own temporary workarounds, which become permanent workarounds over time. End user satisfaction surveys, applied even a few times during the development effort, help ensure that course corrections are made early and that the whole effort succeeds.
Enterprise Systems, "Who Benefits from Outsourcing? Not the Line of Business"
The Wall Street Journal, "Behind Outsourcing: Promise and Pitfalls"