Where IT Metrics Go Wrong: 13 Issues To Avoid
In a recent Forrester report, “Develop Your Service Management And Automation Balanced Scorecard,” I highlight some of the common mistakes made when designing and implementing infrastructure & operations (I&O) metrics. Inappropriate metrics are a common issue, yet many I&O organizations still don’t realize that they may have the wrong set. So, consider the following:
- When it comes to metrics, I&O is not always entirely sure what it’s doing or why. We often create metrics because we feel that we “should” rather than because we have definite reasons to capture and analyze data and consider performance against targets. Ask yourself: “Why do we want or need metrics?” Do your metrics deliver against this? You won’t be alone if they don’t.
- Metrics are commonly viewed as an output in their own right. Far too many I&O organizations see metrics as the final output rather than as an input into something else, such as business conversations about services or improvement activity. The metrics become a “corporate game” where all that matters is that you’ve met or exceeded your targets. Metrics reporting should serve the bigger picture and drive improvement.
- I&O organizations have too many metrics. IT service management (ITSM) tools provide large numbers of reports and metrics, which encourages well-meaning I&O staff to go for quantity over quality. Just because we can measure something doesn’t mean that we should — and even if we should measure it, we don’t always need to report on it. The metrics we choose to disseminate should directly contribute to understanding whether we’ve achieved desired performance and outcomes.
- We measure things because they’re easy to measure, not because they’re important. The cost of collecting and reporting a metric shouldn’t exceed the value it returns, but that still isn’t an excuse to measure only the easy stuff. The ready availability of system reports and metrics again comes into play, with little or no effort needed to pull performance-related information out of the ITSM tool or tools. Consider why you report each and every metric in your current reporting pack and assess the value each provides versus the effort required to report it. Not only will you find metrics that you report on just because you can (“they were there already”), you will also find metrics that are “expensive” to produce but provide little or no value (“they seemed like a good idea at the time”).
- I&O can easily fall into the trap of focusing on IT rather than business metrics. There is often a disconnect between IT activity and performance on the one hand and business objectives, demands, and drivers on the other. So consider your existing metrics from a business perspective: What does the fact that there are 4,000 incidents per month actually mean? From an ITSM perspective, it might mean that we’ve been busy or that volume is 10% lower (or higher) than the previous month. But is the business actually interested in incident volumes? If it is, does it interpret that as “you make a lot of mistakes in IT” or as “you stopped the business from working 4,000 times this month”?
- There is no structure for, or context between, metrics. Metrics can be stuck in silos rather than being end-to-end, and there is a lack of correlation between different metrics. A good example is the excitement over the fact that cost-per-incident has dropped, when closer inspection of other metrics shows that the cost went down not because we became more efficient but because we had more incidents than normal during the reporting period (see the cost-per-incident sketch after this list).
- We take a one-dimensional view of metrics. I&O organizations can limit themselves to looking at performance in monthly silos and never look at the month-on-month, quarter-on-quarter, or even year-on-year trends. So while the I&O organization might hit its targets each month, a failure might be just around the corner as performance degrades over time (see the trend sketch after this list).
- The metric hierarchy isn’t clear. Many don’t appreciate that: 1) not all metrics are born equal: there are differences between metrics, key performance indicators (KPIs), critical success factors (CSFs), and strategic objectives; and 2) metrics need to be differentiated by a number of factors, such as hierarchy level, recipients, and ultimate use. Different people will have different uses for different metrics, so one-size reporting definitely doesn’t fit all. As with all reporting, tell people what they need to know, when they need to know it, and in a format that’s easy for them to consume. If your metrics don’t support decision-making, then you’re suffering from one or more of the issues listed here.
- We place too much emphasis on I&O benchmarks. The ability to compare yourself with other I&O organizations can help show how well your organization is doing or justify spending on improvements. However, benchmark data is often misleading; you might be comparing apples with oranges. Two great examples are cost-per-incident and incidents handled per service desk agent per hour. For cost-per-incident, how do you know which costs have been included and which haven’t? And the volume, types, and occurrence patterns of incidents will skew both statistics (see the benchmarking sketch after this list).
- Metric reporting is poorly designed and delivered. I&O professionals can spend more time collecting metric data than working out the best way for it to be delivered and consumed. It’s similar to communication in general: a message sent doesn’t always equate to a message received and understood. There is also plenty of scope to make metrics and reporting more interesting to their consumers.
- We overlook the behavioral aspects of metrics. At a higher level, we aim for, and then reward, failure: we set targets such as 99.9% availability rather than saying, “We will aim for 100% availability, and we will never go below 99.9%.” At a team or individual level, metrics can drive the wrong behaviors, with particular metrics making individuals act for personal rather than corporate success. Metrics can also conflict and pull I&O staff in different directions. A good example is the tension between two common service desk metrics: average call-handling time and first-contact resolution. Scoring highly against one will adversely affect the other, so using either in isolation for team or individual performance measurement is potentially dangerous to operations and IT service delivery (see the scoring sketch after this list).
- I&O can become blinkered by the existing metrics. When your organization consistently makes its targets, the standard response is to increase the number or scope of targets. But this is not necessarily the right approach. Instead, I&O execs need to consider whether the metrics are still worthwhile — whether they still add value. Sometimes, the right answer is to abolish a particular metric and replace it with one that better reflects your current business needs and any improvement or degradation in performance that you’ve experienced.
- Metrics and performance can be easily misunderstood. A good example is incident volumes: a reduction in incident volumes is a good thing, right? Not necessarily. A service desk providing a poor level of service might see incident volumes drop as internal customers decide that calling or emailing is futile and start seeking resolution elsewhere or struggling on with workarounds. Conversely, a service desk doing a fantastic job at resolving incidents might see an increase in volumes as more users reach out for help. Thus, I&O leaders need to view customer satisfaction scores in parallel with incident volume metrics to accurately gauge the effectiveness of a service desk (see the volume-versus-satisfaction sketch after this list).
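To make the cost-per-incident trap concrete, here’s a minimal sketch in Python with invented figures. It shows the metric “improving” purely because incident volume rose, even though total cost also went up:

```python
# Hypothetical figures illustrating the cost-per-incident trap.
# cost_per_incident = total_support_cost / incident_count
months = {
    "January":  {"total_cost": 50_000, "incidents": 1_000},
    "February": {"total_cost": 55_000, "incidents": 1_250},
}

for name, m in months.items():
    cpi = m["total_cost"] / m["incidents"]
    print(f"{name}: cost-per-incident = {cpi:.2f}")

# January: cost-per-incident = 50.00
# February: cost-per-incident = 44.00
# The metric "improved" by 12%, yet total cost rose and there were 25%
# more incidents. Nothing about the operation became more efficient.
```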
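On the one-dimensional view, a minimal sketch (made-up monthly figures and an assumed 95% target) of how every month can pass in isolation while the trend heads steadily toward a breach:

```python
# Hypothetical monthly first-contact-resolution figures (%), target 95.0.
target = 95.0
fcr_by_month = [98.1, 97.6, 97.0, 96.3, 95.8, 95.2]

met_every_month = all(v >= target for v in fcr_by_month)
# Simple month-on-month deltas expose the slide that pass/fail reporting hides.
deltas = [round(b - a, 1) for a, b in zip(fcr_by_month, fcr_by_month[1:])]

print(f"Every month met its target: {met_every_month}")        # True
print(f"Month-on-month changes: {deltas}")                     # all negative
print(f"Consistently degrading: {all(d < 0 for d in deltas)}") # True
```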
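On benchmarks, this sketch (entirely invented cost categories) shows how two organizations with identical operations can report very different cost-per-incident figures simply by counting different costs:

```python
# Two hypothetical organizations with the same activity, different accounting.
incidents = 4_000
costs = {
    "staff":      120_000,
    "tooling":     20_000,
    "facilities":  30_000,  # org A includes this overhead; org B doesn't
}

org_a_cpi = sum(costs.values()) / incidents                  # all costs in
org_b_cpi = (costs["staff"] + costs["tooling"]) / incidents  # overhead out

print(f"Org A cost-per-incident: {org_a_cpi:.2f}")  # 42.50
print(f"Org B cost-per-incident: {org_b_cpi:.2f}")  # 35.00
# Org B looks roughly 18% "cheaper" on the benchmark without being any
# more efficient -- the apples-with-oranges problem in two lines of math.
```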
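On conflicting service desk metrics, one way to avoid rewarding either metric in isolation is to score against the pair together. The agents, weights, and target below are pure assumptions for illustration, not a recommendation:

```python
# Hypothetical agents: fast but shallow vs. slower but thorough.
agents = {
    "agent_fast":     {"avg_handle_min": 4.0, "fcr_rate": 0.55},
    "agent_thorough": {"avg_handle_min": 9.0, "fcr_rate": 0.90},
}

def balanced_score(avg_handle_min, fcr_rate,
                   target_handle_min=8.0, w_speed=0.4, w_fcr=0.6):
    """Blend call-handling speed and first-contact resolution.
    All weights and targets here are illustrative assumptions."""
    speed = min(target_handle_min / avg_handle_min, 1.0)  # cap the speed credit
    return w_speed * speed + w_fcr * fcr_rate

for name, a in agents.items():
    print(f"{name}: {balanced_score(**a):.2f}")
# agent_fast:     0.73  (wins on handle time alone)
# agent_thorough: 0.90  (wins once resolution quality counts too)
```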
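Finally, on misreading incident volumes: a minimal sketch of the pairing logic, reading a volume trend only in the light of a satisfaction trend. The interpretations are illustrative, and any real thresholds would need local calibration:

```python
def interpret(volume_change_pct, csat_change_pct):
    """Pair incident-volume and satisfaction trends before drawing conclusions."""
    if volume_change_pct < 0 and csat_change_pct < 0:
        return "Warning: users may have given up on the service desk"
    if volume_change_pct < 0:
        return "Likely genuine improvement: fewer incidents, happier users"
    if csat_change_pct >= 0:
        return "Demand is up but users are happy; the desk may be doing well"
    return "Volume and satisfaction both worsening; investigate"

print(interpret(-15, -8))  # a falling volume is NOT good news here
print(interpret(-15, +4))  # the same volume drop, a very different story
```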
Finally, consider this: In the wise words of Ivor McFarlane of IBM: “If we use the wrong metrics, do we not get better at the wrong things?”
Does any of this sound far too familiar? What did I miss? As always, your comments and opinions are appreciated.
Related blog: http://blogs.forrester.com/stephen_mann/11-10-14-it_service_management_metrics_advice_and_10_top_tips