
Customers “Get” What You Measure

Human Behaviour 101

“You get what you measure” is among the oldest sayings in business. Unfortunately, many traditional measures undermine the customer experience, even though the measures that are now possible have changed in this era of new customer experience tools. We think a lot more is possible. Why? The majority of businesses we work with have embraced post-interaction surveys and NPS measurement, and the advent of email, IVR and SMS surveys means that direct customer feedback can be collected in far greater volume and used far more widely to measure the customer experience. Despite this, we still find that other measures in customer operations haven’t been updated and continue to drive bad experiences. Too many KPIs and measures seem stuck in the processes of the 1990s. This paper discusses some of the measurements we find inappropriate in the customer-facing parts of a business and how they can be replaced with more effective KPIs.

Lucky Dip Quality Checks?

Various organisations use After The Fact Quality Checks to sample calls and processing work. They sample somewhere between five and ten calls or work items a month in an effort to encourage process adherence. Common flaws in this process include:

  • Five to ten a month from a base of 1000 calls or items has no statistical validity and is merely a “lucky dip”

  • Quality teams use random sampling and therefore sample as many simple contacts as complex ones, even though they are often looking for areas where the person “needs” coaching. Sometimes they seem more driven by their own productivity than by the value of the sample.

  • A single negative quality score can have a financial impact, and agents have limited recourse, so this creates a “fear” of quality and mindless process adherence, e.g. asking a customer on a repeat call for information that was collected only yesterday or the day before, “because that’s what quality says”

  • There are often disputes about the process and assessment because the assessment may be subjective

  • The intention of the checks is to enforce the right process, but often they enforce a bad process. For example, one client has a check of “empathy”, so agents routinely asked customers, “how’s your day going?” regardless of the customer’s situation or the time of day (see cartoon). Even if a customer complained about wait times or the need for the call, the agent would still ask “how’s your day going?”

  • The samples are not used for any other purpose even though collectively, they form an interesting snapshot of work

  • They check work after the fact so agents continue to operate even though they may have significant training needs

What’s our alternative?

Sampling processes can be very different depending on the objective. If the goal of sampling is to find where agents need help and coaching, then the process should focus on long calls or complex work items. These are far more likely to involve difficult or poorly managed processes and are likely to expose more significant risks. Customer feedback surveys enable “customer driven” sampling, or targeted sampling based on the customer’s rating. Rather than sampling at random, quality can review contacts where customers provided a low agent score or reported non-resolution; random sampling is replaced by customer-targeted sampling. Many companies already have processes to review, or action, a subset of “detractor” contacts and hopefully fix the customer’s issue. This can form part of the feedback and improvement metrics for an agent.
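
As an illustration, here is a minimal sketch of customer-driven sampling in Python. The field names, the low-score threshold and the sample size are assumptions made for the example, not a prescribed tool or definition.

```python
# Illustrative sketch only: pick quality-review candidates from customer
# feedback rather than at random. Field names and thresholds are assumptions.

def select_for_review(contacts, low_score=6, max_samples=10):
    """Return contacts flagged by the customer: a low agent rating or
    reported non-resolution, with the longest contacts reviewed first."""
    flagged = [
        c for c in contacts
        if c.get("agent_score", 10) <= low_score
        or not c.get("resolved", True)
    ]
    # Longest contacts first - these are where coaching needs and
    # poorly managed processes are most likely to surface.
    flagged.sort(key=lambda c: c.get("handle_time_secs", 0), reverse=True)
    return flagged[:max_samples]

# Example usage with made-up contact records
contacts = [
    {"id": 1, "agent_score": 9, "resolved": True, "handle_time_secs": 180},
    {"id": 2, "agent_score": 4, "resolved": False, "handle_time_secs": 1260},
    {"id": 3, "agent_score": 7, "resolved": True, "handle_time_secs": 95},
]
print([c["id"] for c in select_for_review(contacts)])  # -> [2]
```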

In many organisations, we reduce these After The Fact quality samples and replace them with real-time observation and quality assessment. In this model, team leaders spend time observing their team (something we always recommend) and can assess adherence to a defined process, scoring and giving feedback in real time. A combination of customer-driven quality and direct team leader observation is much better than a lucky-dip measurement approach.

Whose problem is Handle Time anyway?

Average Handle Time (AHT) in contact centres (or admin areas) measures the average duration of a process such as a call or a work item. It was one of the first measures of productivity used in contact centres and forms an important part of capacity planning and forecasting, as it measures the duration element of ‘workload’.

Nearly twenty years ago organisations started to realise that it was not very effective as a measure of individual performance because it:

  • Encouraged staff to hurry contacts rather than resolve them or serve the customer

  • Created inappropriate behaviours (like transferring calls, forwarding emails or hanging up on customers) to trick the measure

  • Built the impression that there was a ‘right’ call or process length

  • Led to the belief that front-line staff could control all aspects of handle time. The front line had no control of which calls they received, the process they had to follow or the technology they used. As a result, they felt frustrated that they couldn’t control the handle time outcome but were coached and measured on this metric. It should not have been a surprise that they tried to “cheat” the metric.

What’s our alternative?

We find that other measures of productivity cannot be gamed in the same way. Net calls per hour measures contacts handled minus transfers and doesn’t create the idea of a target duration. In back offices we have seen “quality adjusted net items per hour”. These kinds of measures avoid the idea that there is a right “duration” for contacts or processes, and they can be aligned with process measurement to encourage the “right way” to do things, which in turn generates the right handle time.
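
A minimal sketch of how such measures might be calculated is below. It assumes “net calls” means handled contacts minus transfers divided by logged-in hours, and that the quality adjustment simply removes items that failed a quality check; actual definitions vary by operation.

```python
# Illustrative calculations only; the definitions are assumptions for the example.

def net_calls_per_hour(calls_handled, transfers, hours_logged_in):
    """Contacts that stayed with (and were finished by) the agent, per logged-in hour."""
    return (calls_handled - transfers) / hours_logged_in

def quality_adjusted_items_per_hour(items_completed, items_failed_quality, hours_worked):
    """Back-office equivalent: completed items that passed quality, per hour worked."""
    return (items_completed - items_failed_quality) / hours_worked

print(net_calls_per_hour(45, 6, 6.5))                # 6.0 net calls per hour
print(quality_adjusted_items_per_hour(60, 4, 7.0))   # 8.0 items per hour
```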

Hiding behind the average

Organisations often manage customer experience with a lot of averages. They may measure the “average turnaround time for a process” or the “average grade of service” for a contact centre for the month. We’ve seen these averages take on a purpose very different to the original intention. For example, if a contact centre is ‘ahead’ of its average target for a month, it takes staff off the phone and reduces service levels for the remainder of the month. Customers who call late in the month get longer waits and a poor experience just because the centre is ahead of its goal. We’ve even seen managers overstaff their operations to try to lift their “average” performance, even though this means staff sit waiting for calls or work to do.

What’s our alternative?

We find that measures which reflect the performance of the operation for all customers are far more effective. For example, measuring the number of intervals where the target service-level range is achieved produces much better experiences and management focus. If a centre achieves the target range of service levels (where the range excludes over-servicing) on 95% of intervals, that’s a better measure. Measuring “abandonment” or average speed of answer against a target is also a better indicator of customers’ reaction and experience. In a back office, measuring the number of customers who get a slow turnaround is a more sensitive measure of the customer experience than the average turnaround.
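
A sketch of the “intervals in range” idea follows. It assumes half-hour intervals and an illustrative target band of 70–90% of calls answered within the service-level threshold; the band itself, and its upper bound that penalises over-servicing, are assumptions for the example.

```python
# Illustrative only: the 70-90% band and half-hour intervals are assumptions.

def intervals_in_range(interval_service_levels, low=0.70, high=0.90):
    """Share of intervals where the service level sat inside the target band.
    The upper bound catches over-servicing as well as under-servicing."""
    in_range = [low <= sl <= high for sl in interval_service_levels]
    return sum(in_range) / len(in_range)

# One day of half-hour intervals (fraction of calls answered within threshold)
day = [0.82, 0.75, 0.95, 0.88, 0.64, 0.79, 0.85, 0.91, 0.78, 0.73]
print(f"{intervals_in_range(day):.0%} of intervals in target range")  # 70%
```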

Whose service level is it anyway?

On a typical flight the captain announces that “our flying time today is XX minutes”. That sounds good at first, but it usually doesn’t include the time to taxi to the runway (up to 30 minutes at some airports) or the time to “get on a gate” at the other end. The “real customer experience” is measured from the time the customer arrives at the airport to the time they leave their destination. Airlines feel they don’t control the airport experience, so they choose not to measure it, and pilots conveniently ignore everything except the time in the air. Companies often measure service levels for parts of a process in a similar way. For example, they measure wait times in a call queue but not the time spent navigating to the queue or listening to mandatory announcements. A customer may have invested two minutes of effort before even reaching the wait queue.

What’s our alternative?

We look to measure effort or speed from the customer’s perspective. “Gate to gate time” is a better measure of the customer’s on-plane experience, but the customer experience also includes time waiting to board and waiting to ‘de-plane’. For a claims process, the customer is interested in the time from notification of the claim to payment of the claim, and yet most companies measure sub-components of the experience. Other effort measures include contacts per X (e.g. contacts per order or contacts per application), as these indicate the frequency of unwanted contacts. There are many better indicators of effort for the customer than an average service level.
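
As a sketch, “contacts per X” is simply the ratio of contact volume to the driver volume over the same period; the assumption here is that the measure divides total contacts by total orders, so that a value well below one means most orders needed no contact at all.

```python
# Illustrative effort measure: contacts per order over the same period.
# The definition (total contacts / total orders) is an assumption for the example.

def contacts_per_order(total_contacts: int, total_orders: int) -> float:
    """Contacts generated per order placed; lower is better,
    because most orders should need no contact at all."""
    return total_contacts / total_orders

print(contacts_per_order(450, 10_000))  # 0.045 contacts per order
```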

Adherence to schedule

A common measure in contact centres is agent ‘adherence to schedule’. This measures how accurately each agent follows their prescribed schedule for logging on, breaks, lunch and clocking off. It assumes that agents need to conform to this scheduling straitjacket if the call centre is to achieve its goals. In our experience, it is usually a powerful workforce planning function that has insisted on this measure. Unfortunately, it causes many issues:

  • It makes staff stay in non-available states just before breaks to avoid taking calls that take them “out of adherence”

  • It means that team leaders or others have to make many admin adjustments to the schedule to stop staff being penalised for non-adherence that was requested

It creates a sense of rigidity and inflexibility in contact centres and promotes the idea that the schedule is perfect and forecasts are 100% accurate. The reality is that no forecast is that accurate: calls arrive randomly, so the schedule needs to be flexible. If call demand doesn’t match the schedule, then breaks and lunches may need to move. That means changing the schedule, not creating the barrier of many schedule amendments just to make the measure work.

What’s our alternative?

We prefer “availability” as a measure rather than adherence to schedule. An availability percentage measures whether staff have provided the hours we expected from them. If someone arrives 10 minutes late but stays 10 minutes later at the end of the day, they have made themselves available for the required amount of time. Not having schedule adherence as a measure doesn’t mean that team leaders and the real-time function don’t monitor the schedule. Team leaders still need to manage behaviours like staff being late for work or taking far longer breaks than planned, and they can still get that reported without adherence being a measure. In some centres, dropping adherence has been liberating for all concerned: agents feel more trusted and less micro-managed, team leaders feel they can move breaks, and active management contingencies are more available. We have even had agents cheer when we told them that this hated measure was being removed.
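
A minimal sketch of the availability idea, assuming it is simply time made available divided by time rostered, regardless of exactly when in the day those minutes fell:

```python
# Illustrative only: availability as minutes provided versus minutes rostered,
# irrespective of exactly when they were provided. The definition is an assumption.

def availability_pct(minutes_available: float, minutes_rostered: float) -> float:
    return minutes_available / minutes_rostered

# Agent rostered for 7.5 hours; arrived 10 minutes late but stayed 10 minutes later.
rostered = 7.5 * 60
provided = rostered - 10 + 10
print(f"{availability_pct(provided, rostered):.0%}")  # 100%
```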

Conclusion

In Australia, various high-profile reviews have identified that performance and culture are partly driven by measures and rewards. We are still amazed that so many businesses continue to use customer-related measures that don’t work or could be improved. We hope this paper has provided some valuable alternatives, and we are happy to provide more detail on any of these ideas that are of interest. Please get in touch if you would like more information by emailing info@limebridge.com.au or calling 03 9499 3550. More details are at www.limebridge.com.au
