The Resolution Holy Grail

The First Contact Resolution Conundrum


When a customer picks up the phone, starts a chat, sends an email or heads to a branch, they do so for a reason. According to most customer research, the customer’s number one goal is to resolve their problem or complete their transaction, which makes First Contact Resolution (FCR) the priority for customers. By First Contact Resolution, we mean that the customer gets what they need in one interaction and doesn’t need a second contact. (See the note below on the difference between First Contact Resolution and First Point Resolution (FPR).)


When things do not get sorted or resolved on first contact, customers repeat the contact or try a different channel. These second and third attempts take longer because customers are even more determined to get resolution: second contact handle time is on average 30-50% higher than first contact handle time (see the example from a typical LimeBridge study), so lack of resolution drives disproportionately more work. After multiple resolution failures customers either give up, leave the business, or escalate to a complaints body. Lack of resolution therefore drives certain types of failure demand and increased cost.


There is little dispute that resolution really matters to the customer and the business. The conundrum is that few organisations can measure it well, or measure its unwelcome sibling: repeat contact. Some companies claim they can measure FCR, but often their measure is an approximation or best guess, as this paper will explore. You may think you have a good handle on resolution; this paper may make you question that. Even where organisations have some overall measure of FCR, few can measure it accurately at the process or front-line agent level, and even a summary-level measure is rarely used to manage individual performance and drive improvement. If we are right that most organisations cannot measure resolution and repeat contact accurately or in detail, then we can also be confident that those outcomes are not being managed well.


This paper will explore:

  • methods used to measure resolution and repeats and whether they are accurate or effective

  • alternative measurement and management mechanisms that could be more effective

In our next paper we’ll look at what causes low resolution and ways to address that.

1. Measuring Resolution


There are two common mechanisms that attempt to measure resolution: post-contact surveys and counting a repeat contact within a limited period (also known as presumed repeats). In a recent US survey of contact centres, these two mechanisms accounted for the 60% of respondents who said they could measure FCR; the other 40% admitted they couldn’t. Unfortunately, both mechanisms have significant flaws.


Post-contact surveys ask the customer “did we resolve your enquiry” with a Yes/No type answer. That is less effective than it looks because often customers don’t know. An agent may have said “I’ll get that done for you”, but often that involves a request to another area that may or may not be effective or timely. Often customers are sent away to take action and therefore do not know whether their problem is sorted until they pursue those actions. At best, asking the customer provides an “impression of resolution”.

Few of these surveys give the customer the option of a “maybe” or “I’m not sure yet” type answer. The last problem with these surveys is that agents often “load the dice”. Many post-contact surveys require the customer to hold on the line to take the survey, so agents control whether the survey is promoted, and many avoid promoting it when they know they have not sorted the problem. The resulting samples are biased towards customers whose problems were sorted and therefore give a false impression of resolution success. In some studies we’ve found organisations believing they have 70-80% resolution when a more detailed assessment using more precise mechanisms shows it tracking at 50-60%.
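The size of this selection bias is easy to illustrate. The sketch below uses purely hypothetical numbers (a true resolution rate of 55%, with the survey offered far more often after resolved contacts) to show how a biased survey sample can report an FCR in the 70-80% range described above:

```python
# Sketch of survey selection bias (all figures are illustrative assumptions).
# Agents offer the post-contact survey more often when they know the
# problem was sorted, so the surveyed sample over-represents resolved contacts.
true_resolution = 0.55          # hypothetical true FCR
offer_if_resolved = 0.90        # survey offered on 90% of resolved contacts
offer_if_unresolved = 0.40      # ...but only 40% of unresolved ones

resolved_surveyed = true_resolution * offer_if_resolved
unresolved_surveyed = (1 - true_resolution) * offer_if_unresolved
observed = resolved_surveyed / (resolved_surveyed + unresolved_surveyed)

print(f"true FCR: {true_resolution:.0%}, survey-reported FCR: {observed:.0%}")
# With these assumptions the survey reports roughly 73% against a true 55%.
```

Even moderate differences in who is offered the survey inflate the reported rate well above the true one.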


The second mechanism often used for FCR measurement is to infer it from an estimate of repeat contact. The company counts a contact as a repeat if the customer makes contact again, usually in the same channel, within a limited time window (e.g., two calls in seven days). That sounds logical, but measuring repeat contact this way is also a guesstimate. One Telco assumed that any customer who called or emailed twice in seven days was a repeat, then subtracted this rate of repeats from all calls to estimate a resolution rate. This approach had several flaws:

  • It did not cover cross-channel contact, so a chat followed by an email, or an email followed by a call, did not count as a repeat

  • The seven-day window did not work for contact types where the customer might not call back within seven days. For example, they might not understand the outcome until they saw the next bill weeks later, tried to use the same service, or next tried to pay. That could be weeks away and so would not be trapped by a seven-day repeat window.

  • If a customer called about two different issues, the calculation could not tell that the problems were unrelated and counted the second call as a repeat.
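The Telco-style calculation above can be sketched in a few lines. This is a minimal illustration with hypothetical contact records; the comments mark the two flaws it bakes in (cross-channel repeats are missed, and contacts outside the window are missed):

```python
from datetime import datetime, timedelta

# Naive repeat estimate, as described above: a contact counts as a repeat
# if the same customer contacted us in the SAME channel within the
# previous seven days. Contact records are hypothetical.
contacts = [
    {"customer": "C1", "channel": "call",  "when": datetime(2024, 3, 1)},
    {"customer": "C1", "channel": "call",  "when": datetime(2024, 3, 4)},   # counted
    {"customer": "C2", "channel": "chat",  "when": datetime(2024, 3, 2)},
    {"customer": "C2", "channel": "email", "when": datetime(2024, 3, 3)},   # missed: cross-channel
    {"customer": "C3", "channel": "call",  "when": datetime(2024, 3, 1)},
    {"customer": "C3", "channel": "call",  "when": datetime(2024, 3, 20)},  # missed: outside window
]

WINDOW = timedelta(days=7)
repeats = 0
for i, contact in enumerate(contacts):
    prior = [p for p in contacts[:i]
             if p["customer"] == contact["customer"]
             and p["channel"] == contact["channel"]
             and contact["when"] - p["when"] <= WINDOW]
    if prior:
        repeats += 1

repeat_rate = repeats / len(contacts)
inferred_fcr = 1 - repeat_rate   # the approximation the Telco used
```

Here only one of the three genuine repeats is counted, so the inferred resolution rate is flattered.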

These two mechanisms, surveys and approximated repeat rates, are used to measure FCR, but neither is very accurate. Many organisations acknowledge this lack of accuracy and therefore don’t do much with the data; it’s rarely used to call out issues or drive improvement.


2. Better Measurement Mechanisms – low and high tech


There are measurement mechanisms that work, but we see them used less often than the inaccurate surveys and “guesstimated” repeats. We’ll describe a low-tech and a high-tech approach.


2.1 Low tech measurement

A simple “low-tech” mechanism to measure resolution and repeats is to build them into quality sampling and other contact observation, e.g., team leader observations. In most contact centres, calls and emails are sampled by a quality team or supervisors. If someone is going to take the time to assess how a call, chat or email was handled, why not also assess:

  1. Whether it was a repeat contact

  2. What outcomes were achieved, thereby tracking resolution

That form of sampling will provide a very good indication of the levels of repeat contact and resolution overall.

Assessment can be objective, based on agreed definitions of what is resolved and what isn’t. For example, if a quality team samples 1000 contacts a month and the sample is random, that should give a good indication of resolution and repeats for the operation. In our diagnostic analysis work we record whether customers say a contact is a repeat at the start of the contact and categorise the “outcomes” at the end. This provides two measures of resolution that can be correlated (although it slightly underestimates repeat contacts, because customers do not always mention them). This kind of sample can show which channels were involved in prior contact (see the chart of repeat contacts) and break down repeats by contact driver.
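A sample of this size is statistically meaningful. As a rough check, the standard binomial approximation gives the sampling error on a rate estimated from 1000 randomly sampled contacts:

```python
import math

# Rough 95% margin of error for a proportion estimated from a random
# quality sample (standard binomial approximation; figures illustrative).
n = 1000   # contacts sampled per month
p = 0.5    # worst-case proportion, giving the widest interval

margin = 1.96 * math.sqrt(p * (1 - p) / n)
print(f"margin of error: \u00b1{margin:.1%}")   # about +/-3.1 percentage points
```

So a monthly sample of 1000 pins the operation-wide resolution or repeat rate to within roughly three percentage points, which is ample for tracking trends, though not for judging individual agents.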

An assessor can also analyse the outcome at the end of the contact: they can hear what actions are agreed and whether resolution is achieved or a transaction completed. This produces an analysis of resolution and explains the non-resolution outcomes such as “customer to act” or “task sent to department X”. The result is a summary of resolution and all the diverse ways in which non-resolution occurs, which starts to paint a picture of what needs to be tackled to increase resolution (see the Resolution and Outcomes of Inbound Calls chart).




This low-tech approach samples the contact “at both ends”: it notes customers saying the contact is a repeat and analyses resolution and non-resolution on completion. It provides a reasonably accurate “macro view” of repeats and shows situations where resolution is not achieved. The sample may not be big enough to assess individual employees, but it can be broken down by contact type and channel to enable further analysis, and it is certainly accurate enough to indicate whether an organisation is getting better or worse and to show barriers to resolution. For example, a recent analysis showed that in 20% of cases customers were sent away to act, in a further 25% the front-line staff had to make requests of other departments, and in a further 3% there was no clear resolution or action at the end of the call. So in 48% of contacts the call was not resolved. That correlated with a repeat contact rate of over 25%.
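The outcome breakdown from that example can be tallied directly from the sampled categories. The category names below are illustrative labels for the figures quoted above:

```python
# Shares of sampled contacts by end-of-contact outcome
# (percentages from the example above; labels are illustrative).
outcomes = {
    "resolved on the call": 0.52,
    "customer sent away to act": 0.20,
    "request raised with another department": 0.25,
    "no clear resolution or action": 0.03,
}

# Outcome shares should cover the whole sample.
assert abs(sum(outcomes.values()) - 1.0) < 1e-9

unresolved = sum(share for outcome, share in outcomes.items()
                 if outcome != "resolved on the call")
print(f"not resolved on first contact: {unresolved:.0%}")   # 48%
```

Summing every category other than “resolved on the call” is what turns a set of outcome codes into a single non-resolution rate.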


2.2 High tech measurement


The analytics tools available today can be more accurate in assessing repeats and resolution.

Speech and text analytics can be configured to track the drivers of contact using a defined set of reasons across channels. The analysis can then look for the same contact driver (e.g., “where’s my package?”) “repeating” for a customer within a defined period, such as a customer saying “my XYZ isn’t working” twice in seven days. The analysis can be made more sophisticated by measuring related contact reasons: emails of type A might “spawn” calls of type B, C or D, and when those contacts occur they have been created by lack of resolution of the first contact. For example, customers might call to ask “where’s my package?” and call back a few days later to cancel the order or demand a refund. The analytics tools can look for the same reasons repeating and for the related reasons that indicate non-resolution, and this analysis can look across channels to see related contacts in email, chat, and calls.


Analytics can be made even more sophisticated by varying the period used to define a repeat contact, so that each contact reason has a tailored period. Some repeats may be measured in days, some in weeks, while others might be measured over a complete billing cycle spanning months; the analytics can cope with a range of measurement periods. These techniques create a much more accurate view of repeats and therefore allow resolution to be inferred more accurately. Because they can measure large volumes of data (often all contacts in all channels), they can provide continuous reporting and allow drill-downs at the level of:

  • Agent performance

  • How different channels perform

  • How each process is performing.

This is much more insightful when trying to take action. With this kind of measurement, an organisation can spot agents who are not resolving well, and processes or channels that have low resolution.
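The core matching logic described above, cross-channel repeats, per-reason windows, and “spawned” related reasons, can be sketched as follows. All reason names, window lengths and records here are hypothetical, not a vendor’s actual configuration:

```python
from datetime import datetime, timedelta

# Sketch of the analytics approach described above: repeats are detected
# across channels, each contact reason has a tailored repeat window, and
# some reasons are treated as "spawned" by an earlier unresolved one.
REPEAT_WINDOW = {
    "wheres_my_package": timedelta(days=7),
    "billing_query": timedelta(days=45),   # spans a full billing cycle
}
SPAWNED_BY = {
    "cancel_order": "wheres_my_package",   # cancellations and refunds often
    "refund_request": "wheres_my_package", # follow an unresolved delivery query
}

def is_repeat(contact, history):
    """True if this contact repeats, or was spawned by, a prior contact
    from the same customer within the reason's window, in any channel."""
    root = SPAWNED_BY.get(contact["reason"], contact["reason"])
    window = REPEAT_WINDOW.get(root, timedelta(days=7))
    return any(
        p["customer"] == contact["customer"]
        and p["reason"] == root
        and timedelta(0) <= contact["when"] - p["when"] <= window
        for p in history
    )

history = [{"customer": "C1", "channel": "email",
            "reason": "wheres_my_package", "when": datetime(2024, 3, 1)}]
followup = {"customer": "C1", "channel": "call",
            "reason": "refund_request", "when": datetime(2024, 3, 4)}
print(is_repeat(followup, history))   # the refund call is flagged as a repeat
```

Because matching is on customer and reason rather than channel, the email-then-call sequence that the naive seven-day calculation misses is caught here.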


One organisation became so sophisticated at this that it could track each agent on the repeats they generated versus those they resolved, and thereby assess agent performance (see chart).
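One plausible way to compute a generated-versus-resolved tally (this is our illustrative reading, not that organisation’s actual method) is to chain each customer’s contacts about the same issue: every agent whose contact was followed by another one “generated” a repeat, and the agent who handled the final contact “resolved” the chain:

```python
from collections import defaultdict

# Illustrative agent-level tally: each chain lists, in order, the agents
# who handled one customer's contacts about the same issue. Names and
# chains are hypothetical.
contact_chains = [
    ["anna"],                  # resolved first time
    ["ben", "anna"],           # ben generated a repeat; anna resolved it
    ["ben", "ben", "carla"],   # ben generated two repeats; carla resolved
]

generated = defaultdict(int)
resolved = defaultdict(int)
for chain in contact_chains:
    for agent in chain[:-1]:
        generated[agent] += 1   # every non-final contact led to a repeat
    resolved[chain[-1]] += 1    # the final contact ended the chain

for agent in sorted(set(generated) | set(resolved)):
    print(f"{agent}: generated {generated[agent]}, resolved {resolved[agent]}")
```

A caveat worth noting: a chain can also end because the customer gave up, so in practice the final contact’s outcome category should be checked before crediting a resolution.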



Another organisation was able to track contact reasons and then use this style of analytics to report the repeat rate against each reason. This produced a chart like that shown below:



That created visibility of the contact drivers with low rates of resolution and showed how resolution rates were trending. The organisation now had data it could use to decide which processes needed investigation and investment. It even produced another view that looked at NPS scores by contact reason and showed the NPS impact of repeats, providing visibility of the priorities for lifting customer satisfaction and loyalty.


Summary


There is nothing customers like to hear more than a front-line person saying, “yes, I can do that for you now”. In our experience it is often a combination of factors that limits resolution, and it’s only when the real levels of resolution and repeat work are exposed that the flaws become apparent and the true costs of non-resolution emerge. It’s worth investing in the more accurate ways of measuring resolution and repeats we have described, because resolution is so important to the customer and the improvement opportunities become clearer. Look out for our next paper, where we’ll explore how to improve resolution. If you would like to discuss these ideas further, please get in touch at info@limebridge.com.au or call 03 9499 3550 or 0438 652 396.

Note: First Contact Resolution (FCR) v First Point Resolution (FPR)


First Contact Resolution (FCR) means that the interaction as a whole resolves the problem.

First Point Resolution (FPR) means the first person the customer dealt with sorts it out. Many operating models aim for FCR rather than FPR. For example, branch models with a “meeter/greeter” transfer the customer between staff to achieve FCR, not FPR. AAMI’s human switchboard does something similar in the call centre space: the first person does not even try to resolve the problem; they get you to the team that should be able to help.
