
AI May Be Harder Than It Looks

Learning lessons from past waves of automation to enable AI

 

The latest explosion of AI solutions driven by Large Language Models (LLMs) has huge potential. One recent article listed twenty-five different ways they could be used to drive improved customer experiences and efficiencies in customer-facing operations. We agree that the potential is there. However, as with each previous technology wave, the vendors' hype doesn't always come clean on the work or information needed to make these solutions a success. There is also often limited discussion of the broader operating model implications that must be addressed to maximise the benefits.

 

Each recent wave of customer-facing technology has had implications broader than just greater use of self-service. For example, the most recent wave of digital solutions has often removed a layer of simple interactions. This leaves a more complex mix for the manned interactions that remain. It also means that customers expect staff to know how the self-service works, which creates additional knowledge needs.

 


The operating model required for a significant mix of complex interactions is very different from that needed for a full range of simple and complex interactions. In one model you can use the simple interactions to train new starters, but in a model stacked with complexity, who you hire and how you train need to be different. (We predicted these impacts in "The Frictionless Organization".) That is just one example of how these technologies can "reshape" operating models.

 

In this paper we'll look at some AI-related technologies that have been around for five or six years and what they can teach us about what will make this next wave a success. The example we've chosen is automated quality assessment using AI, a solution that has been in place for five or more years, although many organisations are yet to adopt it. We think the lessons learnt from that wave of solutions apply to the LLM wave. We'll cover three areas:

 

- The building blocks that these solutions need

- The potential to transform rather than just automate past ways of working

- The indirect and broader operating model implications

 

The Building Blocks

 

Knowing what good looks like

The first wave of AI quality solutions was only as good as the processes they were asked to check against. LLMs appear to have the same characteristic: they can only supply information that exists or follow processes that are documented somewhere. LLMs are amazing at assimilating existing information, but they can only assimilate what exists. The first wave of automated quality assessments could measure a process if the steps, and the conditions for those steps, were clearly documented. They could identify, for example, that Step X was being followed or that certain words were or weren't used. However, if the organisation hadn't documented when Step X was to be used and when it should be preceded by Step Y or Step Z, then the AI assessment couldn't check the process.

 

Therefore, an essential building block for effective automated quality checking is to have defined "what good looks like" at the right level of detail. Ironically, the most common gap we find in organisations is this "definition of good". We're not talking here about work instructions that explain how to complete a screen field by field. We're talking about process guidance that explains the best sequence for an interaction and the way to take key decisions, such as whether to charge a fee or promote a product. It's hard to get this level of process definition right and to combine information and decision supports that exist in different forms. Because process definitions rarely existed at this level, the first wave of automated quality solutions often couldn't score accurately, and the assessments weren't trusted or as useful as they could have been.

 

Inside-out processes (rather than outside-in)





A second barrier to this kind of automation arises when systems and processes actually force a poor process. For example, we have had multiple clients where the sales process was dictated by the needs of marketing or underwriting, forcing the customer through a complex process that asked for too much information up front and made the experience cumbersome. Many online digital applications still do that today, forcing customers to input field after field when the customer's need is to get to a quotation fast. Automating assessment or execution of a bad process just embeds these bad traits. The value that an organisation will get from automation is limited if the underlying process is ineffective.

 

Both these issues illustrate the need to design the process before any kind of automation. The current wave of automation via AI is no different: the information and processes need to be well documented and thought through before the AI can go to work. Automated quality assessment of ill-defined processes will produce highly variable scoring that doesn't align with the experience the organisation wants to deliver. If the automated assessment scores against a poor process, it will just assess alignment to a bad process. The most successful automation projects we've seen rethink processes first, and AI is no different.

 

Transforming the process (rather than just automating)


Automated quality solutions using AI offered the potential to "transform" the way quality assessment works, not merely automate it. We think they can pull at least three additional levers: volume, speed, and scope. Let's explore each.

 

Volume:

AI-based solutions in the quality area can check nearly all interactions. That is an amazing step up from sampling approaches that often include 1-2% of interactions at most. For the first time, the quality system can be statistically valid and provide team leaders and coaches with continuous and nearly complete feedback on front line performance. This allows for more analysis and insight into where staff need training or coaching. These complete samples can reveal systemic issues and gaps across the whole workforce and be used to assess training and process gaps. They can also provide a complete picture of any compliance issues or customers where "repair" may be needed.
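To illustrate the statistical step up, a back-of-envelope sketch of the margin of error on a quality "pass rate" under a 1% manual sample versus near-complete automated coverage. The call volumes and pass rate here are illustrative assumptions, not client data.

```python
# Rough comparison of quality-score precision: 1% manual sample vs
# near-100% automated coverage. All figures are illustrative.
import math

def margin_of_error(sample_size: int, pass_rate: float = 0.9) -> float:
    """Approximate 95% margin of error for an observed pass rate."""
    return 1.96 * math.sqrt(pass_rate * (1 - pass_rate) / sample_size)

monthly_calls = 20_000
manual_sample = int(monthly_calls * 0.01)   # a 1% manual sample: 200 calls
automated_sample = monthly_calls            # AI assesses effectively every call

print(f"manual sample of {manual_sample}: ±{margin_of_error(manual_sample):.1%}")
print(f"automated, all {automated_sample}: ±{margin_of_error(automated_sample):.1%}")
```

On these assumed numbers, a 90% quality score from the manual sample carries a margin of roughly ±4%, while the automated view narrows it to a fraction of a percent, which is why score movements finally become trustworthy signals.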

 

Speed:

For some years, AI solutions have been available to assess interactions in near real time. This is a game changer for compliance monitoring. Rather than flagging issues "after the fact", front line staff can be warned and managers notified in real time of compliance breaches that need immediate remediation. In financial services, some banks have deployed real-time "analytics" to monitor every interaction and spot issues like incorrect or invalid advice. The same technologies can also be used to connect front line staff to the right process and provide process guidance, with the AI searching knowledge tools and websites to support the agent. In effect, this moves an organisation from "reactive" quality checking to proactive monitoring.
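As a much-simplified sketch of the kind of real-time check involved: scan each utterance as it arrives and raise an alert when a flagged phrase appears. The phrase list and alert format are hypothetical, not any specific vendor's implementation (real systems use trained models rather than keyword lists).

```python
# Simplified illustration of real-time compliance flagging on a live
# transcript stream. Phrases and alert structure are assumptions.

FLAGGED_PHRASES = [
    "guaranteed return",            # potentially invalid financial advice
    "no risk at all",
    "you don't need to read the terms",
]

def check_utterance(utterance: str) -> list[str]:
    """Return any flagged phrases found in a single agent utterance."""
    text = utterance.lower()
    return [phrase for phrase in FLAGGED_PHRASES if phrase in text]

def monitor(transcript_stream):
    """Scan utterances as they arrive and yield an alert per breach."""
    for turn, utterance in enumerate(transcript_stream):
        hits = check_utterance(utterance)
        if hits:
            yield {"turn": turn, "breaches": hits}

alerts = list(monitor([
    "Thanks for calling, how can I help?",
    "This product has a guaranteed return, no risk at all.",
]))
# alerts -> [{"turn": 1, "breaches": ["guaranteed return", "no risk at all"]}]
```

The point of the sketch is the shape of the workflow: because the check runs per utterance rather than per completed call, the warning can reach the agent or their manager while the conversation is still live.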

 

 

Scope:

 



AI tools for quality have the capacity to analyse and assess many things that weren't possible with manual checks limited by time and resources. For example, the tools can draw in other information, such as levels of repeat contact, and assess resolution and outcomes from interactions. Separate AI tools are being deployed to provide insights on the causes of interactions, but AI quality tools can also be scoped to provide this. If the tools are sampling all interactions, be they calls or messaging, they have great potential to provide these broader insights. We have always advocated that quality measurement should provide insights on repeat work and call drivers, even when sampling is done by humans, and these AI tools can make this a reality.

 

That potential doesn't come without effort. Training a bot to assess the reasons for an interaction takes time and effort (which is why separate AI solutions are emerging). One early attempt at this listed 600 "reasons for call". The bot needed to be trained that "where's my bill" and "I haven't received my bill" are the same intent, as are about another 40 variations on the same theme. The bots can deliver these extra insights if trained to do so. We even had one client who got agents to state the call reasons on calls so that the automated monitoring could pick up these "voice tags" to train the machine.
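The training task above amounts to collapsing many surface phrasings into one canonical intent. A minimal sketch, with hypothetical variant phrases and intent names (real tools learn these mappings from examples rather than a hand-built table):

```python
# Illustrative mapping of raw "reason for call" phrasings to canonical
# intents. Variants and intent names are invented for the example.

INTENT_VARIANTS = {
    "where's my bill": "bill_not_received",
    "i haven't received my bill": "bill_not_received",
    "my bill hasn't arrived": "bill_not_received",
    "cancel my account": "cancellation_request",
}

def classify(utterance: str) -> str:
    """Map a raw phrase to a canonical intent, or flag it as unclassified."""
    return INTENT_VARIANTS.get(utterance.strip().lower(), "unclassified")

print(classify("Where's my bill"))             # bill_not_received
print(classify("I haven't received my bill"))  # bill_not_received
print(classify("Why is the sky blue?"))        # unclassified
```

The "unclassified" bucket matters in practice: its contents are exactly the variants the bot still needs to be taught, which is why the first pass produced 600 raw "reasons for call" before consolidation.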


Operating Model Implications

 

There are several operating model implications of using an AI solution for something like quality monitoring. With 100% monitoring and the need to "train the machine", the quality monitoring team are no longer needed as "checkers". However, they may have a role to play in fine-tuning the assessments and providing greater insight and analysis. The volume of information, and its potential uses, also changes the way team leaders and coaches work with quality information. They need to learn how to use this volume of information and where to focus their efforts. In the pre-AI world, team leaders might be informed of an average quality score and asked to check one or two "quality fails". In the AI-enabled world, they have to learn which information to act on, because they now have a complete view of performance.

 

There are also some other "substitution" impacts. Automated quality solutions sampling every call can also be trained to assess customer sentiment and aspects of the customer experience. This can trigger a reassessment of the purpose of, and need for, post-interaction surveys. It changes both the purpose of the surveys and, in some instances, the sampling approach. For example, it could trigger a request for feedback in selected situations, or a different type of feedback in other scenarios. Real-time quality assessment could also trigger surveys in some instances, such as when a customer expresses dissatisfaction. Again, the organisation can start to be proactive and pick its moments to request customer feedback.

 

The final operating model implication concerns who now has access to this data and what it means for measures and scorecards. The volume of quality data can make it a more important measure for individuals and teams. Where small, manually created samples meant taking note of whatever problems or exceptions were found, the AI-driven approach means that team leaders can look for trends across many aspects of the interaction. The technology enables managers to search for particular wording or patterns of behaviour. It also means that management can look for trends across an entire operation. Therefore, management and coaching processes need to adapt to exploit the AI-enabled data.

 

Summary

In this paper, we hope we have explained that unlocking the potential of AI is not a free kick. AI is not a miracle cure if processes are poor or poorly documented. There is also great potential to work differently rather than just automate the old way of working. There are some amazing tools and tricks available, and far more detail than we could include here. If you would like to discuss this further, please get in touch at info@limebridge.com.au or call 03 9499 3550 or 0438 652 396.