Groundhog Day for the Bots - Why the new wave of technology must heed the lessons of the past
Teach the machines better or face the wrath of customers
It feels like “Groundhog Day” for many customers as the latest wave of sales and service technologies, in the form of chat and speech bots, produces customer hostility. A 2018 survey found that 51% of US customers disliked chatbots. A recent survey found that 49% of Australian customers would rather wait five minutes to talk to someone than try to use a chatbot. History seems to be repeating itself with yet another wave of technology that customers are learning to dislike. Chatbots clearly offer many benefits, such as continuous availability, low running costs and consistency of execution. It is therefore a concern that so many customers are already against these technologies, and this paper explores the reasons and some possible solutions.
In contrast to bots, automated speech applications like Apple’s Siri, Amazon’s Alexa and Google Home seem to be growing in popularity.
They demonstrate the potential for automation and, together with other apps and digital solutions, have set the benchmark for customers’ expectations of what is possible. Why, then, are consumers turning away from the latest automation technologies?
In tests we have performed, the latest wave of chat and speech bot implementations has had limited capabilities, frustrating ways of working and internal reports of success that didn’t match the customer experience. Problems we have observed include:
Customers are forced to use bots even when they offer solutions for only a minority
Many of the chatbot dialogues are unhelpful, repetitive and irrelevant
Customers’ effort in the interaction isn’t measured properly
The bots isolate the customer from other interaction mechanisms – escape seems hard
The good news is that we think that improvement is possible and problems can be addressed. The solutions involve applying some lessons from the past and disciplines that have been used with all forms of customer interaction. In this paper we will explore five lessons that we think the bots (and their designers) need to re-learn:
The 80/20 rule and leaky funnels
The tricks of good conversations
Keep the bots honest with better measures of the experience
The bots need to be “team” players
The best support is still no support
The 80/20 Rule and Leaky Funnels
The business cases for automation through chat and speech bots seem compelling. A typical case suggests that the bot can answer 20% to 30% of enquiries, usually the simpler ones, and some vendors claim even better rates. Automating 20% to 30% of interactions offers significant savings in many companies and often makes a compelling case if no other customers are impacted. However, that can mask hidden costs. If a bot can handle 20% to 30% of enquiries, it means that 70% to 80% of customers fail when they attempt to use the bot. Even 50% automation means 50% fail. Those who fail will have a negative experience and be frustrated. That can impose costs such as extended-duration contacts in other channels, lost customers and brand damage.
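The arithmetic above can be made concrete in a minimal sketch. All figures here are hypothetical assumptions for illustration, not benchmarks; the point is that a complete business case subtracts the hidden cost of failed bot attempts from the gross saving.

```python
# Illustrative business-case arithmetic (all figures are hypothetical assumptions).
def bot_business_case(contacts, automation_rate, staffed_cost, failure_rework_cost):
    """Net saving once the cost of failed bot attempts is included."""
    automated = contacts * automation_rate          # enquiries the bot resolves
    failed = contacts * (1 - automation_rate)       # customers who try the bot and fail
    gross_saving = automated * staffed_cost         # staffed contacts avoided
    hidden_cost = failed * failure_rework_cost      # longer repeat contacts, frustration
    return gross_saving - hidden_cost

# 100,000 enquiries, 25% bot success, $5 per staffed contact,
# $2 extra handling cost for each failed bot attempt.
net = bot_business_case(100_000, 0.25, 5.0, 2.0)
print(net)  # -25000.0: the "compelling" case is under water once failure is costed
```

With these assumed numbers the gross saving of $125,000 is outweighed by $150,000 of hidden failure cost, which is exactly the trap the paragraph above describes.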
Some organisations will not accept a solution in which the majority of customers fail. For other, less customer-centric businesses, the temptation of removing a large number of “staffed” interactions may be compelling regardless of the impact on the broader customer base. Many business cases focus on the benefits of bot success rather than the costs of failure, so hidden costs such as customer loss and frustration are not assessed. A complete business case covers both.
An alternative answer is to use what we call a leaky funnel approach for those who will fail in the automation. Ideally, customers for whom the bot won’t work bypass the bot altogether. That’s often not practical, so the next best solution is that they fall “out of the funnel” as early and as effortlessly as possible.
A bot “bypass” can occur if the bot is transparent about what it can and can’t do, so customers can choose not to use it or can exit fast. Once in the bot, it can be designed to recognise failure quickly and offer other options. A third protection is to set rules on how often customers can fail before alternatives are offered, rather than trying to “contain” customers in a bot where they fail repeatedly. These three mechanisms help minimise the impact of failure by getting customers to “leak out of the funnel” fast. Unfortunately, bot design tends to focus more on those for whom the automation works than on those for whom it fails.
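The “recognise failure quickly and offer an exit” rule can be sketched as a simple failure counter. This is a hypothetical illustration, not a vendor API: the class name, the intent set and the two-strike threshold are all assumptions made for the example.

```python
# Hypothetical sketch of the "leaky funnel" rules described above:
# recognise failure quickly and offer an exit after a small number of misses.
MAX_FAILURES = 2  # assumed threshold before another channel is offered

class LeakyFunnelBot:
    def __init__(self, known_intents):
        self.known_intents = known_intents  # what the bot can actually handle
        self.failures = 0

    def respond(self, intent):
        if intent in self.known_intents:
            self.failures = 0               # success resets the counter
            return f"HANDLE:{intent}"
        self.failures += 1
        if self.failures >= MAX_FAILURES:
            # Leak out of the funnel: hand over instead of "containing" the customer.
            return "ESCALATE:live_agent"
        return "CLARIFY:could you rephrase?"
```

The design choice worth noting is that escalation is a first-class outcome, not a dead end: after two misses the customer is offered a person rather than a third attempt.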
The tricks of good conversations
Many bot and AI providers say that their methodology involves learning from thousands of calls or chat strings. These methods assume that the AI will be able to figure out what good looks like from what occurs today and that somewhere in today’s calls or chats there is “best practice”. In our experience the best way to handle calls, chat and emails is often missing in organisations because agents have been poorly trained or have learnt bad habits. There may be no magic data pool of “best practices” to teach the bots and their designers.
Even the best agents in a company often need help to work out the best approach. They may be the best in that business, but that doesn’t mean they couldn’t be better or learn from external expertise. There are many principles and tricks of good service design that even the best agents may not have figured out, such as the best sequence of questions and how and when to ask multiple questions. Front-line staff are also better than bots at filtering information and retaining context.
A proven way to design best practice for staff and bots is to combine the collective experience of in-house staff and external experts. External designers bring new ideas to an organisation: they can apply proven design principles, like the leaky funnel idea discussed above, and challenge the way processes work today. Front-line staff bring their knowledge of how things work today and what customers want. Bot designers can be added to the mix, as they know the capabilities of the technology.
Such a collaborative design workshop typically takes three to four weeks of intense work and delivers a set of better practices for both front-line staff and the bot. Both the automation and the manned service channels benefit. The automation and analytics can then go to work to refine these ideas further and monitor their effectiveness. This process gets to a solution faster, delivers better practices and recognises the differences between people and bots.
Keep the bots honest with better measures of the experience
Many bot providers report apparent automation success that is at odds with customer feedback on bot performance gathered from post-interaction surveys. We saw an example where the bot reports claimed a success rate of over 70%, but customer experience measurement showed negative sentiment towards the bot. That is one example of misleading or confusing reporting of bot performance. Problems include:
Confusion over what success looks like so that bot vendors count as success what the customer and organisation see as a failure
Confusing reports that the vendor understood but the client didn’t
Provision of very low “automation” success rates without explanation
Limited or no information on customer effort through the process
Measuring bots’ success isn’t easy because bot vendors have to use proxies for success. For example, bot technology can measure that the bot provided an answer and the customer then exited the system. That sounds like success but can include failures such as customers giving up. Bot reporting of success can therefore mask the worst failures, where the bots have provided answers that totally miss the point. In another example, the bot report counted connecting a customer to a live agent as success, when the bot’s purpose was to prevent that.
A better answer is to look at both bot and non-bot data to get the true picture. Bot reporting needs to be reconciled with volume data and causes of contact in other channels, such as live chat or live calls, to see what the bot is really achieving. That enables an organisation to validate what the bot thinks is happening. Asking customers whether the bot correctly answered their query is another solution, but it puts customers to work and should be used sparingly.
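The reconciliation idea can be sketched in a few lines. The function name and the figures are hypothetical assumptions for illustration: the bot’s claimed successes are netted off against follow-up contacts that arrived in other channels afterwards.

```python
# Hypothetical reconciliation: compare what the bot claims with what other
# channels actually received afterwards (all figures are illustrative).
def reconcile(bot_reported_success, bot_sessions, followup_contacts):
    """True success rate once repeat contacts in other channels are deducted."""
    claimed = round(bot_reported_success * bot_sessions)   # successes the bot reports
    real_success = max(claimed - followup_contacts, 0)     # net of repeat contacts elsewhere
    return real_success / bot_sessions

# Bot claims 70% success on 10,000 sessions, but 3,500 of those
# customers rang or chatted anyway about the same issue.
rate = reconcile(0.70, 10_000, 3_500)
print(rate)  # 0.35 - half the claimed rate once the funnel leaks are counted
```

Crude as it is, a check like this is often enough to expose the gap between vendor reports and the customer experience described above.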
A second problem is that customer effort isn’t always measured and reported for the bot experience. Getting to an answer is good, but if it takes many iterations, customers will be frustrated. The number of interactions and the length of the dialogue are indicators of customer effort. Examples of effort measures that bots should report include:
how many times did the customer have to provide information?
how many times did the customer redirect the conversation or repeat something?
how long did the customer spend in the conversation?
Understanding the customer’s effort in the bot solution will also provide insight into the real effectiveness of the conversation. Breaking this down by type of problem can help fine-tune the worst-performing parts of the bot.
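The three effort measures listed above can be computed directly from a logged transcript. This is a minimal sketch under assumed names: the `Turn` record and its fields are hypothetical, standing in for whatever structure a real bot platform logs.

```python
# Hypothetical effort metrics computed from a logged bot transcript.
from dataclasses import dataclass

@dataclass
class Turn:
    speaker: str      # "customer" or "bot"
    text: str
    repeated: bool    # did the customer repeat or redirect the conversation here?
    seconds: float    # time spent on this turn

def effort_metrics(transcript):
    customer_turns = [t for t in transcript if t.speaker == "customer"]
    return {
        "info_requests": len(customer_turns),                     # times the customer had to provide information
        "repeats": sum(1 for t in customer_turns if t.repeated),  # repeats / redirections
        "duration_s": sum(t.seconds for t in transcript),         # total time in the conversation
    }
```

Aggregating these metrics by problem type, as suggested above, is then a matter of grouping transcripts before calling the function.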
Bots as team players
Customers hate getting stuck in a bot. Despite all the hype about “omnichannel” solutions, many bots seem to be designed with an intent to “contain” the user and prevent use of other channels. Some bot design white papers even see it as a mistake to advertise or link to other channels. No wonder customers are frustrated when bot designers don’t recognise their limitations and work against, rather than with, the other channels.
Bots and their designers need a little humility. They need to recognise what they can and can’t do and offer other channel solutions for queries they can’t handle or don’t understand. That can be a connection to a live chat agent, a redirect to the phone, or an offer to call the customer back. Customers will have more confidence in bots if they know they will be connected to a person when the bot can’t handle their query.
Bots will be better regarded if they act as a form of “level one” support for manned channels. Rather than operating as a self-contained channel, they need to be integrated with the manned channels. Bot experiences will be better if they fail fast, as discussed above, connect to other channels quickly and supply appropriate context. If the bot passes over context, such as the question it couldn’t answer and where the customer was in the system, customers will feel their time with the bot was well spent because the agent picks up where the bot left off.
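The handoff just described amounts to passing a small context payload to the agent desktop. The field names below are hypothetical, for illustration only; the point is that the unanswered question, the customer’s position in the flow and the transcript travel with the escalation.

```python
# A hypothetical handoff payload, so a live agent inherits the bot's context
# and the customer never has to repeat themselves. Field names are assumptions.
import json

def build_handoff(customer_id, unanswered_question, dialogue_step, transcript):
    return json.dumps({
        "customer_id": customer_id,
        "unanswered_question": unanswered_question,  # the query the bot couldn't handle
        "dialogue_step": dialogue_step,              # where the customer was in the flow
        "transcript": transcript,                    # what has been said so far
    })
```

A real integration would map these fields onto whatever the contact-centre platform expects; the sketch only shows what context is worth preserving.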
The Best Support Is Still No Support
There is a risk that organisations see a customer using a bot successfully as a win because no staff were involved. That may still be a sub-optimal outcome. Every time a customer uses a bot, it probably indicates a problem elsewhere. It takes effort for customers to use the bot, and we believe that “no one wakes up in the morning wanting to use a bot”. That was the idea we discussed in “The Best Service Is No Service” (Jossey-Bass, 2008), and it applies to bot use as much as to other channels.
Every bot interaction is indicative of something customers don’t understand or something that isn’t working well enough. Bot providers should therefore report the problems they are resolving and the queries they process successfully, as well as the examples where they fail. Bot reporting should help a company understand what is and isn’t working for customers. The company can then work to eliminate those root causes, which is the ultimate opportunity for improvement because it eradicates customer effort.
In this paper we hope we have explained some of the pitfalls of bot design and deployment and offered some solutions that balance customer and company needs. We’re very excited about the potential of chatbots and automated speech and think they can play an important role in sales and service. We just want them to work well and offer positive experiences; otherwise customers will be frustrated. As the bots improve, as with all forms of self-service, the complexity of the work left for staff will increase.
That will mean changes to sales and service operating and skill models so that they can cope, but let’s get the bots working first!
We hope some of these ideas will help and are happy to explore them further.
For more information email us at email@example.com or call 03 9499 3550 or 0438 652 396.