Lifting the Carpet on How the Work is Done
When customer outcomes don’t match internal reporting
Sometimes reporting can tell too good a story. In a range of examples, organisations reported service levels being met and customer turnaround targets being exceeded month after month. “Nothing to see here” appeared to be the message. In one example, over 80% of work was being completed within SLA and error rates were running at 0.2%, so on the surface there appeared to be no problems and no need for improvement. In another situation, every area of a mortgage processing team was delivering to service levels month after month. In a third scenario, an offshore operation reported every task as on track and completed on time. Yet management in all these cases suspected something was wrong.
In these instances, the leaders of the operation had contradictory data. Customer behaviour and some forms of feedback didn’t match the reported stats. Customers often complained that key processes took too long. Third parties in the industry rated one company’s service as one of the worst and claimed service was really slow, despite every service level reporting “green” status. In another business, customers were leaving faster than they were joining, and the complaint feedback and social media reputation suggested some of the processes were the slowest in the industry. Yet all “reported” service level measures were being met. There was sufficient concern in all these examples that management decided to “lift the carpet”. They brought in an independent team to seek an explanation for these divergent stats: hitting targets on the one hand but frustrating customers on the other.
This amalgam of case studies looks at how organisations got to the heart of these issues and then challenged their internal perspective on how well their business worked. It focuses on three techniques that led to improved customer outcomes:
a) Lifting the carpet - using different data and observation techniques to understand the reality
b) Viewing the customer experience from the outside in
c) Questioning structure and process to rethink how work is delivered
1. Carpet Lifting Mechanisms
In situations where management data seems to say everything is fine, lifting the carpet means getting into the data differently: unpacking averages and looking at everything “bottom up”. Rather than relying on simplified averages, the analysis team went to the source data to understand the detail.
In all these examples, the averages showed service levels were being met; the source data showed what that actually meant. In one instance, service levels were measured on hundreds of different work items. Each one was a sub-process, and all of them were within their time standard. However, many of these sub-processes were needed end to end to complete something meaningful for customers. Each task was being done “individually” in the prescribed time, but the customer was measuring the process on the collective time of ten or fifteen different tasks “added together”. Through the customer’s eyes, the process was taking far too long, but no one was measuring that end-to-end time, just the “touch” that they owned. The work would pass from queue to queue, and sometimes back to a centralised management team to re-allocate to yet another queue, with each step hitting its time standard.
The same was true in the “balkanised” banking model where each team measured success on their part of the process when the customer only cared about the end-to-end time across the whole process. The combined or “end-to-end time” wasn’t reported anywhere in the business, which was strange as it was all the customer cared about. These examples illustrate that the mismatch came from a focus on internal measures and time standards that lost sight of the customer.
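The arithmetic behind this mismatch is simple enough to sketch. The following illustration is hypothetical: the sub-task names, durations and SLAs are invented, not drawn from any of the cases above. It shows how every sub-task can be “green” while the end-to-end journey, the only number the customer cares about, blows out.

```python
# Illustrative sketch: each sub-task meets its own SLA, yet the
# customer's end-to-end wait is the sum of all of them.
# All task names, times, and SLAs below are hypothetical.

tasks = [
    # (sub-task, days taken, SLA in days)
    ("index document",   1, 2),
    ("verify identity",  2, 2),
    ("allocate to team", 1, 1),
    ("assess request",   3, 3),
    ("quality check",    1, 2),
    ("issue letter",     2, 2),
]

# Every individual task within its time standard -> all "green"
all_green = all(taken <= sla for _, taken, sla in tasks)

# The measure no one was reporting: the customer's total wait
end_to_end = sum(taken for _, taken, _ in tasks)

print(f"every sub-task within SLA: {all_green}")        # True
print(f"end-to-end days the customer waits: {end_to_end}")  # 10
```

Six tasks, each comfortably inside its own standard, still add up to a ten-day wait; reporting only the per-task figures guarantees the dashboard stays green.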
To understand the process in even more depth, the analysis team used another bottom-up technique. They stapled themselves to a request from a customer and followed it through the maze of sub-processes. This showed how many steps a request had to go through, how often each request waited, and how many people were involved in managing this complexity. In one example, the team discovered that 20% of the people in the process were there just to “allocate the work” to the right place. Complexity had created more work and generated internal “demand” just to manage it.
Not only was the complexity of the “work design” slowing the process down, it was also taking more effort to manage. Some of the work “breakdown” existed to enable an offshore team to complete parts of the process. In theory that was cheaper, but with work criss-crossing the globe to enable this model, and a 20% overhead to manage it, the net impact wasn’t cost effective.
Lifting the carpet in all these examples exposed the need to rethink how the work was managed internally in order to fix it for the customer. The service levels were being achieved but in such complex and convoluted process designs, that the processes took longer and needed more management. Tasks had been added just to manage the sub tasks.
2. Outside in process design
Following the work around and across processes also uncovered other startling issues. In one organisation the processing team was a very quiet place; sitting with the team, you could hear a pin drop. That might have been interpreted as “a dedicated team, getting their heads down”. However, the stats said that this team rejected over 30% of customer requests because they were incomplete. The silence was therefore a problem: the procedures said the team should call the customer to fix up errors, yet no one was taking time out to make those calls. They just sent the work back with a letter telling the customer to fix it. That was less effort for them and had the added benefit of earning a “credit” on the productivity system; calling the customer (as per the procedure) took longer and made their productivity look worse. Customer errors therefore caused extensive delays and often triggered inbound phone calls asking, “Where is my X?”, while rejection letters sat in the mail. No one was really monitoring the key issue: the customer incompletion rate.
In a different organisation, a processing team also rejected items at the first opportunity. An email would be sent as soon as an error was found, and the task would be closed (within service level, of course). However, when an item was rejected, the processor didn’t check the remainder of the customer request for other errors and issues. This meant that some customers experienced multiple rejections and were frustrated as their request bounced back again and again. We have seen this “trick” of closing incomplete cases back to the customer in many places. It is hard to pick up, as each iteration of the frustrated customer’s request is treated as a new event, so all the reporting looks great.
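The cost of that “trick” can be sketched with a few lines of arithmetic. The error list below is hypothetical, but it shows why rejecting at the first error multiplies round trips: each pass finds one issue, closes the task, and waits for the customer to resubmit, while a full check surfaces every issue in a single round trip.

```python
# Sketch of why "reject at the first error" multiplies round trips.
# The errors in this application are hypothetical.

errors = ["missing signature", "no copy of ID", "date out of range"]

# Reject-on-first-error: each pass finds one error, sends the whole
# request back, and the customer resubmits - one round trip per error.
# Each resubmission is counted as a "new" task, so reporting stays green.
round_trips_reject_first = len(errors)

# Full check: one pass lists every issue, so one round trip fixes all.
round_trips_full_check = 1

print(round_trips_reject_first, round_trips_full_check)  # 3 1
```

Three round trips instead of one means three times the processing touches for the company and weeks of extra elapsed time for the customer, none of it visible in per-task service levels.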
Both these situations highlighted quick win opportunities in following a process that made sense for the customer. In the first instance, customers were called to try and fix any issues in near real time rather than via snail mail. In the second situation, when errors were found, the entire application was checked to ensure all issues were “caught” before it went back to the customer. Both these changes were obvious fixes for the customer but also saved effort for the company in touching the item less and getting to resolution faster. They produced considerable productivity savings and a better outcome for the customer.
3. Questioning everything in structure and process
Two of the examples above had ended up with overly complex structures. The work had been “decomposed” into tasks to create specialisation and, in some cases, to use offshore teams. For example, when one utility offshored its work, it set up 80 different processing teams, each of which was trained in one sub-process. No one was trained in the end-to-end process of billing or payment, and as a result the process became fractured, often fell over, and always took longer as work items bounced around the queues between the teams. Exceptions also took ages to fix, as no single team understood how to resolve issues that fell outside its responsibility. Teams didn’t even know where to send a problem, so errors got stuck or lost, resulting in repeat requests, duplicated work items and general chaos.
Design teams have been successful in “unpicking” this form of structural process decomposition by getting back to basics. Two techniques work well. The first is reconnecting the process for the customer: assessing the benefits of some processes being owned “end to end”. This allows staff to take ownership of outcomes and gain satisfaction from seeing each customer request through to completion.
That doesn’t mean complex processes never benefit from some form of breakdown and sub-specialisation. A process like an insurance claim may need specialist skills in claim assessment or item valuation. However, long-running claims may also benefit from some form of claim management that takes responsibility for joining up the specialised departments and owns the process for the customer.
A second technique that can deliver benefit is questioning each hand-off. We call it the “three-year-old child test”: toddlers of that age constantly ask “why?”, and this technique does the same. Questioning each hand-off forces an evaluation of its overall benefit. In the case of one company whose processes had been broken down into thousands of sub-tasks, there were clear benefits in “stitching” the process back together. This removed allocation steps, took out the “re-tooling” cost at each sub-task (reading the information and getting familiar with the case), and provided a faster process for the customer. The same technique was used to question whether there was a net financial benefit from the offshored tasks within the process. The tasks were performed at a lower cost offshore, but once the cost of hand-offs and the overhead of managing the work offshore were counted, there was no net financial benefit. Applying these techniques joined tasks back into more logical ways of doing the work, which reduced management overhead and, most importantly, sped the process up end to end for the customer.
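The offshoring question above can be framed as a short per-item cost sketch. Every figure here is invented for illustration, not taken from the case: it simply shows how a lower unit cost offshore can be cancelled out once hand-off costs and the management overhead are counted.

```python
# Hypothetical per-item cost sketch of the offshoring question.
# All figures below are invented for illustration only.

onshore_cost = 10.00        # doing the whole task onshore
offshore_unit_cost = 6.00   # lower labour cost offshore
handoff_cost = 2.50         # packaging, routing and queueing the item
management_overhead = 0.20  # ~20% extra effort managing/allocating work

# All-in offshore cost: task plus hand-off, inflated by the overhead
offshore_all_in = (offshore_unit_cost + handoff_cost) * (1 + management_overhead)
net_saving = onshore_cost - offshore_all_in

print(f"all-in offshore cost per item: {offshore_all_in:.2f}")  # 10.20
print(f"net saving per item: {net_saving:.2f}")                 # -0.20
```

On these invented numbers, the “40% cheaper” offshore task ends up costing slightly more per item than doing the work onshore, which is the kind of result the three-year-old child test is designed to surface.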
In all these examples, none of the improvements required new technology or new systems. They did involve using the existing systems better. If anything, some of the changes wound back things the systems could do (for example, complex routing) to focus on a design that made sense for the business and the customer. The result was less complexity, reported in more meaningful ways, that was easier to manage. The existing systems enabled all of that; they simply had to be reconfigured to work differently.
Summary
In this paper, we hope we have explained the benefit of “lifting the carpet” to evaluate processes and work management in new ways. If you have these kinds of data mismatches, where process metrics are all on track but customers aren’t happy, perhaps you need to apply similar techniques. There are some amazing tools and tricks available, and far more detail than we could include here. If you would like to discuss this further, please feel free to get in touch at info@limebridge.com.au or call 03 9499 3550 or 0438 652 396.