Modern CX organizations need to rethink agent efficiency
When talking to Contact Center leaders, we constantly hear the refrain, “How do I bring down handle times?” For as long as contact centers have existed, the standard measure of agent efficiency has been how quickly an agent can handle a customer’s inquiry and move on to the next customer. In the phone world, the simplest way to improve a phone agent’s output is to shorten their call time. Thus handle time (or AHT) has long been our key metric.
With the onset of digital channels, and increasing adoption of these channels by both customers and companies, handle time is no longer an adequate measure of agent efficiency.
As a Success Manager at ASAPP, I work with our customers to drive value and optimize contact center operations for digital and voice programs. A key part of our effort is identifying the right metrics to measure program success. We are continuously evolving what we measure and the recommendations we make, based both on changes in the way companies interact with customers (over multiple channels and sometimes asynchronously, for example) and on the specific goals of the individual company.
In order to best evaluate program performance, particularly agent efficiency, we need to look beyond AHT for three key reasons:
- Digital automation is changing the average conversation that gets to a live agent. The simplest customer inquiries can be solved by automation, and even issues that used to be too complex for automation can often be solved through integrations with company systems. For example, an internet service customer can check if they are in an outage, troubleshoot their modem, send diagnostic information, AND schedule a service appointment, without ever speaking to a live agent. This expansion of automation means only the most complex issues reach agents, driving up the average time it takes to handle those inquiries.
- Digital agents have a more expansive set of tools than ever before. It’s not just about being able to handle more than one chat at a time: digital agents can rely on automation and augmentation to maximize the number of customers they handle at once.
- Voice and digital conversations just aren’t the same, and we need a way to benchmark them at a programmatic level. The handle time of a voice call is all of the time the agent and the customer spend on that one call, while the handle time of a digital conversation is all of the time the customer spends in that conversation. Digital agents are likely handling multiple conversations concurrently, and a digital conversation may stop and start over multiple sessions as customers resolve their inquiry asynchronously.
Customer Experience vs. Agent Efficiency
This isn’t to say that AHT is irrelevant. It is still very relevant, but not as a measure of agent performance. AHT is most relevant to the customer experience: companies must still be concerned with how long it takes a customer to resolve their issue. What I have started pushing for is a shift in perspective: AHT is a KPI for the customer experience, but when evaluating front-line agent efficiency and output, we have better measures.
From AHT to Throughput
In order to find the best opportunities to maximize workforce efficiency, modern CX teams have shifted focus from AHT to Throughput. Throughput can be measured a number of different ways, but at its simplest it is a measure of an agent’s output in a given time period, normally an hour.
Throughput measures an agent’s output in a way that works for both voice and digital engagement, including asynchronous interactions.
In most cases, organizations measure this as resolutions (or assignments) per utilized/logged-in hour. This measure translates easily into a simplified cost-per-chat metric and, overall, gives a holistic picture of how much one front-line team member can do. Throughput also helps avoid the biases of handle time, because it can be based on the total time an agent is working, potentially highlighting absenteeism or other addressable gaps.
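As a rough sketch, the core calculation is simple division; the function and field names below are illustrative, not ASAPP's actual reporting schema:

```python
def throughput(resolutions: int, logged_in_hours: float) -> float:
    """Agent output per utilized/logged-in hour."""
    return resolutions / logged_in_hours

def cost_per_resolution(hourly_labor_cost: float, throughput_per_hour: float) -> float:
    """Translate throughput into a simplified cost-per-chat figure."""
    return hourly_labor_cost / throughput_per_hour

# Example (hypothetical figures): 24 resolutions over a 6-hour logged-in shift,
# at a $20/hour fully loaded labor cost
tp = throughput(24, 6.0)              # 4.0 resolutions per hour
cost = cost_per_resolution(20.0, tp)  # $5.00 per resolution
```

Because the denominator is total logged-in time rather than per-conversation handle time, the same formula works for concurrent and asynchronous digital work.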
Take the below example of how we are seeing organizations start to shift their thinking:
Agent A: On the clock for 5 hours, handles 20 customer issues, Average Handle Time = 20 minutes.
Agent B: On the clock for 5 hours, handles 25 customer issues, Average Handle Time = 20 minutes.
Assuming both agents are working the same shift, we would obviously prefer Agent B’s performance over Agent A’s. Agent B handles one more assignment per hour than Agent A, while the customer experiences the same handle time. Combining this analysis with Occupancy and Customer Satisfaction enables an organization to get a complete picture of an agent’s body of work. Throughput becomes the measure to assess efficiency: agents who can handle more customers per hour, while staying within CSAT and business (occupancy) targets, are the top performers. We can use their behaviors as a template to better train and support the rest of the agent base.
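The comparison above works out as follows (figures taken directly from the example; variable names are illustrative):

```python
# Figures from the Agent A / Agent B example
hours = 5
agent_a_issues = 20
agent_b_issues = 25
aht_minutes = 20  # identical customer-facing handle time for both agents

throughput_a = agent_a_issues / hours  # 4.0 issues per hour
throughput_b = agent_b_issues / hours  # 5.0 issues per hour

# Same AHT for the customer, but Agent B resolves one more issue per hour
assert throughput_b - throughput_a == 1.0
```

Note that AHT alone would rank the two agents identically; only throughput surfaces the difference in output.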
Where do we go from here?
Technology advancements are continuing to push the contact center industry forward. Automation is becoming easier to build and integrate with company systems. Only the most complex customer issues will need live support. Artificial Intelligence is getting smarter and more helpful in supporting live agent conversations. The lines between live agent and automation blur as agents are supported by recommendations, automated flows, and more. Throughput is a metric that can scale with this changing landscape better than any other measure of agent efficiency.
Efficiency at 30,000ft—Organizational Throughput
Even so, some forward-thinking organizations are looking beyond agent throughput to a broader program efficiency view. The question we are starting to hear is “How efficient is my Customer Experience program?” Companies are leaning into throughput, but viewing it through a programmatic lens. When you combine both the automated and live aspects of your program, how effective is your investment? Program effectiveness is measured by looking at all automated AND agent-handled customers per staffed agent.
Organizational Throughput helps leaders view the program as a whole, tracking improvements or degradations in automation and live support on the same playing field. As the worlds of automation and live support become more intertwined, it only makes sense for organizations to start evaluating the performance of these two once-separate entities together.