The platform we’ve built is centered around making agents highly effective and efficient while still empowering them to elevate the customer experience. All too often we see companies making painful tradeoffs between efficiency and quality. One of the most common ways this happens is with digital/messaging interactions: the number of conversations agents handle at a time (concurrency) gets increased, but the agents aren’t given tools to handle those additional conversations.
In an effort to increase agent output, a relatively ‘easy’ lever to pull is raising an agent’s max concurrency from 2-3 chats to 5+ concurrent chats. In practice, however, making such a drastic change without the right safeguards in place can be counterproductive. While overall agent productivity may be higher, it often comes at the expense of customer satisfaction and agent burnout, both of which can lead to churn over time.
This is largely explained by the volatility of handling concurrent customers. There are certainly moments when handling 5+ chats concurrently is manageable, even comfortable, for the agent (e.g. because several customers are idle or slow to respond). At other moments, all 5+ customers may demand attention for high-complexity concerns at exactly the same time. These spikes in demand overwhelm the agent and inevitably leave customers frustrated by slower responses and resolution.
The ASAPP approach to increasing concurrency addresses volatility in several ways.
Partial automation to minimize agent effort
The ASAPP Customer Experience Performance (CXP) platform blunts the burden of demand spikes that can occur at higher concurrencies by layering in partial automation. Agents can launch auto-pilot functionality at numerous points in the conversation, engaging the system to manage repetitive tasks—such as updating a customer’s billing address and scheduling a technician visit—for the agent.
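To make the hand-off concrete, here is a minimal sketch of how an agent might delegate a repetitive subtask to an auto-pilot flow and take the conversation back when it finishes. The `Conversation` class, `Owner` states, and flow name are illustrative assumptions, not ASAPP's actual API.

```python
from enum import Enum, auto

class Owner(Enum):
    AGENT = auto()
    AUTOPILOT = auto()

class Conversation:
    def __init__(self, chat_id: str):
        self.chat_id = chat_id
        self.owner = Owner.AGENT
        self.active_flow = None

    def launch_autopilot(self, flow: str) -> None:
        """Agent delegates a scripted task, e.g. 'update_billing_address'."""
        self.owner = Owner.AUTOPILOT
        self.active_flow = flow

    def flow_finished(self) -> None:
        """Automation completes (or escalates) and control returns to the agent."""
        self.owner = Owner.AGENT
        self.active_flow = None

    def needs_agent_attention(self) -> bool:
        # While auto-pilot owns the chat, it costs the agent ~zero attention.
        return self.owner is Owner.AGENT
```

A chat whose `needs_agent_attention()` is false is effectively free capacity, which is what lets the system keep one or two of an agent's slots quiet at any given time.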
With a growing number of partial automation opportunities, the system can balance the agent’s workload by ensuring that at any given time, at least one or two of the agent’s assigned chats require little to no attention. In a recent case study, the introduction of a single partial automation use case sped up agents’ responses on concurrent chats by more than 20 seconds.
Considering factors like agent experience, the complexity and urgency of the issues they’re already handling, and customer responsiveness, the CXP platform can dynamically set concurrency levels.
Real-time ranking to help focus the agent
Taking into account numerous factors, including customer wait time, sentiment, issue severity, and lifetime value, the platform ranks the urgency of each task on the agent’s plate. This alleviates the burden of deciding what to focus on next when agents are juggling a higher number of concurrent conversations.
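As a sketch of the idea, a ranking like this can be expressed as a weighted score over those signals. The specific weights, scales, and field names below are illustrative assumptions, not ASAPP's production model.

```python
from dataclasses import dataclass

@dataclass
class Chat:
    chat_id: str
    wait_seconds: float    # time since the customer's last unanswered message
    sentiment: float       # -1.0 (upset) .. 1.0 (happy)
    severity: float        # 0.0 .. 1.0 issue severity
    lifetime_value: float  # normalized 0.0 .. 1.0

def urgency(chat: Chat) -> float:
    """Combine signals into one urgency score (higher = respond sooner)."""
    return (
        0.4 * min(chat.wait_seconds / 120.0, 1.0)     # waiting caps out at 2 min
        + 0.3 * (1.0 - (chat.sentiment + 1.0) / 2.0)  # unhappier -> more urgent
        + 0.2 * chat.severity
        + 0.1 * chat.lifetime_value
    )

def rank_chats(chats: list[Chat]) -> list[Chat]:
    """Most urgent chat first, so the agent always knows where to look next."""
    return sorted(chats, key=urgency, reverse=True)
```

In a real system the weights would be learned or tuned rather than hand-picked, but the shape is the same: many signals collapse into one ordering the agent can act on.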
Dynamic complexity calculator to balance agent workload
We reject the idea of a fixed ‘max slot’ number per agent. Instead, we’re building a more dynamic system that doesn’t treat all chats as equal occupancy. It constantly evaluates how much of an agent’s attention each chat requires and dynamically adjusts the concurrency level for that agent. This helps ensure that customers are well attended while the agent is not overworked.
At certain points, five chats might feel overwhelming, while at others, they can feel quite manageable. Many factors play a role, including the customer’s intent, the complexity of that intent, the agent’s experience, the customer’s sentiment, the types of tools required to resolve the issue, and how close the issue is to resolution. These all get fed into a real-time occupancy model which dynamically manages the appropriate level of concurrency for each agent at any given time. This flexibility enables companies to drive efficiency in a way that keeps both customers and agents much happier.
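One way to picture an occupancy model like this: each chat contributes a fractional load instead of a fixed slot of 1, and a new chat is offered only when the agent has headroom. The factor names, weights, and thresholds below are assumptions for illustration only.

```python
def chat_occupancy(intent_complexity: float, customer_active: bool,
                   near_resolution: bool, automated: bool) -> float:
    """Estimate how much of one agent's attention a chat needs (0..1)."""
    if automated:
        return 0.1          # auto-pilot is handling it
    load = 0.3 + 0.5 * intent_complexity
    if not customer_active:
        load *= 0.5         # idle customers need less attention right now
    if near_resolution:
        load *= 0.7         # wrap-up is lighter work than diagnosis
    return min(load, 1.0)

def can_accept_new_chat(occupancies: list[float], capacity: float = 1.0,
                        headroom: float = 0.25) -> bool:
    """Offer another chat only when the agent has spare attention."""
    return capacity - sum(occupancies) >= headroom
```

Under this framing, "max concurrency" stops being a constant: an agent with two complex, active chats is full, while an agent with four automated or idle ones still has room.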
While our team takes an experimental, research-driven approach by testing new features frequently, we are uncompromising in our effort to preserve the highest quality interaction for the customer and agent. In our experience, the only way to maintain this quality while increasing agent throughput is with the help of AI-driven automation and adaptive UX features.
As the product manager for the agent experience on the ASAPP platform, I get many inputs to my team’s roadmap. In shaping that roadmap as a Product, Design, and Research team, we try to balance cutting-edge experimentation and data-driven advancements with ideas that come directly from talking to and observing users. We find enormous value in doing what we call side-by-sides with agents.
Agents are power users—they are adept with all the ins-and-outs of the product, which makes them incredible sources of insight into potential areas for growth. On a recent site visit we sat side by side with agents to get feedback on recent product updates and observe ongoing workday pain points. While we were there our team noticed an interesting behavior in how agents were composing messages.
Quick typing led to many typos—which some agents took the time to correct while others did not.
Whenever agents chose to type a message freehand, they would try to type quickly, sometimes on old, sticky keyboards, which led to lots of typos. After a typo, different agents reacted differently. Some would ignore the typos and send the message as is, favoring a quick response over grammatical correctness. Others would finish typing their full response and then spend time right-clicking their typos, which Chrome had underlined in red. Many agents would notice typos midway through typing, stop, and backspace to manually correct their errors.
My team saw an opportunity to improve quality and efficiency. We’ve all had the experience of trying to get a thought out, mistyping, and feeling interrupted by the consideration of whether to stop and correct it, or just keep going and come back later. For agents, this is happening all day long as they respond to customers, slowing down their response time and distracting from the content of the message they’re composing.
Looking deeper into the data around misspellings, the team observed that agents tend to make the same mistakes over and over again. By focusing on just the couple thousand most common typos, we could address the vast majority of typo occurrences. Instead of making agents manually correct each typo by right-clicking in Chrome or retyping, why not just correct it as they go?
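The core of such a feature can be sketched as a simple dictionary lookup against the curated typo list, applied as each word is completed. The entries and function names here are illustrative, not the actual production list or code.

```python
# A tiny stand-in for the couple thousand most common typos observed in the data.
COMMON_TYPOS = {
    "teh": "the",
    "recieve": "receive",
    "adress": "address",
    "thankyou": "thank you",
}

def autocorrect_word(word: str) -> str:
    """Replace a just-completed word if it is a known common typo."""
    fixed = COMMON_TYPOS.get(word.lower())
    if fixed is None:
        return word
    # Preserve leading capitalization ("Teh" -> "The").
    return fixed.capitalize() if word[0].isupper() else fixed

def autocorrect_message(text: str) -> str:
    """Apply corrections word by word, e.g. on each space keystroke."""
    return " ".join(autocorrect_word(w) for w in text.split(" "))
```

A production version would also need to handle punctuation, contractions, and an easy undo, since correcting against the agent's intent is exactly the failure mode to avoid.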
As we built our autocorrect feature, we wanted to make sure not to correct over-zealously. We all hate it when the iPhone corrects a word we meant to type right before we send a message. The team set a high bar: we would only consider the feature successful if agents undid less than 1% of the typos we corrected. When we released this capability in an A/B test, the results were staggering. Not only was the undo rate far below 1%, but the impact on response time and overall handle time was substantial. This feature alone cut average handle time by as much as 30 seconds, not to mention raising the quality bar of the responses agents were sending to customers.
Sometimes the biggest wins come from small, simple changes. Making sure our team takes the time to sit with agents often, side by side, ensures we keep a close pulse on which changes will really make an impact, however simple.