Generating New Customer Intelligence
Contact centers are goldmines of market information – from customer issues and their wants and needs, to how you rate against your competitors, and more. Customer interactions contain valuable information for improving current products and can provide early warning of potential issues or emerging competition.
Despite all that value, the information has been incredibly difficult to access: deciphering phone audio or messaging transcripts is an arduous task. Text analysis has provided some help, but more often than not the best available option has been to ask the agent or customer on the call to fill out a survey capturing the data we care about.
Agents can be extremely effective at filling out these surveys, but at a cost: adding questions is expensive, and you only capture data going forward – analyzing historical trends remains an onerous task. Requesting feedback from customers is harder still; sampling bias becomes an issue, among other obstacles. Leveraging quality management data for insights quickly runs into sparsity, making proactive responses too slow.
At ASAPP, we’ve been using Large Language Models (LLMs) to solve this problem for years as part of our Structured AutoSummary product. LLMs are great at understanding the meaning of text, so we can use them to standardize how each interaction is recorded: we can represent a conversation as a free-text summary, and we can pull structured data out of it.
Newer LLMs can also perform a facsimile of reasoning. GPT-4 and other models can be great at answering questions that require combining pieces of information across a call transcript. That extends the number of questions we can answer with high confidence – and the amount of structured data we can extract from conversations.
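To make the idea concrete, here is a minimal sketch of turning a raw transcript into a fixed set of structured fields with an LLM. The prompt, field names, and the `call_llm` helper are hypothetical and purely illustrative – they are not how Structured AutoSummary is implemented.

```python
import json

def call_llm(prompt: str) -> str:
    """Hypothetical helper: send a prompt to an LLM and return its text reply."""
    raise NotImplementedError("wire this up to your LLM provider of choice")

def extract_structured_data(transcript: str) -> dict:
    # Ask for a fixed schema so every conversation yields the same columns.
    prompt = (
        "Read the contact center transcript below and answer in JSON with the keys:\n"
        '  "reason_for_contact": short phrase,\n'
        '  "competitor_mentioned": competitor name or null,\n'
        '  "issue_resolved": true or false.\n'
        "Use only information stated in the transcript.\n\n"
        f"Transcript:\n{transcript}"
    )
    return json.loads(call_llm(prompt))
```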
Structured data remains essential. Although LLMs can be very good at analyzing a single conversation, it takes a different approach to analyze hundreds or millions of customer interactions. Traditional analytics approaches – BI tools, Excel, ML models, and so on – are the best way to analyze data at that scale, identify patterns, and understand trends. Now we can expose customer interaction data in a format those analytical tools understand.
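Once every conversation has been reduced to the same set of fields, ordinary tooling takes over. A hypothetical example with pandas (the column names are illustrative, carried over from the sketch above):

```python
import pandas as pd

# One row per conversation, produced by an extraction step like the one sketched above.
rows = [
    {"reason_for_contact": "billing dispute", "competitor_mentioned": None, "issue_resolved": True},
    {"reason_for_contact": "cancel service", "competitor_mentioned": "AcmeTel", "issue_resolved": False},
    {"reason_for_contact": "billing dispute", "competitor_mentioned": None, "issue_resolved": False},
]
df = pd.DataFrame(rows)

# Standard BI-style questions become one-liners.
print(df["reason_for_contact"].value_counts())                     # what are customers contacting us about?
print(df.groupby("reason_for_contact")["issue_resolved"].mean())   # resolution rate by topic
print(df["competitor_mentioned"].dropna().value_counts())          # which competitors come up?
```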
Certainly, there are complications in relying on AI to convert unstructured conversations into a usable structured format. At ASAPP, we’ve devoted substantial effort to managing hallucinations and ensuring reliable data collection, building in dedicated feedback loops and combining multiple models that each tackle a different aspect of the hallucination problem.
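One common pattern – shown here only as an illustration, not as ASAPP’s implementation – is a verification pass that checks each extracted value against the transcript and flags anything the model cannot ground. It reuses the hypothetical `call_llm` helper from the earlier sketch:

```python
def verify_fields(transcript: str, fields: dict) -> dict:
    """Keep only fields a second model confirms are supported by the transcript (illustrative)."""
    verified = {}
    for name, value in fields.items():
        prompt = (
            f'Does the transcript below state that "{name}" is "{value}"? '
            "Answer only yes or no.\n\n"
            f"Transcript:\n{transcript}"
        )
        answer = call_llm(prompt).strip().lower()
        if answer.startswith("yes"):
            verified[name] = value   # keep grounded values
        else:
            verified[name] = None    # flag unsupported values for review via the feedback loop
    return verified
```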
Not surprisingly, the quality of the data matters too. We’ve benchmarked the quality of our AutoSummary outputs against ASR accuracy (1 − WER), and we see that highly accurate transcripts (where our own generative end-to-end ASR system, AutoTranscribe, sits in the mix) produce materially higher-quality data on downstream tasks like extracting structured data from conversations.
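For reference, word error rate compares an ASR transcript against a reference text, and 1 − WER is the accuracy figure used above. A minimal implementation of the standard word-level edit-distance definition:

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + insertions + deletions) / reference word count."""
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming edit distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    return d[len(ref)][len(hyp)] / len(ref)

# ASR accuracy as used above:
# accuracy = 1 - word_error_rate(reference_transcript, asr_output)
```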
Turning unstructured audio and text into structured data unlocks the wealth of information stored in contact center records. Using existing analysis tools and approaches, contact center data can be made available to other departments, like Research and Development, Marketing, and Finance, in real time, without purchasing additional IT capabilities for analysis and visualization.
For agents, this provides a massive boost in efficiency: quick answers to business questions and more time to help customers. For customers, it’s even more dramatic. They get answers faster and have a much smoother experience overall – all without survey bias.
LLMs are fantastic tools for language: writing poetry, essays, and code. They are also great at turning natural language into structured data, blurring or eliminating the boundary between “structured data” and “unstructured data.” Leveraging that data, and making it available to all the existing business processes, is where we’re heading with Structured AutoSummary.