Stefani Barbero

Stefani Barbero is a marketing content writer at ASAPP. She has spent years writing about technical topics, often for a non-technical audience. Prior to joining ASAPP, she brought her content creation skills to a wide range of roles, from marketing to training and user documentation.
Raising the bar: AI agent safety and security in financial services
The stakes are higher than ever—here’s what to demand from your AI agent
When an AI agent makes a mistake in scheduling a dental appointment, explaining a retailer’s return policies, or upgrading a customer to a premium mobile service plan, it’s frustrating for the customer and the business. But the damage is typically small.
For banks, investment firms, and other financial institutions, the stakes are higher. It all comes down to trust, and losing a customer’s trust is just one of the risks of using an AI agent that lacks sufficient safety and security measures. There could also be legal and compliance ramifications, not to mention a financial hit.
When the stakes are higher, the standard for AI agent safety and security must be higher, too. What does a higher standard for safety and security with an AI agent look like? It’s a critical question you need to answer before launching a generative AI agent to talk to your customers.
Getting beyond AI safety basics
Since the first AI agents for customer service hit the market, many solution providers have steadily improved their safety. They’ve added guardrails to keep the agent on task and within scope, mechanisms to prevent jailbreaking, and QA models to evaluate responses before they reach customers.
That’s great, but it’s just the bare minimum of what you should expect from a credible vendor with a reliable AI agent solution. You’ll need to raise the bar for AI safety. The level of trust required for an AI agent in the financial services industry is impossible to achieve with typical safety mechanisms alone. Your team must also be empowered to monitor performance, measure impact, and fine-tune your AI agent quickly.
What you really need is a vendor who also gives you visibility into, oversight of, and control over your AI agent.
Visibility into your AI agent’s performance
You can’t manage what you can’t measure. And you can’t measure what you can’t even see. So solutions that don’t provide sufficient visibility into how the AI agent is performing fall short when it comes to safety and security. Look for a solution with these critical capabilities:
A clear record of the AI’s actions and reasoning
A generative AI agent that’s committed to safety and security should create an audit trail for every conversation. This record should document every utterance and action it performed, as well as its reasoning throughout the interaction. The AI’s reasoning is especially important for understanding not just what it did, but why. That’s the kind of information that makes or breaks a root cause analysis when you’re investigating inconsistencies or performance issues.
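What such a record contains varies by vendor, but a minimal sketch of an audit entry, with purely hypothetical field names, might look like this:

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class AuditEntry:
    """One step in a conversation audit trail (hypothetical schema)."""
    conversation_id: str
    step: int        # position within the conversation
    actor: str       # "ai_agent", "customer", or "human_agent"
    utterance: str   # what was said or done
    reasoning: str   # why the agent chose this action
    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

def audit_trail(entries):
    """Return the chronological, serializable record used for root cause analysis."""
    return [asdict(e) for e in sorted(entries, key=lambda e: e.step)]

entries = [
    AuditEntry("conv-123", 2, "ai_agent",
               "I can waive that fee for you.",
               "Policy allows one courtesy fee waiver per year; this customer qualifies."),
    AuditEntry("conv-123", 1, "customer",
               "Why was I charged a maintenance fee?", "n/a"),
]
trail = audit_trail(entries)
```

The key point is the reasoning field: without it, an investigator can see what the agent did but not why it did it.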
Robust performance reporting and analytics
To generate useful insights into your AI agent’s performance, you’ll need to see the big picture, and dig into the details. So, you’ll need a solution with robust performance dashboards and automated reporting to identify patterns in your metrics, prioritize improvements, and track quality over time. To extend your visibility, you’ll also need the ability to extract custom data feeds for additional analysis.
Real-time monitoring of every customer conversation
Analyzing the previous week’s metrics can help you spot performance trends. But it’s even better to identify issues in real time—before they become bigger problems that affect your customers and your business. Look for an AI agent that includes automatic monitoring of every conversation to flag suspected inconsistencies for your review. Immediate alerts for issues with high-impact potential will equip your team to respond quickly and stop a brewing problem before it gets any bigger.
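As a rough illustration of the idea (not any vendor’s actual implementation), such a monitor might score each finished conversation against a set of checks and alert only when the potential impact is high:

```python
# Toy real-time monitor: score each conversation for inconsistencies,
# flag anything imperfect, and alert immediately on high-impact scores.
ALERT_THRESHOLD = 0.8  # hypothetical cutoff for immediate alerts

def score_conversation(conversation: dict) -> float:
    """Toy inconsistency score: fraction of checks that failed (0.0-1.0)."""
    checks = conversation["checks"]  # e.g. policy, accuracy, tone checks
    failed = sum(1 for passed in checks.values() if not passed)
    return failed / len(checks) if checks else 0.0

def monitor(conversation: dict, alerts: list) -> dict:
    score = score_conversation(conversation)
    if score >= ALERT_THRESHOLD:
        alerts.append(conversation["id"])  # immediate alert for the quality team
    return {"id": conversation["id"], "score": score, "flagged": score > 0.0}

alerts = []
result = monitor({"id": "c1", "checks": {"policy": False, "accuracy": False, "tone": True}},
                 alerts)
```

Here a conversation failing two of three checks gets flagged for review, but only conversations above the alert threshold interrupt your team.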
Flexible human oversight and collaboration
Every AI solution vendor acknowledges the importance of keeping a human in the loop. But not every vendor has built a solution that enables true human-AI collaboration for live operations. For many vendors, the active role of a human in the loop ends after the initial testing and optimization. After that, humans are no more than escalation points for when the AI fails.
That’s a missed opportunity for you—and a serious gap in AI safety. Look for a vendor who has designed and built an AI agent with human-AI collaboration in mind. Look for these advanced capabilities:
Collaboration, not just escalation
Some solution providers tout their AI agent’s ability to hand off a call or chat with enough context to help the human agent step in seamlessly. But even a smooth transfer dings your containment rate. And it’s a sign of how much the AI can’t handle safely.
Instead, look for an AI agent that can consult a human for information, guidance, or help performing a task in a system it cannot access, without transferring the customer. This type of AI agent simply asks a human for help when it hits a roadblock, then continues serving the customer once it has what it needs. That expands the use cases you can automate safely.
You should also have the option to require the AI agent to ask a human for approval to perform certain high-stakes actions by policy. You might not want your AI agent to independently make a decision about crediting a customer’s account when there’s a disputed transaction. But if the AI agent can request approval from a human agent, you’ll still get the benefits of automation without sacrificing human judgment.
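The policy gate described above fits in a few lines; the action names and approval logic here are purely illustrative:

```python
# Sketch of a policy gate (hypothetical names): certain high-stakes actions
# must be approved by a human before the AI agent executes them.
REQUIRES_APPROVAL = {"credit_account", "close_account"}

def request_human_approval(action: str, params: dict) -> bool:
    """Stand-in for a real approval queue; here we auto-approve small credits."""
    return action == "credit_account" and params.get("amount", 0) <= 50

def execute(action: str, params: dict) -> str:
    if action in REQUIRES_APPROVAL:
        if not request_human_approval(action, params):
            return "escalated_to_human"
        # approved: fall through and perform the action
    return f"executed:{action}"
```

A low-stakes lookup runs immediately, a small credit goes through after approval, and a large disputed credit lands with a human, so automation and human judgment coexist.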
Fast-tracking the AI’s learning for new use cases
The first weeks after launching a new use case for your AI agent are critical. Even with rigorous testing, you won’t be 100% sure that it will perform well in the wild. You can’t always anticipate everything. This is the perfect time for an elevated state of human oversight.
A solution that allows your team to review and revise every response before it reaches a customer during this post-rollout period provides a crucial safety net. When this human oversight is paired with AI that learns from their feedback, you accelerate this initial optimization period. This type of AI agent quickly learns from your best human agents so you can be confident that it will be both reliable and safe.
A full suite of tools for testing and fine-tuning
Deploying updates you’re not entirely sure you can trust undermines safety and security. And when a change is necessary, every hour of delay creates additional risk. As fast as conditions change in customer service, you’ll need to be able to fine-tune, iterate, and release modifications to your AI agent without delay whenever the need arises. To do that, your contact center will need to be equipped with no-code tooling to avoid the complication of waiting on IT or development resources for every update. And they’ll need the tools to fully test before releasing any modifications.
Testing and simulation
Testing a generative AI agent can be tricky. It’s unscripted and doesn’t follow deterministic paths. The gold standard approach is to run realistic simulations of a wide range of scenarios, from simple to complex. That goes far beyond testing for specific questions or actions.
Look for a solution that includes a suite of testing tools that allow you to define simulations and automate testing of complete scenarios for different intents and customer personas, including API calls, knowledge retrievals, and multi-turn conversations. That’s as close as you can get to live operations—without risk. This kind of testing is a critical component of maintaining safety and security as you add use cases, modify policies, and update task instructions for your AI agent.
Optimization with no-code tooling
Deploying a generative AI agent isn’t a set-it-and-forget-it operation. It requires fine-tuning over time. If you have to depend on your IT or development resources every time you need to modify something, some of your anticipated ROI will evaporate with added costs, lost time, and a degraded customer experience. And if the need for fine-tuning is related to a safety or security issue, you’ll also incur increased risk.
No-code tooling empowers non-technical users in your customer service operation to monitor and optimize performance on their own. That leads to quicker adjustments and safer operations.
Raising the bar on AI safety and security for financial services
The first step in ensuring safety and security with an AI agent is to choose a vendor who’s built a solution with safety and security mechanisms embedded from the ground up, and by design. But your pursuit of safety and security shouldn’t stop there. To ensure safe and secure performance in the real world, your team must be empowered and equipped with the tools to monitor, optimize, and scale your AI agent on your terms and at your pace.
A solution provider who puts you in control every step of the way isn’t just asking you to trust them and their AI agent solution. They’re giving you a good reason to.
How ASAPP elevates precision and trust in AI-automated customer service
For most enterprises, the biggest hurdle in deploying a generative AI agent in the contact center isn’t cost. It’s finding an AI agent you can trust. We hear it over and over as we talk with companies in every industry. Everyone wants to automate customer service as much as possible, but only if they can trust the AI to serve customers safely and represent their brand faithfully.
From the very beginning, this is the kind of sticky problem ASAPP was created to address.
We built GenerativeAgent from the ground up as a true agentic platform for real-world customer service—with multi-turn reasoning, native speech orchestration, and enterprise-grade integrations from day one.
Now, we’ve made GenerativeAgent even smarter and safer with the addition of three new features that:
- Make it safe to test and easy to course-correct
- Let you scale AI with human context and oversight
- Give you the tools to continuously improve performance
With these new features, enterprises can safely scale customer service automation without sacrificing performance, control, or trust.
Better testing for a safer launch and faster iteration
Building trust with an AI agent starts before launch with comprehensive testing to ensure accurate and reliable performance. But generative AI is probabilistic, which means its behavior is not scripted. That makes testing more complex than with a deterministic bot.
The most effective way to test a generative AI agent is with realistic simulated scenarios. Scenario testing goes beyond simple question-and-answer validation with complex real-world use cases involving complete, multi-turn conversations and API calls. This approach ensures that the AI agent provides correct information, adheres to policy, and delivers on-brand communication across divergent conversational paths.
To support scenario testing, we’ve added a full suite of simulation and testing tools to GenerativeAgent.
Mock APIs and configuration branches enable your team to iterate quickly and safely test actions—with no impact on live operations.
The simulation tools allow you to configure a wide range of test scenarios with different customer personas. For each persona, you can define the customer’s goals, the information they know, such as an account number, and even their personality and communication style. A chatty customer who’s confused about your policies is very different from an angry customer who communicates in short, terse sentences. You’ll want to be confident that you can trust GenerativeAgent to provide excellent on-brand service for both types of customers. With the new simulation tools, you can test and validate performance for these and countless other scenarios.
Using this suite of tools, you can:
- De-risk rollouts: Quickly test how GenerativeAgent handles inputs and outputs, mimic backend systems without IT involvement, and validate workflows.
- Optimize before go-live: Simulate real customer interactions with a range of customer personas to validate GenerativeAgent performance in advance and avoid surprises in production.
- Ensure consistency and compliance: Automatically evaluate how well GenerativeAgent handles key workflows like refunds, account changes, or cancellations to identify off-brand responses before they reach customers.
- Iterate safely without impacting live operations: Test, preview, and collaborate on updates in a protected environment to keep service continuity intact while you optimize.
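To make persona-driven simulation concrete, here’s a toy sketch with hypothetical personas and a mocked backend API; real simulation tooling is far richer, but the pattern is the same:

```python
# Illustrative only: persona-driven test scenarios against a mocked backend.
# All names and fields are hypothetical.
def mock_get_account(account_number: str) -> dict:
    """Mock API: returns canned data so tests never touch live systems."""
    return {"account_number": account_number, "status": "active", "balance": 120.50}

personas = [
    {"name": "chatty_confused", "knows_account_number": True,
     "style": "long, meandering messages"},
    {"name": "angry_terse", "knows_account_number": False,
     "style": "short, terse sentences"},
]

def run_scenario(persona: dict) -> dict:
    """Simulate one multi-turn scenario; record which steps the agent needed."""
    steps = ["greet", "identify_intent"]
    if not persona["knows_account_number"]:
        steps.append("lookup_customer")  # extra turn to establish identity
    account = mock_get_account("12345")
    steps.append("resolve")
    return {"persona": persona["name"], "steps": steps,
            "account_status": account["status"]}

results = [run_scenario(p) for p in personas]
```

Because the backend is mocked, both personas can be run repeatedly, before go-live, with zero impact on production systems.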

Human supervision for accelerated training
GenerativeAgent already has an innovative Human-in-the-Loop Agent (HILA™) workflow to enable real-time human-AI collaboration. This feature allows GenerativeAgent to consult a human agent for guidance, information, or approvals, without transferring the customer.
Now, the Human-in-the-Loop Agent feature has a new option, Approver Mode. This option allows human agents to supervise and refine GenerativeAgent responses in real time before they’re delivered to customers—every response in every chat. Before any message from GenerativeAgent reaches a customer, an agent will approve, edit, or replace it.

This ensures safe, on-brand interactions from day one. But the bigger benefit is that it accelerates AI training. GenerativeAgent learns from every revision your agents make to its responses, so it quickly evolves and improves performance.
This approach delivers multiple benefits:
- Human oversight for every response: In the first weeks after launching a new use case, your best agents vet every message to ensure accuracy and relevance.
- Real-time correction of GenerativeAgent output: Human agents instantly fix errors and off-brand phrasing before the customer sees them.
- Continuous learning from top agents: GenerativeAgent improves by observing how expert agents edit or rewrite its responses.
- Boosted agent productivity: By focusing only on editing key moments instead of managing entire conversations, agents handle more sessions with greater efficiency.
- Coaching in action: HILAs serve as real-time mentors, shaping GenerativeAgent to emulate the tone, clarity, and empathy of your best performers.
The new HILA Approver Mode is ideal for piloting new use cases and achieving high-quality automated support with an added layer of safety. When you’ve reached a high degree of confidence in how GenerativeAgent is performing, you can switch out of Approver Mode to let GenerativeAgent work more independently and still keep a human in the loop for guidance and approvals as needed.
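Conceptually, an approver-mode loop pairs each human decision with a feedback log; this toy sketch (hypothetical function names, not ASAPP’s implementation) shows the pattern:

```python
from typing import Optional

# Hypothetical sketch of an approver-mode loop: each draft response is
# approved, edited, or replaced by a human, and revisions are kept as feedback.
feedback_log = []

def review(draft: str, decision: str, revision: Optional[str] = None) -> str:
    """Return the message the customer actually sees; log revisions for training."""
    if decision == "approve":
        return draft
    final = revision if revision is not None else draft
    feedback_log.append({"draft": draft, "final": final})  # learning signal
    return final

msg1 = review("Your refund is processed.", "approve")
msg2 = review("Refund done.", "edit",
              "Your refund has been processed and should arrive in 3-5 business days.")
```

The customer only ever sees the approved or revised message, while every edit becomes a paired example the AI can learn from.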
Real-time monitoring to drive fine-tuning
Performance reporting and analytics provide visibility into an AI agent’s accuracy and effectiveness. But real-time monitoring enables your team to identify and resolve potential issues quickly before they become big problems.
That’s why we’ve introduced Conversation Monitoring, which automatically tracks all of the interactions GenerativeAgent handles to detect and flag inconsistencies. It scores and categorizes each flagged conversation to help your team prioritize reviews. For high-impact inconsistencies, it sends an immediate alert to your quality team.

For each conversation, your team can dig into the details to see all the actions GenerativeAgent took, including input, knowledge articles accessed, reasoning, API calls, and output back to the customer. In-line indicators within the conversation flow mark the inconsistent behavior that caused the interaction to be flagged.
Aggregate reports and search and filtering tools help your team identify trends, patterns, and anomalies more quickly.
These new features dramatically reduce the time your team spends reading conversations to pinpoint and diagnose problems, including knowledge base gaps, configuration issues, and task instructions that need refinement. As a result, they can fine-tune GenerativeAgent more quickly to ensure compliance and quality assurance at scale.
Raising the bar for AI-automated customer service
We created GenerativeAgent to solve one of the toughest issues in enterprise customer service—providing excellent customer experiences at scale with fewer labor hours and lower costs, without sacrificing customer satisfaction. It was specifically designed to resolve complex customer issues safely, with hyper-personalized interactions through voice and chat, even in high-stakes environments. It’s been enterprise-ready from day one.
These new features represent a significant evolution toward an even bigger vision—a true agentic CX ecosystem where AI is self-improving, enterprise-aligned, and centered in your customer service operations.
That vision is only possible if enterprises can trust the AI. And trust requires precision, transparency, and oversight. With robust testing and simulation tools, increased human feedback to speed improvement, and real-time monitoring of all GenerativeAgent conversations, you get control, compliance, and confidence. No safety compromises. No performance trade-offs.
That sets a new standard for trust in AI-automated customer service.
Prioritizing your AI investments: Augment agents or automate customer interactions?
Automation? Or augmentation?
As AI capabilities for customer service proliferate, it gets harder to decide which ones are worth the investment for your contact center. Broadly speaking, AI solutions for the contact center fall into one of two categories – those that support human agents in real time (augmentation) and those that engage directly with customers (automation).
The question of which category to emphasize in your contact center has shifted significantly since AI first emerged as a practical tool for customer service. Early on, the excitement around automation drove chatbot adoption, which soon yielded to disappointment when the bots frequently failed and frustrated customers.
In the past couple of years, the focus has shifted to augmentation, as CX tech providers added a variety of copilot capabilities to their platforms. The results with these agent augmentation offerings have been far more favorable, if modest. Most enterprises that have adopted them report at least small efficiency gains.
And now, the pendulum is swinging back toward automation as autonomous AI agents have rapidly emerged as viable additions to the contact center ecosystem.
That leaves customer service leaders with a tough decision about how to spend their technology budgets. Is it time to switch gears and prioritize automation over agent augmentation with your AI investment dollars? There’s no single choice that’s right for every enterprise. Your decision depends on several factors, including the mix of interaction types your contact center handles, the kinds of customers you serve, and the expected ROI of each investment.
The modest but reliable gains of agent augmentation
Until recently, investing in agent augmentation has been the much easier and far more reliable option. The market is awash in solutions, most of which are tailored to address specific tasks in agent workflows, like retrieving context-driven information from the knowledge base, suggesting greetings or closings for a live chat, or generating a post-interaction summary.
An AI capability that automates one of these tasks is sure to save agents a little time, without disrupting the rest of the workflow. That drives quick efficiency gains, which offset the cost of the technology when aggregated over thousands of agents.
The narrow focus of augmentation capabilities also makes them relatively easy to incorporate into your processes and technology ecosystem. Required integrations are limited. And for the most part, your agents can keep doing what they’ve always done, just faster. With a lower burden on your team, augmentation solutions can be deployed quickly, which means you start realizing value right away.
In the past few years, agent copilots have allowed contact centers to get the benefits of AI in a controlled internal environment. Agents act as a safety net to ensure that any inaccurate or misleading output from the AI doesn’t reach your customers. That’s given customer service leaders time to get comfortable with the growing presence of AI in their operations.
The limitations of augmentation
There are limitations with agent augmentation, though. The benefits of agent copilots are reliable, but the overall impact on the contact center’s operations is typically small. Automated greetings, for example, save agents a few seconds per chat. That creates a small bump in productivity, but does not significantly increase the contact center’s capacity to serve customers.
Given the relative maturity of real-time agent assistance, some tech providers have pushed the boundaries of these capabilities to significantly expand their impact. Instead of automating just standard greetings and closings, these innovators now also automate the complex middle of the conversation. That’s a much bigger time saver. And in addition to consistent free-text summaries of each interaction, some more advanced solutions also capture a range of custom data fields that drive downstream automation. With that in mind, it’s becoming increasingly important to take extra care in choosing agent augmentation solutions. The best-of-breed options deliver much bigger returns.

Even so, there’s a ceiling on those returns. Because augmentation keeps your customer service delivery highly dependent on human agents, its potential productivity gains are constrained by what those humans can do. The simple truth is that humans aren’t easily scaled. That limits both contact center capacity and the returns on your investment.
Overcoming these limitations requires automation.
Why AI agents deliver much bigger returns
The broad disappointment with traditional bots among both customers and enterprises made automation investments less appealing for a long time. Contact centers continue to use simple automation like IVRs and chatbots, but customer service leaders have come to recognize that they can only handle simple interactions. Once they’ve hit the ceiling on the interactions those automation solutions can contain, the need for agent augmentation grows more urgent.
But the possibilities for automation have changed with the rapid growth of generative AI. Today, AI agents are far more capable than deterministic bots. Some can already handle a wide range of customer issues on their own. And they’ll only get better as innovation and development continue.
The return on investment with a fully autonomous AI agent is many times greater than what you can achieve with agent augmentation. The reason is simple – it scales. When inbound volume rises, the agent scales to meet the demand. And if your business expands, your AI agent expands with it. That dramatically increases your contact center capacity without requiring additional headcount.
Already, AI agents are automating a wide range of interactions, from booking travel with complicated itineraries to investigating fraudulent transactions and helping customers upgrade services. The best AI agents successfully resolve much more complex issues than traditional automation can handle, which drives containment higher while keeping costs down.
The technology is maturing rapidly. The best-of-breed solutions have successfully addressed early safety concerns and are continuing to simplify deployment with improved integration options, no-code tooling, and human-in-the-loop workflows that expand the AI agent’s capabilities and ensure human judgment where needed. A growing number of enterprises have moved past proof of concept to launch AI agents within their contact centers. They’re already realizing extraordinary value.
As AI solution providers continue to innovate, the potential use cases for AI agents will multiply and their performance will improve. Over time, AI agents will be capable of handling increasingly complex issues. This innovation is occurring at a blistering pace, so understanding the scope of automation that will be possible in the very near future can serve as a powerful guide for where to invest your AI budget today.
The challenges with deploying AI agents
While the returns on investments in AI agents are far greater than what you’ll gain with augmentation capabilities, it’s important to be realistic about the challenges of implementing them. Autonomous AI agents have far-reaching implications for your internal processes, staffing, and organizational structure. They’re the first step in upending the human-dependent model of customer service.
But that doesn’t mean humans are no longer needed. The most impactful AI agent deployments occur in enterprises that successfully reshape the human-AI relationship into a collaborative model. That requires redefining the role the humans play. Working directly with an AI agent as the human in the loop is a brand-new job function that demands a different skillset and modified workflows. You’ll need to be prepared to adapt quickly to the ripple effects of this shift.

There are challenges with data and technology, as well. Autonomous AI agents need access to the same systems that human agents use to resolve customer issues. That includes your knowledge base, CRM, and other systems of record. And while human agents can often work around inaccuracies or gaps in your knowledge base, an AI agent can easily be limited by them. That elevates the importance of both knowledge base management and the AI solution’s ability to “unblock” itself when it encounters those gaps. For other systems, such as those used to manage customer accounts, the AI agent will need APIs to access the tools and data it needs. That means your team, your technology provider, or an implementation partner will need to create the necessary APIs.
The implications for evaluating AI agent solutions are clear.
The vendor’s ability to simplify the deployment process and provide technical guidance is just as important as their solution’s capabilities. Equally important is how the AI system is designed to handle roadblocks—whether through UI features that allow human agents to step in and assist when needed or mechanisms that enable AI to learn and adapt from human input.
Striking the right strategic balance for your business
The precise balance of AI investments you should be making now depends on the industry you serve, the types of interactions your contact center handles, the expectations of your customers, and your overall vision for CX strategy.
For highly regulated industries, some types of interactions might require a human agent, so you’ll need to restrict your use of AI agents for compliance. But even in such industries as banking and insurance, many interactions can be automated safely with an autonomous AI agent. And with a solution that effectively incorporates a human in the loop for oversight and approvals, you can still gain the benefits of automation with an AI agent.
In general, if your contact center handles a high volume of transactional interactions, you’ll want to lean heavily on automation investments. On the other hand, if relationship-building is a central component of your customer service, you’ll want to be choosier about which types of interactions you fully automate, and emphasize augmentation a bit more.
While there’s no one-size-fits-all decision on how to balance your AI investments, it is time to start shifting some of your dollars toward automation. Agent augmentation should still be in the mix. It provides reliable efficiency gains. But long term, automation clearly offers a much bigger payoff. With the technology rapidly improving and delivering growing returns, waiting too long to explore AI agent solutions could leave your organization playing catch-up.
Starting small, and starting early, will allow you to refine your approach, work through operational challenges, and position yourself ahead of competitors who delay adoption.
8 key questions to ask every generative AI agent solution provider
Get past the vague language
Every vendor who sells a generative AI agent for contact centers makes the same big claims about what you can achieve with their product – smarter automation, increased productivity, and satisfied customers. That language makes all the solutions sound pretty much the same, which makes a fair comparison more difficult than it ought to be.
If you want to get past the vague language, take control of the conversation by asking these key questions. The answers will help you spot the differences between solutions and vendors so you can make the right choice for your business.
1. What exactly does your AI agent do?
Some AI agents simply automate specific processes or serve up information and other guidance to human agents, while others can operate independently to talk to customers, assess their needs and take action to resolve their issues. Ask these questions to distinguish between them.
- Can your genAI agent handle customer interactions from start to finish on its own? Or does it simply automate certain processes?
- How do your agents use generative AI?
- What channels does your AI agent support?
Look for a solution that uses the full range of generative AI’s capabilities to power an AI agent that can work independently to fully automate some interactions across multiple channels, including voice. This type of agent can listen to the customer, understand their intent, and take action to resolve the issue.
2. Is there more to your solution than an LLM + RAG?
Retrieval-augmented generation (RAG) grounds generative AI agents on an authoritative source, such as your knowledge base. That helps the solution produce more accurate and relevant responses. It’s a dramatic improvement that’s invited some to ask whether RAG and a foundation model are all you need. The simple answer is no. Ask these questions to get a fuller picture of what else a vendor has built into their solution.
- Which models (LLMs) does your solution use? And why?
- Besides an LLM and RAG, what other technologies does your solution include? And how is it structured?
- Will I get locked into using a specific LLM forever? Or is your solution flexible enough to allow changes as models evolve?
Look for a solution that uses and orchestrates a wide variety of models, and a vendor that can explain why some models might be preferred for certain tasks and use cases. In addition to the LLM and RAG, the solution should include robust security controls and safety measures to protect against malicious inputs and harmful outputs. The vendor should also offer flexibility in which models are chosen and should allow you to swap models later if another would improve performance.
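To ground the terminology, here’s a miniature RAG loop with a keyword lookup standing in for retrieval and a stub standing in for the LLM call; production systems layer model orchestration and safety checks on top of this basic pattern:

```python
# A minimal retrieval-augmented generation loop (toy example, hypothetical data).
KNOWLEDGE_BASE = {
    "wire transfer limits": "Daily wire transfers are limited to $10,000.",
    "card replacement": "Replacement cards arrive within 5-7 business days.",
}

def retrieve(query: str) -> list:
    """Naive keyword retrieval standing in for a vector search."""
    return [text for topic, text in KNOWLEDGE_BASE.items()
            if any(word in query.lower() for word in topic.split())]

def generate(query: str, passages: list) -> str:
    """Stand-in for an LLM call: grounds the response in retrieved passages."""
    if not passages:
        return "I don't have that information; let me connect you with an agent."
    return passages[0]

question = "What are my wire transfer limits?"
answer = generate(question, retrieve(question))
```

Even in this toy form, the grounding step is visible: when nothing relevant is retrieved, the agent declines rather than inventing an answer, which is exactly the failure mode RAG alone can’t fully guarantee without additional safety layers.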
3. How will your solution protect our data (and our customers’ data)?
Security is always a top concern, and generative AI adds some new risks into the mix, such as prompt injection, which could allow a bad actor to manipulate the AI into leaking sensitive data, granting access to restricted systems, or saying something it shouldn’t. Any AI vendor worth considering should have strong, clear answers to these security questions.
- How do you ensure that the AI agent cannot be exploited by a bad actor to gain unauthorized access to data or systems?
- How do you ensure that the AI agent cannot retrieve data it is not authorized to use?
- How does your solution maintain data privacy during customer interactions?
Look for a solution that can detect when someone is trying to exploit the system by asking it to do something it should not. It should also have strong security boundaries that limit the AI agent’s access to data (yours and your customers’). Security and authentication in the API layer are especially critical for protecting data. And all personally identifiable information (PII) should be redacted before data is stored.
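As a simple illustration of redaction before storage, here’s a pattern-based sketch; the regexes are deliberately simplified, and real systems use far more robust PII detection:

```python
import re

# Illustrative PII redaction before storage. Patterns are simplified examples.
PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def redact(text: str) -> str:
    """Replace each detected PII span with a labeled placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

clean = redact("My SSN is 123-45-6789 and my email is jo@example.com.")
```

The stored transcript keeps the conversational context your team needs for review while the sensitive values themselves never persist.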
4. How do you keep your AI agent from ticking off my customers or damaging my brand?
We’ve all heard stories of bots that spouted offensive language, agreed to sell pricey products for a pittance, or encouraged people to do unsafe things. Solution providers worth considering should have robust safety mechanisms built in to ensure that the AI agent stays on task, produces accurate information, and operates ethically. Get the details on how a vendor approaches AI safety with these questions.
- How do you mitigate and manage hallucinations?
- How do you prevent the AI agent from sharing misinformation with our customers?
- How do you prevent jailbreaking?
Look for a solution that grounds the AI agent on information specific to your business, such as your knowledge base, and includes automated QA mechanisms that evaluate output to catch harmful or inaccurate responses before they are communicated to your customer. The solution should also incorporate a variety of guardrails to protect against people who want to exploit the AI agent (jailbreaking). These measures should include prompt filtering, content filtering, models to detect harmful language, and mechanisms to keep the AI agent within scope.
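Conceptually, those layered guardrails amount to running every draft response through a series of checks before it reaches the customer. Here is a toy sketch of that pipeline, with simple stand-in checks where production systems would use dedicated classifier models.

```python
# Illustrative guardrail pipeline: a draft reply is released only if
# every check passes; otherwise it is held for human review.
# The checks below are simple stand-ins for classifier models.

BLOCKED_TERMS = {"guaranteed returns", "insider"}

def within_scope(reply: str) -> bool:
    # Stand-in for a topic classifier that keeps the agent on task.
    return "legal advice" not in reply.lower()

def no_blocked_terms(reply: str) -> bool:
    # Stand-in for a harmful-content filter.
    return not any(term in reply.lower() for term in BLOCKED_TERMS)

GUARDRAILS = [within_scope, no_blocked_terms]

def release_or_escalate(reply: str) -> str:
    """Send the reply only if every guardrail passes; otherwise escalate."""
    if all(check(reply) for check in GUARDRAILS):
        return reply
    return "[escalated to human review]"
```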
5. How hard will the solution be to use and maintain?
Conditions in a contact center can change quickly. Product updates, new service policies, modified workflows, revised knowledge base content, and even shifts in customer behavior can require your agents to adapt – including your AI agents. Ask these questions to find out how well a solution empowers your team to handle simple tasks on their own, without waiting on technical resources.
- What kinds of changes and updates can our contact center team make to the solution without pulling in developers or other technical resources?
- What will it take to train our supervisors and other CX team members to work with this solution?
Look for a vendor who has invested in user experience research to ensure that their solution’s interfaces and workflows are easy to use. The solution should have an intuitive console that empowers non-technical business users with no-code tools to manage changes and updates on their own.
6. How will we know what the AI is doing – and why?
When a human agent performs exceptionally well – or makes a mistake – you can ask them to explain their reasoning. That’s often the first step in improving performance and ensuring they’re aligned with your business goals. It’s equally important to understand how an AI agent is making decisions. Use these questions to learn how a solution offers insight into the AI’s reasoning and decision-making.
- How will we know what specific tools and data the AI agent is using for each customer interaction?
- In what ways do you surface information about how the AI agent is reasoning and making decisions?
Look for a vendor who provides a high degree of transparency and explainability in their solution. The AI agent should generate an audit trail that lists all systems, data, and other information sources it has accessed with each interaction. In addition, this record should also include an easily understood explanation of the AI agent’s reasoning and decision-making at each step.
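To make the audit-trail idea concrete, here is a hypothetical shape such a record might take: one entry per tool or data access, each paired with a plain-language rationale. The field names and schema are illustrative, not any specific vendor’s format.

```python
import json
from datetime import datetime, timezone

# Hypothetical audit-trail entry: every system the AI agent touches
# during an interaction, plus a readable explanation of why.

def log_step(trail: list, tool: str, rationale: str) -> None:
    """Append one auditable step to the interaction's trail."""
    trail.append({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "tool": tool,
        "rationale": rationale,
    })

trail: list = []
log_step(trail, "account_lookup_api", "Customer asked for their current balance")
log_step(trail, "knowledge_base", "Retrieved overdraft policy for a follow-up question")

record = json.dumps(trail, indent=2)  # exportable for compliance review
```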
7. How does your solution keep a human in the loop?
Solution providers acknowledge the importance of keeping a human in the loop. But that doesn’t mean they all agree on what that human should be doing or how the solution should accommodate and enable human involvement. These questions will help you assess how thoroughly the vendor has planned for a human in the loop, and how well their solution will support a cooperative relationship between the AI and your team.
- What role(s) do the humans in the loop play? Are they involved primarily during deployment and training, or are they also involved during customer interactions?
- When and how does your genAI agent hand off an interaction to a human agent?
- Can the AI agent ask the human agent for the input it needs to resolve the customer’s issue without handing over the interaction to the human?
- What kind of concurrency can we expect with a human in the loop?
Look for a solution with an intuitive interface and workflow that allows your human agents to provide guidance to the AI agent when it gets stuck, make decisions and authorize actions the AI agent is prohibited from taking on its own, and step in to speak with the customer directly as needed. The AI agent should be able to request guidance and then resume handling the interaction. The solution should be flexible enough to easily accommodate your policies for when the AI agent should ask its human coworker for help.
8. Why should we trust your team?
Trust depends on a number of factors, but it starts with expertise. What you really need to know is whether a vendor has the expertise to deliver a reliable solution now – and continue improving it for the future. These questions will help you determine which solution providers are best equipped to keep up with the pace of innovation.
- What components of your solution were developed in-house vs. acquired from third parties?
- What kind of validation can you share from third parties?
- Can you point me to your team’s research publications and patents?
Look for a vendor with a strong track record of in-house development and AI innovation. That experience is a good indicator of the vendor’s likelihood of continuing to expand their products’ capabilities as AI technologies evolve. Patents, published research, and third-party validation from industry experts and top-tier analysts underscore the vendor's expertise.
This list of questions is not exhaustive. There’s a lot more you could – and should – ask. But it’s a good start for rooting out the details you’ll need to make a fair comparison of generative AI agents.
Beyond optimization: 5 steps to AI that solves customer problems
Path toward a reimagined contact center
The state of AI in contact centers is at a critical juncture. Generative and agentic AI have forever altered the CX tech landscape and presented a new set of choices for customer service leaders. After incorporating a bevy of AI solutions to improve efficiency in recent years, they now face a fork in the road. Down one path is the familiar strategy of continuing to optimize existing processes with AI. This path has its charms. It’s well-trod and offers predictable rewards.
The other path is new, only recently created by the rapid evolution of generative and agentic AI. This path enables bold steps to radically transform the way the contact center operates. It might be unfamiliar, but it leads to spectacular benefits. Instead of incremental improvements with basic automation and agent support, it offers a more substantive transformation with generative AI agents that are capable of resolving customer issues independently.
At a recent Customer Contact Week (CCW) event, Chris Arnold, VP of Contact Center Strategy for ASAPP, joined Wes Dudley, VP of Customer Experience for Broad River Retail (Ashley Furniture), to discuss this fork in the road and what it takes to travel the new path created by generative and agentic AI. Their conversation boiled down to several key points that translate into straightforward steps you can take now to start down the path toward a reimagined contact center that delivers much bigger benefits for the business.
You can also listen to the full conversation moderated by CCW's Managing Director of Events, Michael DeJager.
Step #1: Understand your customer journeys and pinpoint what’s not working
Up to this point, the primary goal for AI in the contact center has been to make existing processes faster and more efficient. While efficiency gains provide incremental benefits to the bottom line, they often do little to improve the customer experience. Simply swapping out your current tech for generative AI might buy you yet another small efficiency gain. But it won’t automatically improve the customer’s journey.
A better approach is to incorporate generative and agentic AI solutions where they can make a more significant impact. To do that, you have to pinpoint where the real problems are in your end-to-end customer journeys. That’s why mapping those journeys is a critical first step. As Wes Dudley explained,
One of the first things we did is start customer journey mapping to understand the points in our business of purchase, delivery, repair, contacting customer service. With that journey mapping with all of our leaders, we were able to set the roadmap for AI.
By identifying the most common pain points and understanding where and why customer journeys fail, you can explore how generative and agentic AI might be able to address those problem areas, rather than simply speeding everything up. As a first step, you don’t have to map everything in excruciating detail. You just need to identify specific issues that generative and agentic AI can solve in your customer experience. Those issues are your starting point.
Step #2: Make your data available for AI
There’s a lot of focus on making your data AI-ready, and that’s crucial. But too many customer service leaders interpret that message to mean that their data must be pristine before they can count on generative AI to use it well. There are two problems with that interpretation. First, it creates a roadblock with a standard for data integrity that is both impossibly high and unnecessary. The most advanced AI solutions can still perform well with clean but imperfect data.
The second problem with this narrow focus on data integrity is that it overlooks the question of data availability. An AI agent, for example, must be able to access your data in order to use it. As Chris Arnold noted,
We're finally to a place where if you think about the agents' work and the conversations that they manage, agentic AI can now manage the vast majority of the conversation, and the rest of it is, how can I feed the AI the data it needs to really do everything I'm asking my human agents to do?
Ensuring that your data is structured and complete is only part of the availability equation. You’ll also need to focus on maintaining integrations and creating APIs, which will allow AI solutions to access other systems and data sources within your organization to gather information and complete tasks on behalf of your agents and customers. By all means, clean up your data. At the same time, make sure you have the infrastructure in place to make that data available to your AI solutions.
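One common way to make an internal system available to an AI agent is to describe it as a callable tool with a typed schema. The sketch below loosely follows widespread LLM function-calling conventions; the tool name, fields, and canned response are illustrative assumptions, not a specific product’s API.

```python
# Hypothetical tool definition: exposing an order-management lookup
# to an AI agent. The schema style mirrors common LLM function-calling
# conventions; all names here are illustrative.

ORDER_STATUS_TOOL = {
    "name": "get_order_status",
    "description": "Look up the shipping status of a customer's order.",
    "parameters": {
        "type": "object",
        "properties": {
            "order_id": {"type": "string", "description": "Order identifier"},
        },
        "required": ["order_id"],
    },
}

def get_order_status(order_id: str) -> dict:
    # In production this would call your order-management API through an
    # authenticated integration; here it returns canned data.
    return {"order_id": order_id, "status": "shipped"}
```

The point of the schema is availability: once a system is described this way and wired to a real integration, the AI agent can gather information and complete tasks without a human relaying data by hand.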

Step #3: Align stakeholders and break down silos
AI implementation isn’t just about technology—it’s also about people and processes. It’s essential to align all stakeholders within your organization and break down silos to ensure a unified approach to AI adoption. As Chris Arnold explained, “Historically, we've [customer service] kind of operated in silos. So you have a digital team that was responsible for chat, maybe for the virtual assistant, but you've got a different team that's responsible for voice. And you create this fragmented customer experience. So as you're laying out the customer journey, begin with the customer in mind, and say, what are all the touch points? Include the website. Include the mobile app. Include the IVR. We no longer have to operate in silos. We shouldn't think of voice versus digital. It's just one entry point for the customer.”
If your goal is to continue optimizing existing processes with AI point solutions, then aligning stakeholders across the entire customer journey is less critical. You can gain efficiencies in specific parts of your process for digital interactions without involving your voice agents or the teams that support your website and mobile app. But if your goal is to achieve more transformative results with generative and agentic AI, then a holistic strategy is paramount. You’ll need to bring together all of your stakeholders to identify the key touchpoints across the customer journey and ensure that AI is integrated into the broader business strategy. This collaboration will help ensure that AI is used to complement existing technologies and processes in a way that yields measurable results for both the bottom line and the customer experience.
Step #4: Embrace the human-AI collaboration model
Much of the work that AI currently performs in contact centers is a supporting role. It offers information and recommendations to human agents as they handle customer interactions. That improves efficiency, but it doesn’t scale well to meet fluctuating demand.
One of the most exciting developments in AI for customer service flips the script on this dynamic with AI agents that handle customer interactions independently and get support from humans when they need it. ASAPP’s GenerativeAgent® can resolve a wide range of customer issues independently through chat or voice. It’s also smart enough to know when it needs help and how to ask a human agent for what it needs so it can continue serving the customer instead of handing off the call or chat.
“We are of the mindset that, without exaggeration, generative agents can replace 90% of what humans do – with supervision,” says Arnold. “So maybe you don't want your customers to be able to discontinue service without speaking to a human. GenerativeAgent can facilitate the conversation… but it can come to the human-in-the-loop agent and ask for a review so that the [AI agent] doesn't get stuck like it does today and then automatically escalate to an agent who has to then carry on the full conversation. We can now commingle the [GenerativeAgent] technology, the GenerativeAgent with the human, and you can have just about any level of supervision.”
Right now, we have AI that supports human agents. As we move forward, we’ll also have humans who support AI agents. As the human-AI balance shifts toward a more collaborative relationship, we’ll see radical changes in processes, workflows, and job functions in contact centers. The sooner you embrace this human-AI collaboration model, the better equipped you’ll be for the future.
Step #5: Get started now
The future of customer service won’t just be elevated by AI. It will be completely redefined by it. Contact centers will look – and function – very differently from the way they do now. And this future isn’t far away. We’re already at the fork in the road where you have a clear choice: stick with the familiar strategy of using AI to optimize existing processes, or take steps toward the future that generative and agentic AI have made possible. The path is there. It’s just a matter of getting started. You don’t have to do it all at once. You can go one step at a time, but it’s time to take that first step.
As Chris Arnold said at CCW,
Do it now. Don’t wait. Don’t be intimidated. Start now. Start small because all of us who have worked in the contact center for a long time, we know that small changes can lead to great big results. Just start now.
Is the human in the loop a value driver? Or just a safety net?
The latest crop of AI agents for the contact center can engage in fluid conversation, use reasoning to solve problems, and take action to resolve customers’ issues. When they work in concert with humans, their capabilities are maximized. That makes the human in the loop a critical component of any AI agent solution – one that has the potential to drive significant value.
Most solution providers focus on the human in the loop as both a safety measure and a natural escalation point. When the AI fails and cannot resolve a customer’s issue, it hands the interaction to a human agent.
Many contact center leaders see this approach as appropriately cautious. So, while they steadily expand automated self-service options, they tend to keep human agents front and center as the gold standard for customer service.
But here’s the catch: It also imposes significant limitations on the value AI agents can deliver.
Fortunately, there’s a better approach to keeping a human in the loop that drives the value of an AI agent instead of introducing limitations.
The typical human-in-the-loop roles
You probably won’t find a solution provider who doesn’t acknowledge the importance of having a human in the loop with a generative AI agent. But that doesn’t mean they all agree on exactly what that human should be doing or how the solution should enable human involvement. For some, the human in the loop is little more than a general assurance for CX leaders that their team can provide oversight. Others use the term for solutions in which AI supports human agents but doesn’t ever interact with customers.
Beyond these generalities, most solutions include the human in the loop in one or more of these roles:
- Humans are directly involved in training the AI. They review performance and correct the solution’s output during initial training so it can learn and improve.
- Humans continue to review and correct the AI after deployment to optimize the solution’s performance.
- Humans serve as an escalation point and take over customer interactions when the AI solution reaches the limits of what it can do.
The bottleneck of traditional escalation
Involving members of your team during deployment and initial training is a reliable way to improve an AI agent’s performance. And solutions with intuitive consoles for ongoing oversight enable continued optimization.
But for some vendors, training and optimizing the AI is largely where the humans’ role ends. When it comes to customer interactions, your human agents are simply escalation points for when the AI agent gets stuck. The resulting customer experience is a lot like what happens when a traditional bot fails. The customer is transferred, often into a queue where they wait for the next available agent. The human in the loop is just there to pick up the pieces when the AI fails.
This approach to hard escalations creates the same kind of bottlenecks that occur with traditional bots. It limits containment and continues to fill your agents’ queues with customers who have already been let down by automation that fails to resolve their issue.
The incremental improvements in efficiency fall short of what could be achieved with a different human-AI relationship and an AI agent that can work more independently while maintaining safety and security.
Redefining the role of the human in the loop
The first step to easing the bottlenecks created by hard escalations is to redefine the relationship between humans and AI agents. We need to stop treating the humans in the loop as a catch-all safety net and start treating them as veteran agents who provide guidance to a less experienced coworker. But for that to work, the AI agent must be capable of working independently to resolve customer issues, and it has to be able to ask a human coworker for the help it needs.
With a fully capable autonomous AI agent, you can enable your frontline CX team to work directly with the AI agent much as they would with a new hire. Inexperienced agents typically ask a supervisor or more experienced colleague for help when they get stuck. An AI agent that can do the same thing is a more valuable addition to your customer service team than a solution that’s not much more than a better bot.
This kind of AI agent is able to enlist the help of a human whenever it
- Needs to access a system it cannot access on its own
- Gets stuck trying to resolve a customer’s issue
- Requires a decision or authorization by policy
The AI agent asks the human in the loop for what it needs – guidance, a decision, information it cannot access, or human authorization that’s required by policy. Once the AI agent receives what it needs, it continues handling the customer interaction instead of handing it off. For added safety, the human can always step in to speak with the customer directly as needed. And a customer can also ask to speak to a human instead of the AI agent. In the ideal scenario, you have control to customize the terms under which the AI agent retains the interaction, versus routing the customer to the best agent or queue to meet their needs.
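For the technically curious, the decision logic described above can be reduced to a small sketch: the agent pauses to request human input and then resumes, rather than handing off the whole interaction. The condition names and policy below are illustrative assumptions.

```python
from dataclasses import dataclass

# Toy decision logic for the collaborative model: pause for human input
# and resume, instead of escalating the entire interaction.
# Conditions and action names are illustrative.

@dataclass
class Step:
    needs_restricted_system: bool = False   # data the agent can't access itself
    is_stuck: bool = False                  # agent can't make progress
    needs_authorization: bool = False       # policy requires human sign-off

def next_action(step: Step) -> str:
    """Decide whether to continue alone or request human input."""
    if step.needs_restricted_system or step.is_stuck or step.needs_authorization:
        return "request_human_input_then_resume"
    return "continue_autonomously"
```

A real solution would layer your customizable policies on top of this, including the cases where the customer is routed to a human agent or queue outright.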
Here is what that could look like when a customer calls in.
The expansive value of human-AI collaboration
With this revised relationship between humans and AI agents, the human in the loop amplifies the impact of the AI agent. Instead of creating or reinforcing limitations, your human agents help ensure that you realize greater value from your AI investments with these key benefits:
1. Faster resolution times
When an AI agent can request and get help – and then continue resolving the customer’s issue – customers get faster resolutions without transfers or longer wait times. That improves first-contact resolution (FCR) and gets customers what they need, faster.
2. More efficient use of human agents
In the traditional model, human agents spend a lot of time picking up the pieces when AI agents fail. With a collaborative model, agents can focus on higher-value tasks, such as handling complex or sensitive issues, resolving disputes, or upselling services. They are not bogged down by routine interactions that the AI can manage.
3. Higher customer satisfaction
Customers want quick resolutions without a lot of effort. Automated solutions that cannot resolve their issues leave customers frustrated with transfers, additional time on hold, and possibly having to repeat themselves. An AI agent that can ask a human coworker for help can successfully handle a wider range of customer interactions. And every successful resolution improves customer satisfaction.
4. Scalability without compromising quality
The traditional model of escalating to humans whenever AI fails simply doesn't scale well. By shifting to a model where AI can consult humans and continue working on its own, you ensure that human agents are only involved when they are uniquely suited to add value. This makes it easier to handle higher volumes without sacrificing quality of service.
5. Continuous learning to optimize your AI agent
Interactions between the AI agent and the human in the loop provide insights on the APIs, instructions, and intents that the AI needs to handle similar scenarios on its own in the future. These insights create opportunities to continue fine-tuning the AI agent’s performance over time.
Generating value with the human in the loop
By adopting a more collaborative approach to the human-AI relationship, contact centers can realize greater value with AI agents. This new model allows AI to be more than just another tool. It becomes a coworker that complements your team and expands your capacity to serve customers well.
The key to implementing this approach is finding an AI solution provider that has developed an AI agent that can actively collaborate with its human coworkers. The right solution will prioritize flexibility, transparency, and ease of use, allowing for seamless integration with your existing CX technology. With this type of AI agent, the humans in the loop do more than act as a safety net. They drive value.
Have we missed the point of empathy in CX?
Empathy in customer service doesn’t always look the way we expect. Sometimes it wears a disguise.
A few years ago, I bought my daughter a new mobile phone for Christmas. She planned to make a 3-hour drive, mostly through rural areas, to visit her cousins the next morning. We needed to get the phone working before she pulled out of the driveway.
But nothing we tried did the trick. So, late in the afternoon on Christmas day, we needed customer support.
Our service provider’s website offered two options, phone or chat. I hate chat for support, but the wait time for a phone call was more than I could commit to while getting ready for a family dinner. So, I fired up the chat and asked for help. At first, I wasn’t sure whether it was a bot or a human. I didn’t care either way as long as we got the phone working. Over nearly two hours, I alternated between the chat and my family. And in the end, the problem was fixed.
What does empathy actually look like in customer experience?
I don’t recall the agent (human after all) saying anything particularly compassionate. And yet, this was one of the most empathetic customer service experiences I’ve ever had. Here’s why:
- The interaction resolved my problem on my schedule without requiring a call or visit to the store.
- I had a clear choice between phone and chat and knew the current wait times.
- I got the problem resolved without missing Christmas dinner with my family.
The bottom line is that my service provider gave me options for how to engage and a convenient way to get what I needed.
This is how empathy sometimes wears a disguise. It masquerades as efficiency, convenience, and ease.
In an industry hyper-focused on the emotional side of empathy, we too often overlook this crucial practical side. But we shouldn’t. It matters to customers, a lot.
- 93% of customers expect their issue to be resolved on the first call
- 62% of customers would rather “hand out parking tickets” than wait in an automated phone tree for service or have to repeat themselves multiple times to different team members
- 80% of American consumers say efficiency is the most important factor in the customer experience, and more than half say it’s worth paying more for
The often-overlooked practical side of empathy in CX
In recent years, the CX industry has focused intently on empathy. Businesses spend time and resources to upskill agents on active listening, emotional intelligence, and expressing care and compassion. They even provide lists of empathetic phrases their agents can use. And a growing number of contact centers use AI to detect customer sentiment throughout each interaction. All of that is great. It reminds the agents that customers are human, too, and they need to hear that someone cares about their problem.
Validating a customer’s feelings is an important component of putting empathy into practice. But it’s only one component.
Caring alone doesn’t resolve a customer’s issue, and it doesn’t automatically make the process of reaching a resolution easy or convenient.
Long wait times, multiple interactions, and chatbot failures are not empathetic. Many CX leaders view those points of friction through the lens of contact center efficiency with metrics like transfer rates and digital containment. But friction also increases customer effort, which is an important component of empathy in CX. And too many contact centers deliver experiences that require a lot of customer effort – ineffective self-service, complicated IVR menus, disconnected channels, and more. An agent who says they understand your frustration can’t erase all that effort and wasted time.
Empathy in CX strategy: Are we making it too complicated?
The concept of empathy is somewhat vague and squishy, so it’s not surprising that CX leaders sometimes convert it into something else when crafting CX strategy. The problem is, they often convert empathy into the equally vague concept of customer-centricity. What does that mean? Keeping the customer front and center at all times, sure – but how? It isn’t always clear how centering the customer translates into actions and processes for the contact center to follow.
The vague nature of both empathy and customer-centricity tends to give rise to complex frameworks that attempt to make the strategy more concrete. For example, a framework might categorize elements in the CX ecosystem into systems of listening, understanding, action, and learning. Those frameworks can help shape perspectives within your business, but they still require additional translation to make them actionable for your frontline CX team.
Here’s a simpler approach. Embedding empathy into your CX strategy means consistently aiming to do these four things:
- Resolve the customer’s issue in the first interaction.
- Take up as little of the customer’s time as possible.
- Make the entire process easy and convenient.
- Treat your customers and employees like the human beings they are.
Getting to the point of empathy with generative AI
In contact centers, early AI implementations increased efficiency, but employees felt the impact more than customers. In some cases, AI deployments actually increased frustration by raising customers’ hopes with big promises of faster, more convenient service that didn’t ever materialize. Consider chatbots. Even with improved language processing, bots can’t take action to resolve a customer’s issue. So, they require time and effort from the customer but often, can’t truly help. When it comes to the practical side of empathy, they fail to deliver.
But that was then, and this is now. The technology has matured, and current implementations of generative AI are improving contact centers’ performance on both the emotional and practical sides of empathy. AI solutions increasingly take over repetitive and time-consuming tasks, freeing agents to focus more effectively on the customers they’re serving. This shift makes space to engage with more empathy across the board.
Customer-facing AI agents will generate a larger, even seismic, shift in how empathy is embedded into customer experiences. Generative AI agents can listen, understand, problem-solve, and take action to resolve customers’ issues. That ticks all the empathy boxes for me. This massive leap forward lays the groundwork for CX leaders to shift the emphasis of their AI investments toward solutions that do more than talk in a natural way.
Practical empathy that’s just a chat or call away
That Christmas a few years ago when I needed customer service, I didn’t care whether I chatted with a human or AI. I just wanted my problem resolved before my daughter left town, preferably without having to call or visit the store the next day. I got lucky that time. My service provider had agents available. But we all know that’s not always the case. With a generative AI agent ready to respond 24/7/365, the customer’s luck never runs out. Effective, efficient, and convenient service will always be just a call or chat away. For me, that’s the part of empathy in CX that too many businesses are missing today. But I suspect that’s about to change.
A new era of unprecedented capacity in the contact center
Ever heard the phrase, "Customer service is broken?"
It's melodramatic, right? —something a Southern lawyer might declaim with a raised finger. Regardless, there’s some truth to it, and the reason is a deadly combination of interaction volume and staffing issues. Too many inbound interactions, too few people to handle them. The demands of scale do, in fact, break customer service.
This challenge of scaling up is a natural phenomenon. You find it everywhere, from customer service to pizza parlors.
Too much appetite, too little dough
If you want to scale a pizza, you have to stretch the dough, but you can't stretch it infinitely. There’s a limit. Stretch it too far, and it breaks.
Customer service isn't exactly physical, but physical beings deliver it: the kind who have bad days, sickness, and fatigue. When you stretch physical things too far (like balloons, hamstrings, or contact center agents), they break. In contact centers, broken agents lead to broken customer service.
Contact centers are currently stretched pretty thin. Sixty-three percent of them face staffing shortages. Why are they struggling? Some cite rising churn rates year after year. Others note shrinking agent workforces in North America and Europe. While workers flee agent jobs for coding classes, pastry school, and duck farming, customer request volumes are up. In 2022, McKinsey reported that 61% of customer care leaders had seen growth in total calls.
To put it in pizza terms (because why not?), your agent dough ball is shrinking even as your customers' insatiable pizza appetite expands.
What’s a contact center to do? There are two predominant strategies right now:
- reduce request volumes (shrink the appetite)
- stretch your contact center’s service capacity (expand the dough)
Contact centers seem intent on squeezing more out of their digital self-service capabilities in an attempt to contain interactions and shrink agent queues. At the same time, they’re feverishly investing in technology to expand capacity with performance management, process automation, and real-time agent support.
But even with both of these strategies working at full force, contact centers are struggling to keep up. Interaction volume continues to increase, while agent turnover carries on unabated. Too much appetite. Not enough dough to go around.
How do we make more dough?
Here’s the harsh reality – interaction volume isn’t going to slow down. Customers will always need support and service, and traditional self-service options can’t handle the scope and complexity of their needs. We’ll never reduce the appetite for customer service enough to solve the problem.
We need more dough. And that means we need to understand the recipe for customer service and find a way to scale it. The recipe is no secret. It’s what your best agents do every day:
- Listen to the customer
- Understand their needs
- Propose helpful solutions
- Take action to resolve the issue
The real question is, how do we scale the recipe up when staffing is already a serious challenge?
Scaling up the recipe for customer service
We need to scale up capacity in the contact center without scaling up the workforce. Until recently, that idea was little more than a pipe dream. But the emergence of generative AI agents has created new opportunities to solve the long-running problem of agent attrition and its impact on CX.
Generative AI agents are a perfect match for the task. Like your best human agents, they can and should listen, understand, propose solutions, and take action to resolve customers’ issues. When you combine these foundational capabilities into a generative AI agent to automate customer interactions, you expand your contact center’s capacity – without having to hire additional agents.
Here’s how generative AI tools can and should help you scale up the recipe for customer service:
- Generative AI should listen to the customer
Great customer service starts with listening. Your best agents engage in active listening to ensure that they take in every word the customer is saying. Transcription solutions powered by generative AI should do the same. The most advanced solutions combine speed and exceptional accuracy to capture conversations in the moment, even in challenging acoustic environments.
- Generative AI should understand the customer’s needs
Your best agents figure out what the customer wants by listening and interpreting what the customer says. An insights and summarization solution powered by generative AI can also determine customer intent, needs, and sentiment. The best ones don’t wait until after the conversation to generate the summary and related data. They do it in real time.
- Generative AI should propose helpful solutions
With effective listening and understanding capabilities in place, generative AI can provide real-time contextual guidance for agents. Throughout a customer interaction, agents perform a wide range of tasks – listening to the customer, interpreting their needs, accessing relevant information, problem-solving, and crafting responses that move the interaction toward resolution. It’s a lot to juggle. Generative AI that proposes helpful solutions at the right time can ease both the cognitive load and manual typing burden on agents, allowing them to focus more effectively on the customer.
- Generative AI should take action to resolve customers’ issues
This is where generative AI combines all of its capabilities to improve customer service. It can integrate the ingredients of customer care—listening, understanding, and proposing—to safely and autonomously act on complex customer interactions. More than a conversational bot, it can resolve customers’ issues by proposing and executing the right actions, and autonomously determining which backend systems to use to retrieve information and securely perform issue-resolving tasks.
Service with a stretch: Expanding your ball of dough
Many contact centers are already using generative AI to listen, understand, and propose. But it’s generative AI’s ability to take action, building on those other capabilities, that dramatically stretches contact center capacity (without breaking your agents).
A growing number of brands have already rolled out fully capable generative AI agents that handle Tier 1 customer interactions autonomously from start to finish. That does more than augment your agents’ capabilities or increase efficiency in your workflows. It expands your frontline team without the endless drain of recruiting, onboarding, and training caused by high agent turnover.
A single generative AI agent can handle multiple interactions at the same time. And when paired with a human agent who provides direct oversight, a generative AI agent can achieve one-to-many concurrency even with voice interactions. So when inbound volume spikes, your generative AI agent scales up to handle it.
More dough. More capacity. All without stretching your employees to the breaking point. For contact center leaders, that really might be as good as pizza for life.