Published on September 11, 2025

7 questions on selecting the best AI customer service agent use cases

Theresa Liao
Director of Content and Design
4 minutes

AI customer service agents promise to finally address long-standing contact center challenges by delivering faster resolutions, reducing handle times, and driving greater efficiency.

But Gartner recently estimated that more than 40% of agentic AI projects will be scrapped by the end of 2027. The report emphasizes that the success or failure of these projects hinges not on the technology itself, but on selecting the right use cases that deliver value and ROI. Choosing wisely ensures early wins and helps avoid costly missteps that can derail broader adoption.

“To get real value from agentic AI, organizations must focus on enterprise productivity, rather than just individual task augmentation.” —Anushree Verma, Senior Director Analyst at Gartner.

Here are seven questions to help guide your selection of the best AI customer service agent use cases.

1. Why is it important to start with the right use cases?

Starting with the right AI customer service agent use cases is critical for building trust and momentum. Early wins generate excitement and internal buy-in, proving the value of the technology. If the first attempt fails—even because of poor selection rather than limitations of the AI—it can derail future initiatives.

For smaller organizations especially, demonstrating value early on helps justify the resources needed for more advanced integrations or new APIs. Some companies begin with knowledge base scenarios, which offer relatively quick wins and help establish early confidence in AI customer service agents.

2. What are the most common misconceptions about AI customer service agent use cases?

A frequent misconception is that AI customer service agents can handle “everything under the sun.” In reality, they should be reserved for problems that traditional IVRs or virtual agents cannot address. Applying them to simple, well-covered issues only adds complexity and cost.

Another misconception is that high call or chat volume automatically makes a use case a good candidate. Volume alone doesn’t guarantee success. The better approach is to look for issues customers want to solve independently but currently cannot, where AI customer service agents can unlock real resolution.

3. Why do these misconceptions persist?

Misconceptions often stem from two sources: vendor hype and internal wishful thinking. The rapid pace of AI innovation amplifies these expectations, making it essential to separate hype from reality before selecting use cases.

4. What prerequisites should organizations have in place?

Organizations need a clear understanding of what’s happening in their calls and chats before selecting use cases for AI customer service agents. This includes knowing which issues escalate to live agents, what actions agents typically take, and what questions arise most often during escalations.

This information ensures that use cases are both frequent enough to matter and solvable with existing APIs, knowledge, or newly built tools. Discoverability analyses can support this effort, especially when there’s access to large volumes of conversation data.

It’s also important to involve someone with deep API expertise early on. A common misconception is that you can simply plug APIs and a knowledge base into an AI customer service agent and expect it to perform. In reality, thoughtful preparation and integration work are essential.
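To make that integration work concrete, here is a minimal sketch of what "wiring an API into an AI agent" might involve. Everything in it is illustrative: the tool schema, the `check_order_status` function, and its fields are hypothetical examples, not ASAPP's API or any vendor's actual tool format.

```python
# Hypothetical sketch: exposing one backend action to an AI agent as a well-described tool.
# The schema and function names are assumptions for illustration only.

from dataclasses import dataclass
from typing import Callable


@dataclass
class AgentTool:
    """Describes one backend action the AI agent is allowed to take."""
    name: str
    description: str            # what the model reads to decide when to call the tool
    required_params: list[str]  # inputs the agent must collect before calling
    handler: Callable[..., dict]


def check_order_status(order_id: str) -> dict:
    # In a real deployment this would call your order-management API and handle
    # auth, timeouts, and error codes the agent can explain to the customer.
    if not order_id.isdigit():
        return {"ok": False, "error": "invalid_order_id"}
    return {"ok": True, "status": "shipped", "eta_days": 2}


ORDER_STATUS_TOOL = AgentTool(
    name="check_order_status",
    description="Look up the shipping status of an order by its numeric ID.",
    required_params=["order_id"],
    handler=check_order_status,
)
```

Even this toy example shows why API expertise matters early: someone has to decide what the agent is allowed to call, what inputs it must gather first, and how failures are surfaced back to the conversation.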

5. What distinguishes good from bad use cases?

Good AI customer service agent use cases typically share these characteristics:

  • Traditional virtual agents cannot handle the complexity due to many possible API outcomes.
  • Broad policies with nuanced exceptions require human-like reassurance.
  • Conversations follow non-linear paths, with questions branching in multiple directions.
  • There is a clear path to resolution, minimizing abandonment risk.

By contrast, bad AI customer service agent use cases often involve:

  • A lack of content or policy information to support answers.
  • Problems that fundamentally require human intervention, such as physical device takeover.
  • Flawed underlying processes that AI cannot fix.
  • Customer demands where the only acceptable outcome is a refund, discount, or cancellation, which will likely push the customer to insist on speaking with a human.

6. How should success be measured?

Measuring success begins with safety and accuracy. Early implementations of AI customer service agents should focus on mistake monitoring to ensure the system behaves as intended.

Once safety is confirmed, success should be evaluated by resolution: Are customer issues being fully resolved? If not, is the AI significantly reducing handle time through human-in-the-loop scenarios? Importantly, success should not be confused with deflection. A customer hanging up or escalating to a live agent without resolution does not count as success.

When metrics fall short, the issue may lie in the use case selection itself or may require optimization—such as adding content, refining workflows, or adjusting the level of human involvement.
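For readers who want to see the distinction in practice, here is a small sketch of measuring resolution rather than deflection. The outcome labels and field names are assumptions, not a standard reporting schema.

```python
# Illustrative only: count true resolutions, never treat abandonment as a win,
# and track handle time saved in human-in-the-loop escalations separately.

from collections import Counter

conversations = [
    {"outcome": "resolved_by_ai"},
    {"outcome": "resolved_by_ai"},
    {"outcome": "escalated_resolved", "ai_handle_time_saved_sec": 180},
    {"outcome": "abandoned"},             # customer hung up -- not a success
    {"outcome": "escalated_unresolved"},  # live agent needed, issue still open
]

counts = Counter(c["outcome"] for c in conversations)
total = len(conversations)

# Resolution counts only conversations the AI fully resolved.
resolution_rate = counts["resolved_by_ai"] / total
print(f"Resolution rate: {resolution_rate:.0%} ({counts['resolved_by_ai']}/{total})")

# Human-in-the-loop escalations are measured by the handle time the AI saved.
saved = [c.get("ai_handle_time_saved_sec", 0)
         for c in conversations if c["outcome"] == "escalated_resolved"]
print(f"Handle time saved in HITL escalations: {sum(saved)} seconds")
```

A deflection-style metric would lump the abandoned call in with the successes; separating the outcomes as above keeps the measurement honest.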

7. What are “human-in-the-loop” use cases?

Human-in-the-loop use cases involve collaboration between AI and humans at critical points in the workflow. At ASAPP, our unique Human-in-the-Loop Agent (HILA™) workflow focuses on human agents supporting AI rather than on traditional contact center escalation. Here are two scenarios that HILA use cases can address:

  • System Access Limitations: The AI identifies an issue but does not have permission to resolve it. A human steps in briefly, after which the AI agent continues the interaction.
  • Business Decisions: Even if AI can handle the initial steps, organizations may require a human to make final decisions on sensitive actions. In these cases, the AI agent gathers and processes information, and a human agent makes the final judgment.

This model underscores that AI is not replacing human agents, but working alongside them. It also has important implications for workforce management, requiring thoughtful design of human-AI collaboration.
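The simplified sketch below illustrates the second scenario: the AI gathers the information, a human makes the sensitive call, and the AI resumes the conversation. It is a hypothetical pattern for illustration only, not ASAPP's HILA implementation or API; the function names and refund scenario are assumptions.

```python
# Hypothetical human-in-the-loop flow: the AI pauses at a business decision,
# hands a summary to a human, and resumes once the decision comes back.

def request_human_decision(summary: str) -> bool:
    # Placeholder for routing the summary to a human agent's queue and awaiting
    # their approve/deny decision (in production this would be asynchronous).
    print(f"[to human agent] {summary}")
    return True  # assume approval for this example


def handle_refund_request(customer_id: str, amount: float) -> str:
    # 1. The AI gathers and verifies the information it is permitted to handle.
    summary = f"Customer {customer_id} requests a refund of ${amount:.2f}; policy checks passed."

    # 2. The sensitive business decision is handed to a human at the critical point.
    approved = request_human_decision(summary)

    # 3. The AI resumes and closes out the interaction either way.
    if approved:
        return "Your refund has been approved and will post within 3-5 business days."
    return "A specialist reviewed your request; here are the alternatives we can offer."


print(handle_refund_request("C-1042", 49.99))
```

The design choice to keep the human step narrow and well-defined is what makes this workable for workforce management: human agents review concise summaries instead of taking over entire conversations.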

Conclusion

Selecting the right AI agent use cases for your contact center is an important first step in any deployment. It requires preparation, clear evaluation criteria, and a commitment to separating real opportunities from hype. While it may seem like additional upfront work, the payoff is enormous. AI customer service agents are not “set it and forget it,” but when implemented thoughtfully, they deliver value and ROI that far outweigh the effort.


About the author

Theresa Liao
Director of Content and Design

Theresa Liao leads initiatives to shape content and design at ASAPP. With over 15 years of experience managing digital marketing and design projects, she works closely with cross-functional teams to create content that helps enterprise clients transform their customer experience using generative AI. Theresa is committed to bridging the gap between complex knowledge and accessible digital information, drawing on her experience collaborating with researchers to make technical concepts clear and actionable.