Gina Clarkin

Gina Clarkin is a product marketing manager at ASAPP. She works to bring advanced technologies to market that help companies better solve real-world problems. Prior to joining ASAPP, she honed her product marketing craft at tech companies with firmware, wireless, and contact center solutions.
Get critical visibility into GenerativeAgent behavior with Conversation Explorer
AI-driven customer interactions are becoming the new standard in customer service. But how do you really know if your AI agent is doing a good job? Are its responses always accurate? Is it truly understanding customers and resolving their issues? Ensuring consistent quality across all those automated conversations can feel daunting. That's where Conversation Explorer and Conversation Monitoring come in. Together, these tools are designed to give you a clear picture of your AI-powered customer service and the confidence that GenerativeAgent is delivering quality at scale.
The visibility challenge
While many providers offer “life-like” or “concierge” AI agent experiences, ensuring these new autonomous solutions behave as intended poses unique challenges, especially for brands entrusting them with their customer experience. We’ve discussed safety in previous blogs, so let’s focus on the visibility challenge here.
- The "Black Box" Problem: Many AI agents, especially those driven by large language models (LLMs), operate as "black boxes," making it difficult to understand their decision-making processes.
- Complexity of Agent Workflows: Agent workflows often involve multi-step processes such as LLM calls, retrieval-augmented generation (RAG), and calls to external APIs, making end-to-end tracing challenging (see the sketch after this list).
- Monitoring and Observability Challenges: When monitoring tools aren't unified across all components, gaps arise that make issues difficult to diagnose.
- Silent Failures: Without proper monitoring and notification, problems can go undetected and undiagnosed, persisting as "silent failures" until they cause major issues. A failure that goes unnoticed doesn't just create a bad experience for one customer; it will likely cause similar issues for other customers.
- Trust and User Understanding: It's hard for users to verify what AI agents are doing, and even if that information is surfaced, it may be difficult for non-experts to understand the agent's "thought process". This can lead to a lack of trust.
- Auditability Issues: Tracing and auditing agent actions can be difficult, even for autonomous agents that perform complex tasks under human oversight.
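To make the tracing challenge concrete, here's a minimal sketch, in plain Python using only the standard library, of how a shared trace ID can tie an agent's LLM calls, RAG lookups, and external API calls into a single auditable record. Every name below is illustrative; this is not ASAPP's implementation.

```python
# A minimal sketch of unified tracing across a multi-step agent workflow.
# All names are illustrative assumptions, not ASAPP's actual API.
import json
import time
import uuid
from contextlib import contextmanager

trace_log: list[dict] = []  # in practice this would ship to an observability backend

@contextmanager
def span(trace_id: str, step: str, **attrs):
    """Record one step (LLM call, RAG lookup, API call) in a shared trace."""
    start = time.time()
    try:
        yield
        status = "ok"
    except Exception:
        status = "error"
        raise
    finally:
        trace_log.append({
            "trace_id": trace_id,
            "step": step,
            "status": status,
            "duration_ms": round((time.time() - start) * 1000, 1),
            **attrs,
        })

trace_id = str(uuid.uuid4())          # one ID ties every step together
with span(trace_id, "rag_retrieval", query="billing cycle"):
    docs = ["kb_article_42"]          # stand-in for a vector-store lookup
with span(trace_id, "llm_call", model="some-llm"):
    reply = "Your billing cycle starts on the 1st."  # stand-in for an LLM response
with span(trace_id, "external_api", endpoint="/accounts/lookup"):
    account = {"id": "123"}           # stand-in for a downstream API call

print(json.dumps(trace_log, indent=2))  # the audit trail a reviewer would inspect
```

With every step stamped with the same trace ID, a reviewer can reconstruct what the agent did and in what order, which is exactly the visibility these challenges call for.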
Conversation Explorer is built to help you meet this visibility challenge. It isn't just a basic log of interactions; it's a way to truly see how your GenerativeAgent handles conversations, offering a detailed view of each one to help you continuously improve the automated experience for customers. When paired with Conversation Monitoring, it forms a robust system for ensuring quality.
See what's happening: Transparency and context
At its core, Conversation Explorer lets you look at every conversation transcript alongside GenerativeAgent’s thought process. This includes seeing every action the AI takes, which gives you useful context. This transparency is key for identifying potential issues and fine-tuning your GenerativeAgent configuration to improve the customer experience.
With this, you can:
- Better Understand Customers: By seeing real conversations, you can get a better handle on what customers need and prefer.
- Improve Service Quality: The tool helps you pinpoint specific areas where your customer service interactions could be more effective.
- Make Decisions with Data: Having a thorough look at conversations helps you make informed choices based on actual performance.
Understand model actions: AI decision-making
A helpful feature in Conversation Explorer is its ability to show you how your GenerativeAgent makes decisions and uses the tasks, knowledge bases, and APIs you've set up. You can turn on specific model actions from the control panel to follow the AI's reasoning process and see its decision points within the conversation flow.
Need more detail? Just click on an inline model action. This brings up more information, like the full response from a function. This offers deep insight into the specifics of each AI action right within the conversation, helping you better understand how GenerativeAgent processes information and responds.
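As a rough illustration of what an inline model action might carry, here's a hypothetical record with both the collapsed summary shown in the conversation flow and the expanded detail revealed on click. The field names and the get_billing_cycle function are invented for this sketch, not ASAPP's actual schema.

```python
# A hypothetical model action record; every field name is an assumption.
model_action = {
    "action_type": "function_call",
    "name": "get_billing_cycle",          # a hypothetical API the agent invoked
    "reasoning": "Customer asked when their billing cycle starts.",
    "arguments": {"account_id": "123"},
    "response": {                          # the full response revealed on click
        "cycle_start": "2024-05-01",
        "cycle_end": "2024-05-31",
        "status": "active",
    },
}

def summarize(action: dict) -> str:
    """The collapsed, inline view shown within the conversation flow."""
    return f'{action["action_type"]}: {action["name"]}({action["arguments"]})'

def expand(action: dict) -> dict:
    """The detail view: full reasoning, arguments, and function response."""
    return {k: action[k] for k in ("reasoning", "arguments", "response")}

print(summarize(model_action))
print(expand(model_action))
```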
Find issues: confidence in quality with Conversation Monitoring
Conversation Explorer works in concert with Conversation Monitoring to deliver quality assurance at scale. Conversation Monitoring works automatically in real-time behind the scenes, checking 100% of your GenerativeAgent's interactions for suspected inconsistencies. When a high-impact anomaly is detected, it flags the conversation and sends a real-time alert to notify the right team members.
These real-time alerts link directly to the interaction, allowing team members to jump straight into the flagged conversation in Conversation Explorer for quick review to determine if action is required. From Conversation Explorer, you can:
- Quickly Locate Issues: Filter to find conversations that have been flagged for review.
- Understand Why It Was Flagged: The "Quality tab" allows you to review all interactions flagged within a conversation and navigate directly to that specific point in the transcript. This helps you diagnose what happened and why it was flagged by the monitoring system. You can then check if the flagged item was truly an error and understand the reasoning behind the GenerativeAgent's actions. Conversation Monitoring assesses issues like appropriate resource use, information accuracy, understanding customer intent, and potential misrepresentation.
- Reduce Manual Review: Because Conversation Monitoring automatically flags anomalies, assigns an impact score, and provides direct links, it cuts down on the need for extensive manual review (see the sketch after this list).
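Here's a hedged sketch of that flag-and-alert flow. The impact-score scale, alert threshold, and deep-link format are all assumptions made for illustration, not Conversation Monitoring's actual behavior.

```python
# A hedged sketch of flagging and alerting; thresholds, field names, and the
# URL format are invented for illustration only.
from dataclasses import dataclass

@dataclass
class Flag:
    conversation_id: str
    check: str            # e.g. "information_accuracy", "resource_use"
    impact_score: float   # assumed scale: 0.0 (benign) to 1.0 (high impact)
    turn_index: int       # where in the transcript the issue occurred

ALERT_THRESHOLD = 0.8     # assumed: only high-impact anomalies page a human

def review_link(flag: Flag) -> str:
    # a hypothetical deep link into Conversation Explorer's Quality tab
    return f"https://example.com/explorer/{flag.conversation_id}?turn={flag.turn_index}"

def handle(flag: Flag) -> None:
    if flag.impact_score >= ALERT_THRESHOLD:
        print(f"ALERT [{flag.check}] score={flag.impact_score}: {review_link(flag)}")
    # lower-impact flags still land in reporting for trend analysis

handle(Flag("conv-001", "information_accuracy", 0.92, turn_index=7))
```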

This information also flows into GenerativeAgent reporting, so users can see a higher-level view of performance over time while using Conversation Explorer to dig deeper into flagged conversations.
This combined approach allows your teams to investigate flagged interactions, quickly find and address errors, and identify possible gaps in your own content to continuously improve GenerativeAgent's configuration and overall performance. It helps you optimize your GenerativeAgent while building confidence in its capabilities. Conversation Monitoring can even help identify categories of flagged conversations, highlighting areas where you have opportunities to improve.
In Summary
We’ve been intentional in our design for collecting, processing, and sharing information about GenerativeAgent actions. Conversation Explorer, especially when used hand-in-hand with Conversation Monitoring, is a practical tool for anyone who wants to truly understand, adjust, and improve their AI-driven customer interactions. By offering clear views of conversations, insights into how GenerativeAgent thinks, and an efficient way to review flagged conversations, it helps teams maintain quality assurance at scale, keep making the GenerativeAgent customer experience even better, and avoid the visibility challenge.
Will the real human in the loop please stand up?
Even if you weren’t around in the 1950s, chances are you’ve heard some variation of this line from the classic television game show, To Tell the Truth. In the show, celebrity panelists were introduced to three contestants, each claiming to be the same person. The real person had to tell the truth, while the impostors could lie. After a round of questioning, the panelists voted on who they believed was the real deal. The big reveal—often a surprise—delivered the show's signature drama. The format proved compelling enough to last through multiple revivals, staying on air in various forms until 2022.
Today, a new version of this guessing game is playing out in the world of generative AI for customer service. But unlike the lighthearted game show, the stakes here are much higher—especially for companies whose competitive edge depends on delivering outstanding customer experiences.
What is a HILA?
HILA, or Human-in-the-Loop Assistance, is a concept rooted in the AI/ML world that is now critical to generative AI contact center solutions. As providers race to bring AI agents into customer-facing roles, there’s an understandable focus on mitigating risk—ensuring responses are safe, compliant, and on-brand.
There are legitimate challenges to overcome before generative AI can truly revolutionize the contact center, and human-in-the-loop support is a key part of ensuring generative AI agent success in this transformation.
There are multiple interpretations of HILA, but the common thread is clear: AI should assist humans, not the other way around. HILA is about amplifying human decision-making—not requiring it for every single task.
Let’s Meet Our Contestants
HILA #1
This version of HILA prioritizes safety by requiring human agents to approve AI-generated responses before they reach customers. While this approach ensures accuracy and compliance—particularly valuable when monitoring new or sensitive intents—it also introduces friction. The approval workflow adds delays, increases handling time, and reduces the cost-efficiency of automation, even for interactions that the AI could likely manage reliably on its own.
HILA #2
Here, AI handles interactions until it encounters a scenario it can't confidently resolve—like comparing billing cycles or navigating nuanced policy exceptions. At that point, it hands off to a human, typically with a detailed summary to streamline the transition. While helpful in certain situations, this method can degrade the customer experience by introducing a forced transfer and increasing reliance on more expensive human agents.
HILA #3
There’s something different about this HILA: it’s designed to collaborate flexibly with the AI agent, getting involved behind the scenes when necessary, unblocking the AI, and then letting the AI handle the rest of the interaction through to successful resolution.
So, who is the "real" human-in-the-loop?
Okay, so we spoiled the surprise. But imagine flipping the script: AI leads the interaction, and humans support it—stepping in with guidance or approval when necessary, but without taking over the conversation. No hard handoffs. No context lost. Just continuous collaboration behind the scenes.
This is the GenerativeAgent® HILA paradigm: the human-in-the-loop agent. Human agents supporting AI supporting customers.
Let that sink in.
By putting humans in a supporting role—where their judgment elevates the AI rather than replaces it—we unlock scalable, safe automation while maintaining high-quality customer interactions.
How ASAPP’s GenerativeAgent leverages HILA
Here are the approaches we take with GenerativeAgent to put the HILA model into practice—blending human input with AI automation to support better outcomes.
Real-Time Human-AI Collaboration: GenerativeAgent reaches out to human agents in real time when it needs clarity, input, or permission—without handing off the conversation. This preserves continuity and keeps the interaction fully automated from the customer’s point of view.
Agent-Centered Workspace Design: A next-gen interface gives human agents relevant context, smart summaries, and intuitive tools. It captures their tacit knowledge and enables new ways of collaborating with AI—without burdening them with repetitive tasks.
Continuous Optimization: GenerativeAgent captures human decision rationale to learn and improve over time, elevating automation’s potential for your contact center without additional configuration effort.
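To ground the first of these approaches, here's a minimal sketch of the consult-without-handoff pattern: the AI pauses to ask a human for guidance while keeping the customer conversation, so no transfer occurs and no context is lost. The consult_human and handle_turn functions are hypothetical stand-ins, not GenerativeAgent's API.

```python
# A minimal sketch of consult-without-handoff; all names are hypothetical.
import queue

human_queue: "queue.Queue[dict]" = queue.Queue()  # stand-in for the HILA workspace

def consult_human(question: str, context: str) -> str:
    """Ask a human-in-the-loop agent for guidance; the customer conversation
    stays with the AI the whole time (no transfer, no lost context)."""
    human_queue.put({"question": question, "context": context})
    # a real system would await the human's response asynchronously;
    # here we simulate an approval for illustration
    return "approved: offer the standard 10% retention discount"

def handle_turn(customer_message: str) -> str:
    if "discount" in customer_message.lower():
        guidance = consult_human(
            question="Customer requests a discount. Approve?",
            context=customer_message,
        )
        if guidance.startswith("approved"):
            return "Good news! I can apply a 10% discount to your plan."
    return "Happy to help with that."

print(handle_turn("Can I get a discount on my plan?"))
```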
A workspace built for this new paradigm
This new model—humans assisting AI assisting customers—requires a different kind of workspace. Not one built around handling conversations, but one designed to support judgment calls.
ASAPP developed its HILA experience through deep research with contact center agents and direct input from enterprise customers. The result: an interface designed specifically for human-in-the-loop agents, enabling fast, fluid collaboration with AI in a streamlined workflow.

Flexible, role-based human-in-the-loop design
GenerativeAgent lets you define the role of the human in the loop—and decide how they support AI across your workflows. You stay in control of which intents AI handles, when and how it should seek human help, and what happens when something goes wrong.
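As an illustration only, a role-based policy like this might be expressed as configuration. The structure, intent names, and keys below are assumptions made for the sketch, not GenerativeAgent's actual schema.

```python
# A hedged sketch of role-based human-in-the-loop configuration; every key
# and value here is an invented example.
hila_policy = {
    "intents": {
        "billing_question": {"mode": "autonomous"},                # AI handles end to end
        "apply_discount":   {"mode": "consult", "reason": "needs approval"},
        "close_account":    {"mode": "consult", "reason": "sensitive action"},
        "legal_complaint":  {"mode": "transfer"},                  # always goes to a human
    },
    "on_error": "consult",        # e.g. API failures trigger a human consult
    "consult_timeout_s": 120,     # assumed: fall back to transfer if no human responds
}

def route(intent: str) -> str:
    policy = hila_policy["intents"].get(intent, {"mode": "consult"})  # safe default
    return policy["mode"]

assert route("billing_question") == "autonomous"
assert route("close_account") == "consult"
assert route("unknown_intent") == "consult"
```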
Why it matters for your contact center strategy
When GenerativeAgent HILA is built into the core of your AI strategy, it changes what’s possible, giving contact centers practical advantages that make scaling GenerativeAgent and automation safer, smarter, and more manageable. Contact centers using GenerativeAgent HILA can:
- Expand automation safely: Define which intents the AI should handle and exactly what happens when it hits a roadblock.
- Improve AI performance: Leverage human feedback to continuously optimize GenerativeAgent’s responses and decision-making.
- Increase capacity while maintaining quality: Reduce reliance on expensive human agents by letting GenerativeAgent handle complex interactions, with intelligent human support as needed.
- Accelerate adoption and expansion: Bridge gaps in data, policy, and tooling with real-time human support—so you can roll out automation faster and at greater scale.
Common scenarios for human support
- Knowledge or system gaps: When GenerativeAgent lacks access to certain data or APIs, a human agent can fill in the blanks—or directly execute system tasks AI can't access.
- Authorization rules: For sensitive actions—like offering discounts or closing accounts—humans can provide explicit approvals based on company policy.
- Customer request: If a customer asks for a human, you can choose to transfer—or have the AI consult a human to keep the interaction moving, which is especially useful when queues are long.
- System-initiated triggers: When something’s unclear (e.g., API errors, ambiguous data), GenerativeAgent can ask a human for help mid-interaction—resolving the issue without starting over (see the sketch after this list).
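Here's a minimal sketch of that last scenario: an API error mid-interaction triggers a human consult, and the AI resumes without restarting the conversation. The function names and the simulated failure are hypothetical.

```python
# A minimal sketch of a system-initiated trigger; all names are hypothetical.
def fetch_account(account_id: str) -> dict:
    raise TimeoutError("accounts API did not respond")  # simulated system failure

def consult_human(prompt: str) -> dict:
    # stand-in for the HILA workspace: a human looks up the data the AI can't reach
    return {"plan": "premium", "balance_due": 0}

def continue_conversation(account_id: str) -> str:
    try:
        account = fetch_account(account_id)
    except TimeoutError as err:
        # the customer never sees the error; the AI quietly asks a human
        account = consult_human(f"API error ({err}); please look up account {account_id}")
    return f"You're on the {account['plan']} plan with a balance of ${account['balance_due']}. Anything else?"

print(continue_conversation("123"))
```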
This kind of flexible, consultative support model helps organizations adopt automation faster and expand its use across more complex workflows—without compromising on control or quality.
Final thoughts
Generative AI is poised to reshape the contact center—but unlocking its full potential requires a smarter approach to human-in-the-loop collaboration. The HILA model ensures your AI is not only ready for production today, but also able to continuously improve and scale tomorrow.
As the technology evolves, so does the role of the human agent. It’s no longer about choosing between AI or people—it’s about empowering both to do what they do best. With the right human-in-the-loop strategy, companies can find the ideal balance between automation and judgment, leading to more efficient operations and better customer experiences.