Mackenzie Smith

Mackenzie Smith is the VP & Head of Partnerships and Business Operations, responsible for product onboarding, ongoing services, technical support, technical project management, and partnership relations. Prior to ASAPP, she helped clients build proprietary advertising technology platforms at IPONWEB and led client engagements for digital advertising at Merkle. Throughout her career, she has focused on partnering with clients across industry sectors and using technology and analytics to drive business value. She is a graduate of Harvard University.
The generative AI divide—and how to bridge it
This is part of a blog post series focusing on the AI usage gaps identified in the MIT NANDA report, The GenAI Divide: State of AI in Business 2025.
According to a recent MIT NANDA report, by mid-2025, over 80% of enterprises had explored or piloted some form of generative AI. Global spending on these initiatives reached as high as $40 billion. Despite the strong interest, commitment, and expense, the study revealed an astonishing truth—in the vast majority of cases, enterprises realized little or no financial return. A full 95% of the pilots failed to yield a measurable impact on profit & loss (P&L).
The disconnect between the majority of AI pilots that failed and the meager 5% that succeeded was so stark that the report authors dubbed it the GenAI Divide: a gargantuan chasm between the business transformation that AI pilots targeted and promised, and the results they actually delivered.
The causes of the GenAI Divide are varied across industries and individual companies, but there are some commonalities. You might suspect that a lingering reticence to embrace AI still permeates the culture at a fair number of the companies whose AI pilots failed. But the MIT report found something much more interesting.
AI everywhere, but no measurable impact
The report noted a surprising dichotomy in AI usage. At more than 90% of the companies surveyed, employees are already using generative AI to automate parts of their jobs. The catch is that the majority are using consumer tools with personal accounts rather than IT-blessed enterprise solutions. Only 40% of companies said they’d purchased an official LLM subscription.
Clearly, many employees embrace the time savings and increased productivity that AI tools offer—enough so that they find ways to incorporate consumer AI tools into their workflows on a regular basis. Companies are awash in AI, even if they haven’t purchased an enterprise tool.
Uncoordinated, individualized AI usage rarely translates into a clear, measurable impact on a company’s bottom line. Employees squeeze out a bit more efficiency, but the impact remains at the individual employee level.
Why too many AI pilots fail to deliver returns
Most official AI pilots also fail to deliver returns on the investment. Sixty percent of the organizations surveyed had evaluated task-specific generative AI solutions, but only 5% made it past the pilot phase. The rest saw no measurable value.
For a technology with so much power, that’s a disappointing, even baffling result. Why are so many AI pilots failing?
Vendors who overpromise and underdeliver, enterprises tied to brittle processes and workflows, and misguided AI investment priorities all play a role. But according to the report, the single biggest reason for the failures was clear—the AI systems in these pilots don’t learn or adapt based on changing context or feedback, and they don’t improve over time.
How common hurdles translate into failed AI pilots
The lack of adaptability in many generative AI solutions is multifaceted. It’s not just that they don’t learn. Far too often, the technology cannot be effectively integrated, customized, or scaled.
Misaligned expectations
The hype around generative AI has been off the charts, and access to generative AI applications has been democratized. Spinning up a new agent is surprisingly simple, and rapid prototypes perform well on the happy path. Combine fast prototyping with the fact that employees are already using similar systems individually, and you get unrealistic expectations about both what a generative AI agent can do and how easy it will be to build and maintain.
Bottom Line: There’s a fundamental difference between speed to prototype and speed to production and ROI.
Integration challenges
To take action, generative AI solutions must have access to other systems in your organization. In the contact center, that means they need access to your CX infrastructure, CRM or other system of record for customer data, your knowledge base, back-office workflows, and other internal systems. And chances are, your APIs weren’t created with an AI agent in mind. It all adds up to more complexity and a greater need for development resources than a limited prototype could ever uncover.
Bottom Line: There is no chance to scale impact without thoughtfully designed integration protocols and AI that can traverse your existing systems.
Inadequate customization
Generic AI won’t be able to navigate complex workflows to solve specific problems for your business. For enterprise complexity, you’ll need a more advanced solution with a sophisticated memory architecture, the ability to learn from feedback and context, and extensive tools to observe and optimize the AI.
Bottom Line: No two enterprises are alike; your AI solution must accommodate that truth.
Lack of scalability
Scaling up often reveals variables that weren’t considered in the pilot phase. With an autonomous AI agent for customer service, that might mean variations in the way customers communicate, details in customer issues that require deviations from the typical workflow, or the need to navigate different policies based on different product lines, customer loyalty status, or geographic location.
Bottom Line: Embrace iteration and optimization methodologies; this is not a one-and-done.
How the 5% that succeed are different
Of course, as we know at ASAPP, AI pilots can succeed. We’ve seen firsthand what works—and what sets that savvy 5% on the path from pilot to production and expansion.
Here’s what enterprises with successful AI deployments do right.
Choose the right partners
The MIT report found that two-thirds of the successful AI pilots occurred in organizations that worked with a solution provider, and in some cases, an implementation partner, rather than building their own solution. Building a prototype is easy. Building a production-ready solution requires much more expertise. The right partners dramatically increase your chances of realizing a meaningful return on your investment.
Aim for measurable business outcomes
The momentum behind AI is pushing some business leaders to adopt AI solutions, even without a clear strategy for how they will address specific business challenges. General goals won’t yield meaningful results. Enterprises that see a real P&L impact start with specific targets and measurable goals far beyond typical AI benchmarks. They aim for concrete business results that can be measured, optimized for, and tied to value.
Define the right use cases
So much depends on how you define your use cases. Define them too broadly, and the AI will fail too often. Define them too narrowly, and it won't make much impact. And without a strong, consultative approach to both the business need and the technology, you'll miss the mark. With an AI agent for customer service, this means choosing use cases that are not already well handled by existing automation, that the AI can resolve on its own or with minimal human support, and that will deliver substantial value.
Monitor, measure, and optimize
The work of getting value from an AI solution deployment is ongoing. This is not a set-it-and-forget-it endeavor. Maximizing the returns with your AI solution requires consistent monitoring to measure impact, diagnose issues, and optimize as conditions change. To do that, you’ll need visibility into what the AI is doing and why, along with a suite of tools to modify and optimize its performance.
Bridging the GenAI divide
The headlines generated by the MIT report have been alarming. But underneath the reported 95% failure rate for AI pilots lies the real story, and it's a hopeful one. The enterprises recording measurable P&L impact with AI aren't flukes. They're approaching their AI deployments differently, proving that with the right strategy, AI delivers significant benefits.
The success stories create a clear roadmap for others to follow. And they demonstrate the benefits of successful AI deployments. The 5% who’ve succeeded have already gained a competitive advantage. The only question is whether you’re ready to follow their lead.