Another week, another study telling us what we already suspected. MIT’s NANDA initiative just dropped their bombshell: The GenAI Divide: State of AI in Business 2025 shows that 95% of enterprise GenAI pilots are delivering zero financial impact. That’s not a typo. After all the hype, all the investment, all the promises about transforming business, nineteen out of twenty initiatives are complete washouts.
The market freaked out, predictably. Tech stocks tanked. Investors started asking hard questions about whether we’re in another bubble. The reality is that this isn’t a story about AI being broken. This is organizational failure disguised as technology failure.
I’ve spent years helping teams build adaptive systems that actually work. The patterns MIT uncovered aren’t surprising if you understand how most companies approach technology adoption. They’re making the same mistakes they’ve always made with new technology, just with shinier tools and bigger budgets.
What the numbers actually tell us
MIT analyzed 300 AI deployments across 150 companies and surveyed 350 employees. The data is stark but not shocking. Only 5% achieved rapid revenue acceleration. The rest? Complete stalls with no measurable P&L impact.
The researchers found a massive “learning gap” between what these tools can do and how organizations use them. Executives blame regulation or model quality, but that’s not the problem. The problem is treating AI like software when it behaves more like a capability that needs to be developed.
Companies are throwing money at the wrong things too. More than half of GenAI budgets go to sales and marketing, but the actual ROI shows up in back-office automation. They’re optimizing for visibility instead of value.
The success rates tell the real story. Companies buying specialized tools from vendors succeed 67% of the time. Internal builds? Only 33%. That’s a massive gap that says everything about organizational capability versus vendor expertise.
GenAI is exposing all our neglected fundamentals
What’s fascinating about GenAI failures is how they mirror problems we’ve seen everywhere else. At the engineering level, teams are running into “vibe coding” disasters because they’re skipping the basics we’ve known for decades: small increments, test-driven development, fast feedback loops, clear requirements.
I see engineers trying to generate massive code blocks with AI, then spending days debugging because they didn’t start small. They’re treating AI output like gospel instead of writing tests first to clarify what they actually want. They’re bypassing code review and continuous integration because “the AI wrote it.” The same practices that make human coding sustainable make AI-assisted coding work.
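Concretely, the test-first discipline looks the same with AI in the loop. Here’s a minimal sketch, using a hypothetical `slugify` helper as the kind of small unit you’d ask an assistant to generate; the tests come first and define what “correct” means before any code is generated:

```python
def slugify(title: str) -> str:
    """Candidate implementation: the small, reviewable unit you'd
    ask an AI assistant to generate (not a massive code block)."""
    cleaned = "".join(ch if ch.isalnum() or ch == " " else "" for ch in title.lower())
    return "-".join(cleaned.split())

# Tests written FIRST. They pin down the requirement, so AI output
# isn't treated as gospel: it either passes or it goes back.
assert slugify("Hello, World!") == "hello-world"
assert slugify("  Spaces   everywhere ") == "spaces-everywhere"
assert slugify("Already-clean") == "alreadyclean"  # punctuation stripped by design
```

The point isn’t the function; it’s the loop. A failing test gives you a fast, unambiguous signal, which is exactly the feedback AI-generated code otherwise lacks.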
At the organizational level, the patterns are identical. Companies are trying to implement enterprise-wide AI transformations without the foundational capabilities that make any change management work: leadership that provides context instead of control, autonomous teams that can adapt quickly, empowered frontline workers who understand the problems, fast feedback loops that enable learning.
GenAI isn’t creating new problems. It’s amplifying existing organizational dysfunction and making it impossible to ignore.
Why enterprises are getting this completely wrong
The MIT study reveals the usual suspects behind failed technology adoption, just magnified by AI’s complexity.
They’re starting in the wrong places. Most pilots target flashy, customer-facing functions because executives can see them. But AI delivers the biggest wins in messy back-office processes where humans spend time on repetitive work. Companies chase the demo-able stuff instead of the impactful stuff.
They’re not bringing people along. You can’t just drop AI tools into existing workflows and expect transformation. People need to understand how to work with these systems, when to trust them, when to override them. The study found massive “shadow AI” usage where employees use ChatGPT because the official tools don’t work. That tells you everything about change management.
They’re building when they should be buying. The 67% versus 33% success rate gap is brutal. Companies with no AI expertise are trying to build specialized systems from scratch. It’s like deciding to manufacture your own laptops instead of buying Dell. The ego economics don’t work.
They’re treating pilots like proof-of-concepts instead of learning systems. Most pilots are static demonstrations designed to prove a point, not dynamic experiments designed to learn what works. Real pilots should evolve based on user feedback and business outcomes. Instead, companies launch something, measure it once, then declare success or failure.
This is the organizational equivalent of deploying code without testing it. You get one shot to see if it works, no feedback loops, no iteration cycles, no ability to adapt based on what you discover.
They lack the organizational structure to support adaptive technology. AI systems need continuous feedback loops, rapid iteration cycles, and cross-functional collaboration. Most enterprises are designed for predictable, hierarchical workflows. When you drop adaptive technology into rigid systems, the rigid systems win.
This isn’t about technology sophistication. It’s about organizational design. The companies succeeding with AI have already figured out how to build learning systems, empower frontline workers, and adapt quickly to changing conditions.
What the successful 5% do differently
The companies making AI work aren’t magic. They approach technology adoption with completely different principles, because they’ve already built the adaptive capabilities that make adopting any new technology work.
They start with real business problems, not technology solutions. Successful pilots begin with specific pain points: “Customer service is drowning in repetitive tickets” or “Our procurement process burns six hours per purchase order.” They use AI to solve those problems, not to showcase AI capabilities.
At Split, when we started building our Agent Switch, we took exactly this approach. We didn’t set out to add AI to our product because it was trendy. We had a real problem: customers struggled to understand how to use feature flags effectively and needed help interpreting their experiment data.
We started small with a documentation assistant, A/B tested different approaches, and piloted with customers who had this pain point. The key was treating it like any product development cycle - the same practices that make good software development work. We validated the problem, learned from user behavior, and iterated based on feedback.
The AI initially just answered questions about how to do things. Then it learned to analyze metrics and help customers make sense of their data. Eventually it grew into an assistant that could take actions on behalf of users. Each step was driven by customer value, not technical capability. Small increments, fast feedback, continuous integration of learnings.
They build learning systems, not static tools. The winners create feedback loops where AI systems improve based on actual usage. Their tools remember context, adapt to user preferences, and get better over time. Generic ChatGPT implementations can’t do this. Purpose-built systems can.
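What “improve based on actual usage” can mean in practice is simpler than it sounds. A minimal sketch, assuming a hypothetical setup where users give thumbs-up/down feedback and the system favors whichever prompt variant earns the best ratings (an epsilon-greedy loop, not any particular vendor’s mechanism):

```python
import random
from collections import defaultdict

class PromptSelector:
    """Tiny feedback loop: track thumbs-up rates per prompt variant
    and mostly serve the best-rated one, occasionally exploring."""

    def __init__(self, variants, epsilon=0.1):
        self.variants = list(variants)
        self.epsilon = epsilon
        self.stats = defaultdict(lambda: [0, 0])  # variant -> [ups, total]

    def choose(self):
        if random.random() < self.epsilon:
            return random.choice(self.variants)  # explore
        return max(self.variants, key=self._rate)  # exploit best so far

    def record(self, variant, thumbs_up: bool):
        ups, total = self.stats[variant]
        self.stats[variant] = [ups + int(thumbs_up), total + 1]

    def _rate(self, variant):
        ups, total = self.stats[variant]
        return ups / total if total else 0.5  # optimistic prior for unseen variants
```

A static tool ships once and stays put. This loop is the difference: every interaction feeds the next decision, which is why purpose-built systems get better over time and generic deployments don’t.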
At Tinybird, we took a different but equally effective approach. Instead of starting with customer-facing features, we got everyone in the company using GenAI tools first. Engineering, Product, Design, Sales, Marketing, Analytics, Customer Success, Support - everyone learned how these systems actually work and where they add value versus where they don’t.
We created weekly show-and-tells and an internal channel for sharing discoveries and best practices. This accelerated our learning loops dramatically - the same rapid feedback cycles that make agile development work. Teams started identifying pilots across different functional areas, each reporting varying levels of success in reducing toil and operational overhead.
More importantly, our product and engineering teams built the expertise necessary to make Tinybird agent-first, with GenAI flows built into the core product experience. We didn’t just bolt AI onto existing features. We rethought the user experience from the ground up, which is what you do when you have the organizational capability to adapt quickly.
They empower the people closest to the work. Instead of central AI labs, successful companies let line managers and frontline workers drive adoption. These people understand the problems intimately and can spot when solutions actually work. The MIT study specifically calls this out as a critical success factor.
This is basic adaptive organization design. Context flows down, decisions happen at the edge, and the people doing the work have the authority to change how the work gets done.
They treat AI adoption like product development. Successful pilots use rapid iteration cycles, A/B testing, user feedback, and continuous improvement. They ship early versions, learn from usage patterns, and evolve based on what they discover. This is basic product management applied to internal tools.
They partner with experts instead of building from scratch. The 67% success rate for vendor partnerships isn’t accidental. Specialized companies have solved integration challenges, built learning capabilities, and refined their tools across multiple deployments. Internal teams are learning on your dime.
The pattern is clear: companies succeeding with AI have already built adaptive organizational capabilities. They know how to experiment, learn, and scale new approaches. AI becomes another tool in their adaptation toolkit, not a magic solution that bypasses good organizational practices.
The fundamentals we can’t skip
GenAI failures are teaching us something important: you can’t skip the fundamentals and expect new technology to save you. Whether you’re a developer trying to use AI coding assistants or an enterprise trying to transform customer service, the same principles apply.
At the engineering level: Small increments beat big-bang deployments. Test-driven development clarifies requirements before you write (or generate) code. Fast feedback loops catch problems early. Code review works whether humans or AI wrote the code. Continuous integration prevents integration disasters. These practices aren’t optional just because AI is involved.
At the organizational level: Leadership through context, not control, enables teams to adapt quickly. Autonomous teams can respond to what they learn without waiting for permission. Empowered frontline workers understand problems better than distant executives. Fast feedback loops enable rapid iteration. Starting small and focused builds capability before scaling.
The organizations succeeding with AI already do these things well. They don’t need to learn new practices for AI - they apply existing adaptive practices to new technology.
How to join the successful 5%
If you want to avoid the 95% failure rate, start with organizational design, not technology selection.
Build your adaptive muscle first. Before launching AI pilots, make sure your organization can actually learn from experiments. Set up rapid feedback loops, empower frontline workers to make decisions, and create systems for capturing and sharing what you discover.
Start small and focused. Pick one specific process that wastes human time and intelligence. Don’t try to transform everything at once. Build one successful use case, learn from it, then expand based on what works. This is the organizational equivalent of starting with a failing test and making it pass.
Focus on back-office wins before customer-facing features. The MIT data is clear: ROI shows up in operations, procurement, and administrative processes. These areas have measurable efficiency gains and fewer integration complexities than customer-facing systems.
Buy expertise, don’t build it. Unless AI is your core business, partner with vendors who specialize in your use case. The 2X success rate difference is worth the ego hit of not building everything internally.
Treat pilots like product launches. Use proper product management disciplines: clear success metrics, user feedback loops, iterative improvement, and willingness to pivot based on what you learn. Most pilots fail because they’re structured like demonstrations, not experiments.
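“Clear success metrics” means the decision rule is written down before the pilot starts. A sketch of what that can look like, with hypothetical numbers and a made-up 25% threshold, measuring ticket-handling time before and after:

```python
def pilot_verdict(baseline_minutes, pilot_minutes, target_reduction=0.25):
    """Compare average handling time against a success threshold
    agreed BEFORE launch (25% reduction here, as an example)."""
    baseline = sum(baseline_minutes) / len(baseline_minutes)
    pilot = sum(pilot_minutes) / len(pilot_minutes)
    reduction = (baseline - pilot) / baseline
    if reduction >= target_reduction:
        return "scale"    # clear win: expand the pilot
    if reduction > 0:
        return "iterate"  # some gain: improve and re-measure
    return "pivot"        # no gain: rethink the approach

print(pilot_verdict([30, 28, 32], [20, 22, 21]))  # → scale
```

The three outcomes matter more than the arithmetic: a pilot structured as an experiment always ends in a decision (scale, iterate, or pivot), never in a one-time demo that gets declared a success or failure and forgotten.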
Empower the right people to drive adoption. Skip the central AI committee. Give budget and authority to the managers and workers who actually understand the problems you’re trying to solve. They’ll build better solutions and drive faster adoption. This means building shared consciousness across teams and creating transparency that enables distributed decision-making.
The companies making AI work aren’t doing anything revolutionary. They’re applying sound organizational principles to new technology. They start with problems, not solutions. They learn from users, not executives. They adapt based on results, not plans.
The real opportunity
The 95% failure rate isn’t a condemnation of AI. It’s a massive competitive opportunity for organizations that can adopt new technology effectively.
While your competitors burn through AI budgets on flashy demos that deliver no value, you can build actual capabilities that create competitive advantages. While they’re stuck in pilot paralysis, you can be shipping systems that make your teams more effective every week.
The technology works. The successful 5% prove that. The question is whether you’ll build the organizational capabilities needed to join them, or keep doing AI adoption the way everyone else is failing.
GenAI isn’t just another technology wave. It’s a forcing function that exposes whether your organization can actually adapt to change. The companies that succeed will be those that use AI adoption as an excuse to finally build the adaptive capabilities they should have had all along.
Start with one small problem. Build a learning system around solving it. Empower the people closest to the work. Partner with experts who’ve solved this before. Measure what matters. Iterate based on what you discover.
The 95% aren’t failing because AI doesn’t work. They’re failing because they’re not set up to make any new technology work. Fix that, and AI becomes just another tool for building adaptive advantage.