Automating the Garbage Can

Despite $30 billion to $40 billion in AI investment, 95% of organizations achieve zero return, MIT study finds.


MIT's NANDA Project—established to help drive AI integration in enterprise settings—recently released its mid-year report. The key finding is stark: Despite $30-$40 billion in enterprise investment into generative AI, 95% of organizations are getting zero return.

From the report: "The core barrier to scaling is not infrastructure, regulation, or talent. It is learning. Most GenAI systems do not retain feedback, adapt to context, or improve over time."

This admission departs sharply from the GenAI industry's long-held narrative that scale—more infrastructure, more training data—is the key to success. On that premise, Big Tech has funneled over $500 billion into new AI datacenters over the past two years, betting that technical expansion alone would lead to better outcomes.

Blaming the technology and the technology alone for the 95% failure rate would be a mistake. Organizational realities must also be considered.

The Garbage Can theory—a seminal framework introduced by Michael D. Cohen, James G. March, and Johan P. Olsen in the early '70s—sees organizational decision-making as a random, chaotic process where problems, solutions, and decision-makers mix like garbage in a can. Decisions are often made not through linear analysis, but when a pre-existing solution (a technology, a pet project) goes looking for a problem to solve, and they connect at the right moment.
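Cohen, March, and Olsen's original 1972 paper in fact formalized this as a computer simulation. A minimal Python sketch of the core idea — problems, solutions, and participants arriving independently, with a "decision" occurring only when all three happen to coincide — might look like this (all names and parameters are illustrative, not taken from the paper):

```python
import random

def simulate_garbage_can(steps=1000, arrival_p=0.3, seed=42):
    """Toy garbage-can dynamic: decisions happen by coincidence, not analysis.

    Each time step, a problem, a solution, and a participant each arrive
    independently with probability `arrival_p`. A decision is recorded only
    when at least one of each is present at the same choice opportunity.
    """
    rng = random.Random(seed)
    problems, solutions, participants = [], [], []
    decisions = 0
    for t in range(steps):
        # Independent random arrivals into the "can"
        if rng.random() < arrival_p:
            problems.append(t)
        if rng.random() < arrival_p:
            solutions.append(t)
        if rng.random() < arrival_p:
            participants.append(t)
        # A choice opportunity: a decision requires a problem, a solution,
        # and a decision-maker all being in the can at the same moment.
        if problems and solutions and participants:
            problems.pop(0)
            solutions.pop(0)
            participants.pop(0)
            decisions += 1
    return decisions

print(simulate_garbage_can())
```

The point of the sketch is that the decision count is driven entirely by chance co-arrival, not by any matching of solution quality to problem severity — which is exactly the dynamic the theory describes.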

In "organized anarchies"—and insurance enterprises are a prime example—decisions surface more from political realities, business urgencies, happenstance, and fragmented routines than from structured analysis.

MIT NANDA's findings reveal that AI pilots frequently reflect this "garbage can" environment. Rather than deploying disciplined, contextualized programs, organizations launch generic AI tools with unclear goals, disconnected stakeholders, and insufficient governance. High failure rates stem from this context vacuum: Solutions chase problems but lack clarity on objectives or pathways for integration.

Where measurable success emerges, automation is tightly linked to specific workflow tasks—especially in finance, HR, and operations. In these areas, context and routine enable AI to deliver quantifiable savings and efficiencies, making back-office automation a financial standout.

In contrast, customer-facing applications often attract investment due to hype but rarely deliver robust returns. These projects suffer most from the garbage can effect: fragmented pilot teams, fluctuating requirements, and poorly defined goals.

The lesson is not that AI lacks potential but that organizational learning and context are prerequisites for meaningful automation. The prevailing narrative in AI casts it as a source of algorithmic precision, promising to banish organizational mess. But the garbage can will abide. The deeper challenge of AI adoption is organizational, not technological.

Deployed naively, AI becomes just another item in the garbage can—an expensive tool in search of an application, championed by some departments and ignored by others. The outcome: fragmented initiatives and wasted investment.

The best results come when humans and AI collaborate, with humans providing context and ethical nuance, and AI bringing scale and pattern recognition. Ultimately, the strategic imperative is not simply to "implement AI" but to orchestrate the confluence of problems, solutions, and decision-makers. Consider these three recommendations:

  • Ask: "What does it improve, and by how much?" Focus on business outcomes before technology: pick a metric and a target result first.
  • Frame problems, not just solutions. Rather than asking "What can AI do?" define critical business problems, then determine how human-AI collaboration can address them.
  • Create deliberate choice opportunities. Design forums—cross-functional teams, innovation labs, strategy sessions—where problems and solutions connect intentionally, reducing randomness and supporting strategic adoption.

Human catalysts—those with fusion skill sets—are the drivers. Investments in training and culture change should always exceed spending on the technology itself.


Tom Bobrowski

Tom Bobrowski is a management consultant and writer focused on operational and marketing excellence. 

He has served as senior partner, insurance, at Skan.AI; automation advisory leader at Coforge; and head of North America for the Digital Insurer.   
