Why 80% of AI Projects Fail—and How to Make Yours Work

Scalable Path
Editorial Team
Listen to this episode of Commit & Push
Apple Podcasts | Spotify | YouTube
[Photo of Dan Saffer, captioned "AI is Failing -- And it's Our Fault"]

Most AI projects fail.

That’s not clickbait—it’s a well-documented reality, with failure rates as high as 90% according to research from Carnegie Mellon, Gartner, and the Wall Street Journal.

Dan Saffer, Associate Director of Outreach at Carnegie Mellon’s Human-Computer Interaction Institute, says the reasons are rarely technical. They’re human: bad data, poor UX, vague goals, and inflated expectations.

In this post, we unpack the real reasons AI projects fall apart—and what successful teams do to avoid the same fate.

Why So Many AI Projects Fail (and Keep Failing)

Dan Saffer has spent years translating academic research into real-world frameworks for building better AI. According to the data—and a whole lot of postmortems—these are the five biggest reasons AI projects tank.

1. Garbage In, Garbage Out

AI runs on data. But if your training sets are mislabeled, incomplete, or irrelevant, the model doesn’t stand a chance. Clean, accessible data isn’t optional—it’s table stakes.
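
Much of this is catchable before a single model gets trained. Here's a minimal sketch of the kind of pre-training sanity check worth automating; the file name, column names, and label set are made up for illustration:

```python
import pandas as pd

# Hypothetical training set with a "label" column and a free-text "notes" field.
df = pd.read_csv("training_data.csv")

ALLOWED_LABELS = {"approve", "reject", "escalate"}  # assumed label vocabulary

report = {
    "rows": len(df),
    "duplicate_rows": int(df.duplicated().sum()),
    "missing_labels": int(df["label"].isna().sum()),
    "unknown_labels": int((df["label"].notna() & ~df["label"].isin(ALLOWED_LABELS)).sum()),
    "empty_text": int((df["notes"].fillna("").str.strip() == "").sum()),
}

for check, count in report.items():
    print(f"{check}: {count}")

# Fail the pipeline loudly rather than training on garbage.
if report["missing_labels"] or report["unknown_labels"]:
    raise ValueError("Label problems found -- fix the data before training.")
```

None of this is sophisticated, and that's the point: most "bad data" failures are failures to look.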

2. Models That Can’t Cut It

Even with solid data, your model still needs to perform. In high-risk domains like medicine or autonomous driving, “good enough” isn’t good enough. If the model can’t meet the bar, don’t ship it.

3. Ethical Landmines and Bias Bombs

When systems misidentify people or spit out offensive content, the fallout is immediate and brutal. These failures usually come from biased training data or lazy testing—both avoidable.

4. No User Value, No ROI

Just because you can automate something doesn’t mean you should. Projects often launch with no clear benefit for the user or business. That’s not innovation—it’s waste.

5. Solving the Wrong Problem

The most common—and most fatal—mistake. Teams chase moonshots or hype-driven ideas instead of tackling problems that are feasible, valuable, and grounded in actual user needs.

Why AI UX Is Still a Mess

Slapping a chatbot on your app doesn’t make it smart—it makes it confusing. Yet that’s the default UX pattern for a lot of AI tools today. 

The problem is that users don’t know what the system can do, what it can’t, or how to get what they want. This typically leads to frustration, dead ends, and abandoned features.

Enter Adaptive Interfaces

Instead of bolting AI onto products, teams are quietly weaving it in. Adaptive UIs tailor the experience based on your behavior: surfacing the features you use most, hiding what you ignore, and anticipating your next move.

Examples of this “tailoring” include: 

  • predictive text that actually learns from you, 
  • notifications that show up when they’re helpful (not just timely), and 
  • workflows that adapt to how you use the product. 

It’s subtle, smart, and incredibly powerful. Unused clutter fades away. The product quietly adjusts to your patterns, making the experience faster and more intuitive without forcing you to relearn everything.
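
Under the hood, the simplest version of this is just counting. Here's a rough sketch of the "surface what you use, hide what you don't" idea in Python; every name is illustrative rather than taken from any particular product:

```python
from collections import Counter

class AdaptiveMenu:
    """Orders features by how often a user actually invokes them."""

    def __init__(self, features, visible_count=4):
        self.features = list(features)
        self.visible_count = visible_count
        self.usage = Counter()

    def record_use(self, feature):
        if feature in self.features:
            self.usage[feature] += 1

    def layout(self):
        # Most-used features first; everything else falls into an overflow menu.
        ranked = sorted(self.features, key=lambda f: -self.usage[f])
        return {
            "primary": ranked[: self.visible_count],
            "overflow": ranked[self.visible_count:],
        }

menu = AdaptiveMenu(["share", "export", "comment", "history", "print", "archive"])
for _ in range(4):
    menu.record_use("comment")
menu.record_use("export")
print(menu.layout())
# {'primary': ['comment', 'export', 'share', 'history'], 'overflow': ['print', 'archive']}
```

A real product would add recency decay and avoid reshuffling the layout on every click, but the core mechanism really is that unglamorous.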

The Explainability Illusion

Explainability in AI products and machine learning models gets discussed constantly. But most complex models, LLMs especially, don't work in ways that can be easily broken down or understood.

When we try to explain them, users often over-trust the output—even when it’s wrong. In some cases, offering no explanation may be more responsible than pretending we fully understand what’s happening under the hood.

How to Build an AI Project That Actually Succeeds

If you want your AI project to survive past the prototype stage, you have to avoid the usual traps. 

Solve the Right Problem

Focus on use cases where “good enough” is good enough. Think recommendation engines, not cancer detection. If the stakes are low and the system can tolerate some error, your odds of success go way up.

Embed AI into Real Workflows

AI shouldn’t be a standalone feature. The best implementations are quiet and useful—like autocomplete, personalized sorting, or predictive suggestions. Build utility, not hype.

Prototype Like You Mean It

A flashy demo is worthless if it can’t survive first contact with real data or users. Test early, test often, and design your prototype to fail fast—before it costs you.

Design for Adaptivity, Not Surprise

Let the system learn from behavior, but don’t take control away from users. Adaptive UIs should feel intuitive, not unsettling. Smart defaults, not spooky automation.

Keep Humans in the Loop Where It Counts

Especially in high-risk domains, AI should augment human decision-making, not replace it. Smart systems adapt; great ones still give users a way out.
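
In practice, "a way out" often means a confidence threshold: the model handles the easy calls, and anything it's unsure about goes to a person. A minimal sketch, with the threshold and the model interface assumed rather than taken from any real system:

```python
REVIEW_THRESHOLD = 0.90  # assumed cutoff; tune per domain and level of risk

def classify_with_oversight(item, model, review_queue):
    """Auto-apply confident predictions; route uncertain ones to a human."""
    label, confidence = model.predict(item)  # hypothetical model interface
    if confidence >= REVIEW_THRESHOLD:
        return {"label": label, "source": "model", "confidence": confidence}
    # Not confident enough: keep the human in the loop.
    review_queue.append({"item": item, "suggested": label, "confidence": confidence})
    return {"label": None, "source": "pending_review", "confidence": confidence}

class StubModel:
    """Stand-in for a real classifier."""
    def predict(self, item):
        return ("approve", 0.72)

queue = []
print(classify_with_oversight("claim #4512", StubModel(), queue))
print(queue)  # the uncertain case lands here for a person to decide
```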

Final Thoughts from Dan

If there’s one takeaway from Dan Saffer’s perspective, it’s this: AI success isn’t about bigger models or flashier features—it’s about solving the right problem, with the right data, in the right way. The teams that win aren’t the ones chasing the hype; they’re the ones asking hard questions early, designing for real users, and building systems that work even when things aren’t perfect.

About Dan Saffer

Dan Saffer is the Associate Director of Outreach at Carnegie Mellon’s Human-Computer Interaction Institute and a veteran design leader with over twenty years of experience shaping products at companies including Twitter and Flipboard, as well as various robotics ventures. As the author of four books on UX and product design, including the influential “Microinteractions,” Dan brings deep expertise in user experience design and emerging technologies.

Originally published on Jun 10, 2025. Last updated on Jun 10, 2025.
