
What Leaders Get Wrong About AI [Newsletter #99]

The gap between hype and reality

Hello, AI enthusiasts from around the world.

Welcome to this week's newsletter for the AI and the Future of Work podcast.

One study sent shockwaves through the industry, claiming that nearly 95% of AI enterprise implementations fail. 

However, it’s worth reading between the lines and asking the questions every AI leader should face. What is driving so many implementations to fail? Is it AI, or is it something else? 

This week’s conversation tackles that question and points to what truly makes AI implementations work. In the end, the issue is not the technology itself, but what companies and leaders have done in the past and how they choose to adapt moving forward. 

Let's dive into this week's highlights! 🚀

🎙️ New Episode With Matt Fitzpatrick, Invisible Technologies CEO

95% of AI enterprise implementations fail.

But what’s behind that number? 

That’s the claim of one controversial study, and the results shocked the industry. Most AI initiatives are hitting dead ends, yet 5% of them are delivering results.

Those successes are not random. They are structured. And no, this is not about clean desks, even if that helps.

The real issue is messy data. You cannot build an AI implementation that adds value if the data behind it is disorganized and unreliable.

That’s why Matt Fitzpatrick emphasizes one core idea: a successful company today is built on clean data. When he wants to understand what works and what doesn’t, he speaks directly with senior executives instead of relying solely on study statistics, an approach he has followed for decades.

Matt is the CEO of Invisible Technologies, ranked the #2 fastest-growing AI company in 2024. The Invisible platform has supported models for over 80% of the world’s top AI companies, including Microsoft, AWS, and Cohere. 

He joined PeopleReign CEO Dan Turchin to discuss a key challenge. Large companies built their data systems on outdated and fragmented software. Adding AI on top of that foundation does not solve the problem. 

The shift to Gen AI raises the bar. Accuracy and correctness are now essential. Reaching that level is harder than many expect, especially for large and traditional organizations.

In this conversation, we discuss:

  • The true reason enterprise AI adoption lags behind model performance improvements, leaving organizations unable to turn technical progress into real business impact.

  • How decades of accumulated, outdated systems make reliable AI deployment at scale nearly impossible, and why data now plays a more critical role than ever.

  • Why defining “good” output in generative AI is harder than expected, and how unclear standards slow deployment across high-stakes enterprise workflows.

  • The case for redesigning workflows from scratch, and why layering AI on top of existing processes fails to deliver meaningful efficiency gains.

  • Why most AI initiatives fail due to a lack of business ownership, and how separating technology teams from operators blocks projects from reaching production.

  • How fear-driven narratives around job loss slow adoption, and why AI is more likely to shift work toward higher-value tasks than eliminate roles entirely.

Listen to the full episode to learn why Dan and Matt encourage people to look beyond the statistics any study presents, and instead focus on how successful companies implement AI. Versatility, flexibility, and an open mindset stand out as key drivers of success. 

📖 AI Fun Fact Article

A quiet change appeared in Anthropic’s terms of service. Every conversation you’ve had with its chatbot Claude is now used to train its LLM by default, unless you choose to opt out, as Nikki Goth Itoi explains on the Stanford Human-Centered AI Institute blog.

Anthropic is not alone. Six other leading US AI companies also feed user input into their models to improve capabilities, all in pursuit of market share. Some offer an opt-out option, but others do not. 

This means most AI users will have their data collected for training in some form. That includes sensitive information, whether through ChatGPT, Gemini, or any other frontier model. Researchers are raising concerns. These systems train on a wide range of data, including children’s data. Transparency and accountability remain limited, and data retention periods are long. 

“As a society, we need to weigh whether the potential gains in AI capabilities from training on chat data are worth the considerable loss of consumer privacy. And we need to promote innovation in privacy-preserving AI,” the Stanford team concluded. 

PeopleReign CEO Dan Turchin highlights that terms of service for all digital consumer products are intentionally opaque, to ensure “maximum flexibility.”

He has often said on the podcast that if you’re not paying, you are the product. Assume that anything you share with AI or any online service will be used in ways that may seem harmless, like ad targeting, but also in ways that could cause harm, including identity spoofing, location tracking, or even surveillance by bad actors. 

It is not practical to read and fully understand the complex legal language in terms of service. As a habit, you should opt out of data sharing for model improvement. Dan urges you to encourage your kids and loved ones to do the same. 

Consider how your data could be used against you. As consumers, we often accept the risk in exchange for free services. As you experience personalized content, stay aware of that tradeoff. 

This new era of widespread AI calls for greater awareness of the value of your data. AI only knows what you share, and we do have agency as humans.

Listener Spotlight

In this week’s mailbag, we highlight Michael in Dallas, whose favorite episode is #297 with Allison Baum Gates, Venture Capitalist at SemperVirens Venture Capital, On The Secrets To A Successful VC Career. 

🎧 You can listen to that excellent episode here!

We always enjoy hearing from listeners. Want to be featured in a future newsletter? Reply to this email and share how you listen and which episode has stayed with you the most.

Worth A Read 📰

Anthropic has once again stirred safety concerns, this time with Mythos. The nascent AI platform has drawn strong criticism, with many viewing it as a risk to broader cybersecurity systems.

In short, the system was so powerful that even its creators expressed concern about releasing it. 

As a result, Anthropic limited access to a small group of major players, hoping early trials would uncover potential flaws. The issue is that human creativity moved faster than expected. Now, Anthropic is rushing to investigate a possible breach. 

As this article explains, human hackers using AI may have outpaced Anthropic’s safeguards. What are the implications of this breach? This detailed explainer from The Guardian breaks down what Mythos is, why it is raising concern, and what could come next. 

For years, AI transformation was treated as a leadership initiative driven from the top down. But the latest data from Microsoft suggests something different: employees are already moving faster than their organizations.

In last week’s special episode, PeopleReign CEO Dan Turchin sits down with Matt Firestone, General Manager at Microsoft leading product marketing for Microsoft 365 Copilot and agents, to unpack Microsoft's 2026 Work Trend Index and what trillions of anonymized signals across the Microsoft 365 ecosystem reveal about how AI is reshaping work in real time.

Rather than focusing on AI as a future disruption, this conversation explores a more immediate question:

How do leaders adapt when employees are already redefining the way work gets done?

Across the episode, Dan and Matt discuss frontier firms, agentic AI collaboration, and why the organizations moving fastest are often the ones building cultures of experimentation, visibility, and continuous learning.

📣 Share your Thoughts and Leave a Review!

We'd love to hear from you. Your feedback helps us improve and ensures we continue bringing valuable insights to our podcast community. 👇

Until next time, stay curious!

We want to keep you informed about the latest developments in AI. Here are a few stories from around the world worth reading:

  • Meta will begin tracking workers’ clicks, mouse movements, and keystrokes, but the goal isn’t efficiency. Here’s what the company is really after.

  • This Ping Pong robot used AI to make history. At least that’s what its maker thinks, but some challenge the idea. 

  • What do university students think about AI? This survey reveals fascinating results and one pressing issue in particular.

That's a Wrap for This Week!

Everyone seems to be implementing AI, but most implementations are failing. That frustration adds fuel to the broader fear around AI and the future of work.

At the same time, it raises an important question: why is AI adoption not working? In today’s conversation, we explored an uncomfortable answer. Companies are trying to hold legacy systems and processes to today’s standards, and that approach does not hold up.

Instead, it is time to rethink everything from the ground up, starting with cleaning and organizing data. 

After all, everything works best with a clean working surface. 

If you liked this newsletter, share it with your friends!
If this email was forwarded to you, subscribe to our newsletter here to get it every week.