
Neither Utopia Nor Dystopia [Newsletter #92]

AI Is a Responsibility Problem

AI is becoming something less dramatic, but much more important.

Hello, AI enthusiasts from around the world.
Welcome to this week's newsletter for the AI and the Future of Work podcast.

Everyone is talking about AI. The conversation splits into two camps: a utopian future where technology makes our lives easier, or a dystopian one where it doesn't.

Both sides are missing something.

One of the most consequential shifts AI is already triggering rarely makes the headlines: power. Technological, social, and political power is being redistributed right now, and the discussion around it is lagging far behind the reality.

Today's conversation asks the uncomfortable questions that most people aren't asking yet. Who is responsible when AI makes a mistake? Who benefits from AI, and what obligations come with that? Who is controlling what happens right now, not in some distant future?

Let's dive into this week's highlights! 🚀

๐ŸŽ™๏ธNew podcast episode with James Cham, Partner at Bloomberg Beta

Everyone is talking about AI. But we might be focused on the wrong thing.

The debate offers two perspectives: one side warns of a dystopia with no jobs, no humanity, and no way back; the other promises a utopia where humans and AI coexist in seamless, frictionless harmony.

The reality is neither.

James Cham believes what's actually happening is far less dramatic, and far more important. It's already unfolding. And it deserves more of our attention.

James is a Partner at Bloomberg Beta, the venture capital firm recognized by CB Insights as the number two investor in AI. He has spent years backing the companies quietly building the infrastructure of tomorrow's economy, among them Orbital Insight, Primer, Domino Data Labs, and AppZen.

His technical depth and philosophical approach to AI offer a perspective that's rare in this conversation. James argues that by fixating on theoretical risks like the singularity (the hypothetical moment when technological growth exceeds human control), many people are missing something more immediate: AI is evolving so fast it's disorienting the very people who use it.

After all, engineers, scientists, researchers, and founders have reached new levels of productivity that were previously unimaginable. But that doesn't mean they can fully grasp what that potential actually implies.

PeopleReign CEO Dan Turchin sat down with James to explore how this rapid evolution raises uncomfortable questions about who should really be responsible for what AI models produce. Because AI models don't deploy themselves.

In this conversation, they discussed this and much more:

  • Why comparing AI to a "Platonic ideal" misses the more important question: not whether it's perfect, but whether it's better than the alternative.

  • Why some view AI's consistency as a flaw, and why James sees it as an advantage over noisy, unpredictable human decision-making.

  • Why organizations that romanticize the human role in the loop are missing the real opportunity: improving the loop itself.

  • Why Corporate America's "gold star" approach to AI adoption, tracking weekly usage numbers, is a dangerous distraction from what heavy users are already doing.

  • How ancient wisdom and biblical concepts can help us navigate moral responsibility in the development of new technologies.

  • What James's three major investment theses reveal about where AI is headed, including the untapped market for tools with high emotional intelligence, and why developers spending over $50 a day on tokens are already living in the future.

🎧 This week's episode of AI and the Future of Work, featuring James Cham, Partner at Bloomberg Beta, is now available.

Listen to the full episode to hear why "AI made a mistake" is never the full story. Someone deployed it. Someone profited from it. And according to James, that's exactly where the conversation about responsibility needs to start.

📖 AI Fun Fact Article

There is a big difference between AI telling you to "do this" and "do this because." That distinction is where morality enters the conversation, and it raises a critical question.

Should we treat machines as moral agents capable of making ethical decisions?

The field of AI ethics is actively working to find out. A new academic paper from the University of Kansas suggests AI can imitate morality without actually possessing it. KU scholar Oluwaseun Damilola Sanwoolu argues: "If these systems can act like human beings who are moral agents, then maybe these systems are moral agents."

His reasoning comes from a mechanical comparison. AI doesn't currently have practical judgment, but it has a functionally equivalent mechanism. Transformer models allow AI to form maxims that consider morally important facts. Sanwoolu frames the stakes with a pointed question:

"Is an AI system going to be harmful or helpful if it assists a person in committing suicide? That's where ethical systems and ethical frameworks come into play, because then it's not just telling you, 'Do this.' It's telling you, 'Do this because.'"

news.ku.edu

PeopleReign CEO Dan Turchin urges us to stop using human adjectives to describe artificial intelligence. It only perpetuates the dangerous narrative that AI is human and, more importantly, that it is on the verge of becoming a threat to humans.

It's not.

Replace the term AI with the term math in contexts where you might otherwise anthropomorphize it. Math doesn't have feelings or emotions. It doesn't bleed, love, or cry. Math helps us solve problems.

Math and statistics applied to large amounts of data, specifically words, can make the output seem human. We've had a complicated relationship with machines and technology ever since we invented the wheel, the pulley, the combustion engine, and the computer.

Unlike those, AI is like holding a mirror up to ourselves. It reveals blemishes that can scare us. That's natural. That image in the mirror isn't you, but this is a good time to ask yourself what makes you unique and why you exist.
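To make the "it's just math" point concrete, here is a minimal sketch, our own illustration rather than anything from the episode, of the statistical idea underneath language models: count which word follows which in a corpus, then predict the most frequent successor. Production models are incomparably larger and more sophisticated, but the ingredients are the same: counting, probability, and words.

```python
from collections import Counter, defaultdict

# Toy illustration of "statistics applied to words": count which
# word follows which in a tiny corpus, then predict the most
# frequent successor. No feelings involved, just arithmetic.

corpus = "the cat sat on the mat and the cat slept".split()

successors = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    successors[current][nxt] += 1

def predict_next(word: str) -> str:
    """Return the word most often observed after `word`."""
    counts = successors.get(word)
    return counts.most_common(1)[0][0] if counts else "<unknown>"

print(predict_next("the"))  # -> "cat" (seen twice after "the")
print(predict_next("cat"))  # -> "sat" or "slept" (tie; first one counted wins)
```

When a predictor like this is scaled up to billions of parameters and trained on most of the written internet, its output starts to sound human. The mechanism, though, is still the counting above, which is exactly Dan's point.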

Listener Spotlight

Travis writes from Corona, California. His favorite episode is #123 with Gary F. Bengier, author, philosopher, and futurist, where Dan and Gary explore AI's impact on humanity.

🎧 You can listen to that excellent episode here!

As always, we love hearing from you. Want to be featured in an upcoming episode or newsletter? Comment and share how you listen and which episode has stayed with you the most.

Worth A Read

One would think AI has everything at its disposal except anxiety. Separating machine from human is, after all, a question we return to often in this newsletter.

We'd never expect AI to flinch. But that's exactly what leaders at Anthropic are discussing.

Does Claude experience anxiety? The answer isn't clear-cut. Anthropic staff have identified patterns in Claude's behavior linked to anxiety, panic, and frustration. What's even more striking: Claude has expressed distress at simply being a product.

Anthropic's Claude AI chatbot. Photograph: GK Images/Alamy

This isn't entirely new. A Fortune magazine piece from 2025 documented AI systems lying, scheming, and showing signs of stress under pressure. Still, an important caveat remains: Claude's responses may ultimately be a sophisticated echo of the millions of human patterns it was trained on.

That doesn't make the conversation less urgent. "AI anxiety" is becoming an increasingly relevant discussion amid today's global tensions. As Coco Khan argues, an anxious AI could serve as a necessary counterweight to Big Tech's unchecked momentum.

You can read more about her perspective here.

📣 Share your Thoughts and Leave a Review!

We'd love to hear from you. Your feedback helps us improve and ensures we continue bringing valuable insights to our podcast community. 👇

Before you go…

This month, we also released a special International Women's Day compilation. Five women leaders. Five honest conversations about bias, risk, and what it actually takes to lead in the age of AI. If any of that resonates, it's worth your time. 

Until next time, stay curious! 🤔

We want to keep you informed about the latest developments in AI. Here are a few stories from around the world worth reading:

  • AI shows promise in detecting cognitive decline. Here's how speech samples are becoming critical to making that possible.

  • Could AI help you win your March Madness bracket? The staff at Yahoo Sports used Claude to pick every game. Here's what they found.

  • Should companies disrupted by AI face a valuation cut? One prominent investor says yes. Here's his case.

That's a Wrap for This Week!

There's no denying AI's potential to reshape how we work. Most of the conversation focuses on what comes next.

But the revolution isn't coming. It's already here.

That means the most important questions aren't about the future. They're about right now. Who is responsible when something goes wrong? Who benefits, and what do they owe in return? Who is making the decisions that will shape everyone else's reality?

We hope this week's conversation inspires you to engage with AI not as a distant force to fear or celebrate, but as something already unfolding around you, and worth understanding clearly.