
When AI Falls Flat, Blame the Feed [Newsletter #67]

Human stories drive real impact

Hello, AI enthusiasts from around the world.

Welcome to this week's newsletter for the AI and the Future of Work podcast.

Every tool is only as powerful as our ability to use it. Mastery takes practice, and it also requires realistic expectations about what we want the tool to deliver.

AI is no different. Large language models and other AI systems have the potential to transform our lives, but only if we learn how to master them.

This week’s episode explores how AI depends on our ideas, not the other way around, and what that means for us as users.

Let’s dive into this week’s highlights! 🚀

🎙️ New Podcast Episode With Sean Williams, AutogenAI CEO

Electricity.
The internet.
Both were breakthroughs that changed the course of society.

Will AI be the same? Some argue that AI and large language models (LLMs) mark another foundational shift in human capability. LLMs have already taken on countless tasks and improved efficiency in our daily lives. But there’s an uncomfortable truth we can’t ignore.

No LLM will help you if you don’t use it properly. That’s where the biggest misconception lies, says Sean Williams, CEO of AutogenAI, a proposal-writing platform he founded in May 2022.

Sean draws from his early experience as a proposal writer. AI can cut weeks of drafting into hours, but it can’t replace human creativity. It needs it. Human stories are what resonate. The criticism of LLMs for producing bland, repetitive writing often misses the point. The problem is not the tools, but the source material. If there are too few ideas stretched across too many words, the output will always fall short.

PeopleReign CEO Dan Turchin sat down with Sean to explore why, in many cases, the issue isn’t with the LLMs. Sometimes, there’s simply nothing to say.

This conversation covered much more, including:

  • Why context is critical for LLMs to produce accurate results.

  • Why many jobs are not destroyed but instead move to new domains, as history has shown many times.

  • How AI should support values like fair competition, transparency, and accountability in democratic societies.

  • Why AI is not a threat but a tool, one that requires oversight like other transformative technologies before it.

  • Why short-term profit at society’s expense is unsustainable. New technologies only thrive when they serve the majority.

🎧 This week’s episode of AI and the Future of Work, featuring Sean Williams, inspired this issue.

Listen to the full conversation to hear more of Sean’s perspective on why AI is revolutionary yet still demands our attention—to unlock its potential while reducing the risks.

📖 AI Fun Fact Article

AI companies are starting to win the copyright fight. Blake Montgomery reports in The Guardian that tech companies have scored several victories over their use of copyrighted text to train AI systems.

Anthropic recently won a case where a U.S. judge ruled that training its AI on books without permission did not breach copyright law. The judge compared it to "a reader aspiring to be a writer." In a surprising move to comply with copyright, Anthropic purchased and destroyed 7 million physical books.

The next day, U.S. District Judge Vince Chhabria in San Francisco said authors had not provided enough evidence that Meta’s AI would cause “market dilution” by flooding the market with similar work. Pending lawsuits involving Disney, NBCUniversal, Midjourney, Sony, and others will test how these rulings apply to music, images, and movies.

Source: The Guardian

To PeopleReign CEO Dan Turchin, it all feels very Ray Bradbury. He encourages everyone to revisit Fahrenheit 451. His point is simple: creators deserve to be compensated for their work. If an AI-generated piece is derived from copyrighted material, the original creator should be paid. AI-generated synopses and derivative works will inevitably affect sales.

There is already some progress toward this goal. Cloudflare, for example, has introduced a service that lets content owners decide if AI companies must pay to index their work. It’s a start.

As Napster learned back in 2001, technology should help creators reach wider audiences and earn income from their work. The same principle of labor in exchange for capital applies in every industry. Kudos to Cloudflare and the other innovators working to protect human creativity, regardless of what the courts decide.

Listener Spotlight

Tanisha is a librarian in Tampa, Florida. She listens to the podcast while re-stocking shelves.

Her favorite episode is #325 with Dr. Brandeis Marshall about unmasking hidden bias in AI. You can listen to that excellent episode here.

We always love hearing from you. Want to be featured in an upcoming episode or newsletter? Just comment and let us know how you listen and which episode has stayed with you the most.

Worth A Read📚

AI-generated videos, known as “AI slop,” have become a YouTuber’s dream content. They rack up hundreds of thousands of views and are extremely easy to make.

The problem? People hate them. They generate so much dislike that YouTube is making significant efforts to stop them from profiting, as you can read here.

Source: Pixabay

These videos are the newest unwanted consequence of AI’s revolution. They can be misleading, confusing, or downright senseless, and they’re flooding social media feeds.

Can we do something to stop them? The answer isn’t so straightforward.

Learn more about the fascinating case of AI slop here.

Bonus Time

What can be predicted is better left to machines 💻. What requires judgment or empathy is better left to humans 🧠.

PeopleReign CEO Dan Turchin recently talked with the Open Service Community (OSC) about the ways in which AI-based tools can and should adapt to the way humans work, and the importance of practicing Responsible AI.

Watch Here:

📣 Share your Thoughts and Leave a Review!

We want to hear what you have to say! Your feedback helps us improve and ensures we continue to deliver valuable insights to our podcast listeners. 👇

👋 Until Next Time: Stay Curious

We want to keep you informed about the latest in AI. Here are a few stories from around the world worth reading:

  • There’s growing concern about AI-enabled cheating, but this article argues that the fears may be misplaced.

  • Food waste remains a critical problem. See how one of the world’s largest supermarket chains is using AI to reduce it.

  • Healthcare systems are redefining the roles of nurses and physicians, framing them as “healthcare quarterbacks.” Here’s how.

That's a Wrap for This Week!

This week’s conversation reminds us that AI tools are only as useful as the direction humans give them. They can’t solve problems if they don’t know what those problems are.

That’s why our stories and ideas are more valuable than ever.

We hope today’s discussion inspires you to rethink how you use these tools, both now and in the future.

Until next time, keep questioning, keep innovating, and we’ll see you in the future of work 🎙️

If you liked this newsletter, share it with your friends!
If this email was forwarded to you, subscribe to our LinkedIn newsletter here to get it every week.