
Will AI Agents Redefine Automation? [Newsletter #60]

Balancing Autonomy and Guardrails

Hello, AI enthusiasts from around the world.

Welcome to this week’s newsletter for the AI and the Future of Work podcast.

If AI is going to be a part of everything we do, we need to address the big question: what could go wrong?

It may not be the most exciting topic, but discussing how to keep AI in check is essential. The good news is that responsible guardrails are possible.

Today’s conversation reminds us that feeling uncertain about AI is normal. The way to ease that uncertainty is by addressing it directly.

Let’s dive into this week’s highlights! 🚀

🎙️ New Podcast Episode With Bhaskar Roy, Chief Officer of AI at Workato

We can’t talk about autonomy and AI agents without also asking what could go wrong.

These agents operate inside workflows that manage critical data. So how do we make sure the right guardrails are in place?

Bhaskar Roy believes the key is to reduce complexity and focus on skills. What does that mean in practice?

The less constrained an AI system is, the more likely it is to hallucinate. The solution, according to Bhaskar, is to train agents with deterministic skills and limit what they can do.

Picture a clear conversation: “Agent, you have skills A, B, C, and D. That’s it.”
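The idea of restricting an agent to an explicit, deterministic skill set can be sketched in a few lines of code. This is an illustrative toy only, not Workato's implementation; the names `SkillRegistry` and `dispatch` are hypothetical. The point is that anything outside the allow-list is refused outright rather than improvised by the model.

```python
class SkillRegistry:
    """Holds the deterministic skills an agent is allowed to invoke."""

    def __init__(self):
        self._skills = {}

    def register(self, name, fn):
        """Add a named skill to the agent's allow-list."""
        self._skills[name] = fn

    def dispatch(self, name, *args, **kwargs):
        """Run a skill by name; refuse anything not on the allow-list."""
        if name not in self._skills:
            raise PermissionError(f"Skill '{name}' is not permitted")
        return self._skills[name](*args, **kwargs)


registry = SkillRegistry()
registry.register("lookup_order", lambda order_id: {"id": order_id, "status": "shipped"})

# An allowed skill runs normally.
print(registry.dispatch("lookup_order", "A123"))

# An unlisted skill is blocked, no matter what the model asks for.
try:
    registry.dispatch("delete_database")
except PermissionError as err:
    print(err)
```

The guardrail here is structural: the model can only request skills, and the registry, not the model, decides what actually executes.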

But that’s not the full answer. Bhaskar emphasizes that responsibility doesn’t stop once an agent is deployed.

Even if hallucinations decrease, AI still acts with a degree of autonomy. It’s up to us to monitor how it evolves, refine the guardrails, and constantly reassess which human skills are needed now and in the future.

This approach is central to Bhaskar’s role as Chief Officer of AI at Workato, a company where AI is embedded into everything they do.

Workato is a leader in workflow automation, serving over 11,000 customers with the ability to automate 40,000 tasks per second.

Bhaskar’s background includes co-founding Qik, which Skype acquired for $150 million shortly before Microsoft bought Skype for $8.5 billion. He also held leadership roles at Playphone and beyond.

So what does this kind of scale mean for employees, companies, and the future of work?

That’s exactly what Bhaskar unpacks in his conversation with PeopleReign CEO Dan Turchin, where the two discuss how we should approach AI risk and how to bring those conversations into the workplace.

The episode covers this and more:

  • AI can improve lives, not just profits, by supporting nonprofits and helping them take a leap forward with tech.

  • Automation is evolving fast; it’s no longer just a support tool. It’s now powering core business functions like onboarding and order processing.

  • AI can be intimidating, and that fear is valid. But it also creates opportunities to rethink roles and build new skills.

  • Customer engagement still matters. We can’t expect AI agents to handle it all.

🎧 This week’s episode of AI and the Future of Work, featuring Bhaskar Roy, inspired this issue.

Listen to the full episode to hear more about Bhaskar’s practical approach to AI leadership and how companies can build a smarter, more human-centered future of work.

📖 AI Fun Fact

Reddit has been at the center of the conversation since the generative AI boom began in late 2022. As Ashley Capoot writes at CNBC.com, the platform is a prime target thanks to its vast archive of user-generated content, which is valuable for training large AI models.

But Reddit is now taking legal action. The platform is suing Anthropic for breach of contract and unfair competition, alleging that Anthropic used Reddit’s content without permission.

According to the lawsuit, Anthropic trained its models on Reddit user data without obtaining consent. Reddit claims this unauthorized use has harmed the company.

The filing states: "For its part, despite what its marketing material says, Anthropic does not care about Reddit's rules or users: it believes it is entitled to take whatever content it wants and use that content however it desires, with impunity."

At the same time, Reddit announced a partnership with OpenAI in May and has a similar agreement with Google. Read more about those deals here.

Reddit’s goal with the lawsuit is to seek damages and compel Anthropic to honor its legal and contractual responsibilities.


Here's PeopleReign CEO Dan Turchin's commentary:

The broader issue here is what constitutes fair use, which we thought had been fully adjudicated in the courts a decade back in the context of web search. AI has spawned a whole new set of questions and debates about what AI model vendors are allowed to use with and without paying licensing fees.

Reddit sells its content indirectly via ads. Reddit users understand their data has value and agree to license it to Reddit in exchange for a free service they choose to use.

Content owners should be allowed to decide whether or not AI can be trained on their content. The market should decide the value of that content. Users should be aware of how and when their content is used. It's essential that, as an industry, we make faster progress. 

Content owners have always been allowed to monetize their content. Those same principles must apply to AI.

Listener Spotlight

Lauren, based in Austin, Texas, leads marketing at a tech startup and tunes in during her commute.

Her favorite episode? The February 2024 conversation with Atif Rafiq, award-winning author and former executive at McDonald’s and Amazon. In it, Atif breaks down the decision sprint process to help teams move faster and make better choices.

You can listen to that episode right here.

We always enjoy hearing from our listeners. Want to be featured in an upcoming newsletter or episode? Drop a comment and let us know how you listen and which episode has stayed with you the most.

Worth A Read📚

Every year, millions of people run into challenges with their insurance claims. Many are denied. The process is slow, frustrating, and hard to navigate.

One company is working to change that, with the help of AI.

Source: NBC News

It started with a mysterious lump that turned into an aggressive cancer. One family felt powerless. Then, by chance, they met Zach Veigulis, a data scientist who wanted to use AI to help people fight back against denied claims.

What happened next showed how AI could become a real tool for those struggling with health insurance.

You can read more about the story here.

📣 Share your Thoughts and Leave a Review!

We want to hear what you have to say! Your feedback helps us improve and ensures we continue to deliver valuable insights to our podcast listeners. 👇

Until next time, stay curious! 🤔

We want to keep you informed about the latest happenings in AI. Here are a few stories from around the world worth reading:

  • Distillation isn't just for spirits. Here's how it’s helping make AI models smaller and cheaper.

  • Frontline workers will now get AI support on the job, thanks to this new initiative.

  • Netflix’s CEO shares how generative AI was used in one of their latest shows and what it means for the future of entertainment.

That's a Wrap for This Week!

This week’s conversation is a reminder that we can’t talk about AI without covering the full picture. That means celebrating what works and addressing what doesn’t.

We talked about guardrails. Why they matter. And why building them doesn’t have to be complicated.

The key is clarity, consistency, and strong human oversight.

We hope the episode gave you ideas for how to bring these principles into your own work.

Until next time, keep questioning, keep innovating, and we’ll see you in the future of work 🎙️

If you liked this newsletter, share it with your friends!
If this email was forwarded to you, subscribe to our LinkedIn newsletter here to get the newsletter every week.