Who’s Really in Control as AI Agents Rise? [Newsletter #96]
Control becomes the real challenge
Hello, AI enthusiasts from around the world.
Welcome to this week's newsletter for the AI and the Future of Work podcast.
When we create new systems or enter a new technological era, we want to understand our limitations, whether they are self-imposed or shaped by external factors. The reality is that we’re not comfortable with not knowing how far we can push our technology.
We want to know this because we aim to reach those limits, and with AI, it’s no different. However, our understanding of limitations in the AI era may need to be completely rethought.
We’re not ready for what’s coming, and more importantly, we don’t know who is. That’s what makes taking the next steps in AI development feel like a completely new experience.
In today’s conversation, we explore these dilemmas and remind ourselves that some are more than willing to take the risk to find out what’s next.
Let’s dive into this week’s highlights 🚀
New Podcast Episode With Oren Michels, Barndoor AI Co-Founder and CEO
Who is in control?
It’s the question many are asking about AI agents today. And it’s one that can send shivers down your spine, especially when you look at the current reality. We’re relying more and more on AI agents to take over parts of our work, and even parts of our lives.
The challenge is that the answer doesn’t fit within today’s terms. For Oren Michels, it comes down to understanding what AI agents are, and what they are not. Oren is an entrepreneur, investor, board member, and advisor to tech startups across continents.
He is also the co-founder and CEO of Barndoor AI, the control plane for agentic AI, and the founder who previously helped define the API management category with Mashery, acquired by Intel in 2013.
His résumé doesn’t stop there. He is also a Tony-nominated Broadway and Off-Broadway producer, with credits including Romeo + Juliet and Good Night, and Good Luck, starring George Clooney.
According to Oren, part of our dilemma with control is that AI agents are not like humans. They don’t understand what risk or a damaged reputation means. They don’t operate with fear of consequences, and they shouldn’t. AI doesn’t need these traits.
This creates an entirely new category of problem. If your agents can already write to your CRM, interpret your instructions, and act without life experience or fear of consequences, who is actually in control?
PeopleReign CEO Dan Turchin sat down with Oren Michels to discuss the need for guardrails, but with context and conditions we haven’t yet imagined.
The need is clear: a control system that defines what agents can do. But who gets to create this system? Beyond that, this uncertainty has already limited how we use AI. Our fear of the unknown has made us less willing to experiment.
In this conversation, we discuss this and more:
Securing AI agents is not like managing APIs. Traditional security and identity access tools were never designed to handle what agents can do.
Most so-called agentic AI is still glorified robotic process automation. So what will it take to truly unlock enterprise value?
How Barndoor AI’s “least privilege” framework for agents works, and why its permission logic goes far beyond the identity of the human using the tool.
You’re one probabilistic misfire away from a catastrophic outcome if a single AI agent has delete access to your CRM. And the ultimate responsibility always falls back on you.
The BYO AI parallel to BYOD, and why well-meaning employees using personal AI tools with company data may force the enterprise governance moment no one is ready for.
Why the same instinct that took Oren from API infrastructure to Broadway, and back to enterprise AI, may be exactly the mindset the agentic era demands from its builders.
Listen to the full episode to understand why guardrails are necessary, and why they are about more than limiting access. They must become an evolving set of conditions, shaped by almost endless specific contexts, making agent security one of the most challenging problems ahead.
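Barndoor’s actual implementation isn’t described in this newsletter, but the “least privilege” idea from the episode can be sketched in a few lines: an agent gets its own explicit grant list, defined independently of (and narrower than) the human it acts for, so a probabilistic misfire can’t reach actions like delete that were never granted. The names below (`AgentPolicy`, the `crm:*` action strings) are hypothetical, for illustration only.

```python
from dataclasses import dataclass

# Hypothetical sketch of agent-scoped least privilege: the agent's grants
# are defined separately from the human user's role, so an agent acting
# for an admin still cannot perform actions it was never given.
@dataclass(frozen=True)
class AgentPolicy:
    agent_id: str
    allowed_actions: frozenset  # e.g. {"crm:read", "crm:write"}

def authorize(policy: AgentPolicy, action: str) -> bool:
    # Only actions explicitly granted to the agent pass, regardless of
    # what the human principal could do directly.
    return action in policy.allowed_actions

# An agent that summarizes CRM records needs read access and nothing else.
crm_agent = AgentPolicy("crm-summarizer", frozenset({"crm:read"}))

print(authorize(crm_agent, "crm:read"))    # True
print(authorize(crm_agent, "crm:delete"))  # False: delete was never granted
```

The design choice worth noticing is the default: anything not on the grant list is denied, which is what keeps a single misfire from becoming the catastrophic outcome described above.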
📖 AI Fun Fact Article
Enrique Dans writes on Medium that the next big shift in AI lies in how LLMs connect to the real world. The key driver behind this shift is the Model Context Protocol, or MCP.
MCP was developed by Anthropic and is now in the hands of the Agentic AI Foundation, under the Linux Foundation. Together, these groups aim to define the principles for the next architecture of the internet, and that has the potential to change our lives.
The idea behind MCP is that models are no longer just receivers of instructions. They become active systems, capable of executing tasks both simple and complex. That’s why major players like OpenAI, Google, and Microsoft are betting that MCP will shape what comes next.
But every new protocol brings risk. Right now, there are still open questions around security, interoperability, and how competing protocols will evolve. Answering those questions will require us to think ahead, starting with how fast and how widely AI will be adopted as we move from graphical interfaces to agents and new standards.
PeopleReign CEO Dan Turchin considers giving AI agency to perform tasks on our behalf foundational. But as always, we need to be equally aware of what could go wrong, as well as what could go right.
Malicious actors are already eyeing the many ways MCP can be hacked, and are hoping it’s adopted broadly before current authorization and auditability vulnerabilities are limited or eliminated. And there are many.
We can’t afford to authorize agents to browse and purchase items with access to our credit cards and personal information without knowing they can’t be spoofed or have data siphoned away mid-transaction.
The current state of MCP is analogous to the early days of HTTP, before SSL encryption was introduced decades ago.
For those with gray hair, we’ve seen these patterns exploited by bad actors for centuries, going back to the early days of industrialization, when train hijackings and bank robberies were common.
As a rule, let’s commit to spending as much time innovating around personal safety and responsible AI use as we do enabling new agentic behaviors.
Listener Spotlight
Bryce in Sydney, Australia, chose episode #87 as his favorite. It’s a conversation with Mark McCrindle, author, futurist, and popular TEDx speaker, focused on AI and the future of the job market.
🎧 You can revisit that episode here.
We always enjoy hearing from listeners. Want to be featured in a future newsletter? Reply to this email and tell us how you listen and which episode has stayed with you.
Worth A Read
There’s one question therapists may start asking you. Not out of judgment, but out of empathy. According to a paper published in JAMA Psychiatry, a journal of the American Medical Association network, that question is simple: “How are you using AI?”
The paper, which you can read here, doesn’t try to define AI as good or bad. Instead, it encourages both therapists and patients to better understand the bonds people are forming with AI to cope with mental health challenges.
As more people turn to chatbots to manage anxiety or depression symptoms, mental health professionals see value in helping them understand the difference between a conversation with a chatbot and one with a real person. AI may be available 24/7, but it comes with limits. Patients are human. Chatbots aren’t.
This NPR article explores how therapists are responding to this growing use of AI in mental health, and why some see it as an opportunity.
📣 Share your Thoughts and Leave a Review!
We'd love to hear from you. Your feedback helps us improve and ensures we continue bringing valuable insights to our podcast community. 👇
Until next time, stay curious! 🤔
We want to keep you informed about the latest happenings in AI.
Here are a few stories from around the world worth reading:
OpenAI CEO Sam Altman is backing a bill that would shield AI labs from liability in critical cases. Here’s more.
Why do we tell ourselves horror stories about AI? This article explores the answer.
Anthropic’s Mythos AI can spot weaknesses in almost every computer on the planet. Here’s why that poses a major risk.
That's a Wrap for This Week!
This week’s conversation is one we need to have. Not only now, but again over time.
AI will keep evolving. So will our desire to control who has access, who decides the next step, and who doesn’t.
The challenge remains the same. We don’t yet know what control will look like in the future. Some see that uncertainty as a risk. We see it as an opportunity to explore and find the answer.
We hope this week’s conversation inspires you to embrace uncertainty, explore what control could look like, and stay open to what’s still unfolding.
If you liked this newsletter, share it with your friends!
If this email was forwarded to you, subscribe to our newsletter here to get it every week.

