Critical Thinking over Code [Newsletter #53]
Raising Responsible AI Leaders
Hello, AI enthusiasts from around the world.
Welcome to this week’s newsletter for the AI and the Future of Work podcast.
Your parents and grandparents might already be using AI.
Maybe they ask it questions. Maybe they rely on it daily. That’s exciting. But it can also feel unsettling.
AI is no longer just for a niche group of experts.
It’s becoming part of how we search, learn, and make decisions.
So the question becomes:
How do we help everyone not just use AI, but understand the risks that come with it?
This week’s episode is about leadership, responsible AI adoption, and how we can close the gap between innovation and trust.
Let’s dive in. 🚀
🎙️ New Podcast Episode With Tess Posner, AI4ALL CEO.
Your parents might be seventy, but they probably know their way around tech.
They can use AI for DIY tips, or even for much more sensitive matters like medical diagnoses or financial advice.
AI is no longer niche. It’s everywhere.
That’s exciting, but it also raises important questions.
How do we make sure people trust the right information?
How do we help them spot hallucinations or double-check AI-generated answers?
Answering these questions is only one part of Tess Posner's daily work.
She’s the CEO of AI4ALL, an Oakland-based nonprofit working to build the next generation of inclusive AI leaders.
The organization began in 2015 as a summer outreach program at Stanford, created by Dr. Fei-Fei Li, Dr. Olga Russakovsky, and Dr. Rick Sommer. Their mission: introduce high school girls to AI.
Tess joined in 2017 as founding CEO and brought with her more than a decade of work in digital inclusion across the public and private sectors.
She was named a 2020 Brilliant Woman in AI Ethics Hall of Fame Honoree.
She’s a graduate of St. John's and Columbia. She also has a successful musical career.
Her roles may have changed, but one thing hasn’t: Tess believes digital transformation only works when inclusive, dynamic people lead the way.
Dan Turchin, PeopleReign CEO, sat down with Tess to discuss why education is crucial in the era of AI, and much more:
AI literacy is becoming more important every day, whether we’re using AI for simple tasks or more complex decisions.
Ethics is no longer optional. At AI4ALL, it’s a core part of how they teach the next generation of AI leaders.
Tess shared some of the big questions students ask, like “Which communities are most affected?” or even “Should we build this at all?”
AI can go in two directions. It can deepen inequality, or it can be a tool to drive real change. Tess shares the story of Maya, an AI4ALL student who became a responsible AI researcher.
It’s time to move past outdated thinking. Instead of penalizing AI use in schools, we should teach people how to use it to encourage curiosity and critical thinking.
🎧 This week’s episode of AI and the Future of Work, featuring Tess Posner, inspired this issue.
🎧 Listen to the full episode to hear why she believes we’re on a wild ride — and how we can make sure it leads to a better future.
📖 AI Fun Fact Article
Swapping LLMs isn't plug-and-play, even though many assume they’re interchangeable.
As Lavanya Gupta writes in VentureBeat, avoiding LLM lock-in is nearly impossible when building high-quality, resilient apps.
Her article walks through hands-on comparisons and real-world tests to show what can happen when switching between AI model families.
Cross-model migration reveals several layers of complexity.

Some issues are small — tokenizer quirks or formatting preferences.
Others, like differing response structures or context window limits, can make migrations a real hassle.
Tokenization differences can affect how inputs are processed and priced.
Context windows vary too. While some models handle up to 128,000 tokens, Gemini can support over 1 million.
This is why companies like Google, Microsoft, and AWS are investing in model-migration tools.
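To make the tokenization point concrete, here is a minimal sketch, not taken from Gupta's article: it compares how two real OpenAI encodings (cl100k_base and o200k_base, via the tiktoken package) count the same prompt, then checks that count against context-window limits that are purely illustrative placeholders.

```python
# A rough, illustrative sketch (not from the VentureBeat article): the same
# prompt tokenizes to different lengths under different encodings, which
# changes both cost and how close you sit to a model's context window.
# Requires the tiktoken package (pip install tiktoken).
import tiktoken

prompt = "Summarize the last 30 days of support tickets and flag recurring issues."

# Two real OpenAI encodings; other model families ship their own tokenizers,
# so token counts (and therefore pricing) can shift after a migration.
encodings = {
    "cl100k_base": tiktoken.get_encoding("cl100k_base"),
    "o200k_base": tiktoken.get_encoding("o200k_base"),
}

counts = {name: len(enc.encode(prompt)) for name, enc in encodings.items()}
for name, n_tokens in counts.items():
    print(f"{name}: {n_tokens} tokens for the same prompt")

# Hypothetical context-window limits, for illustration only; always check
# each provider's documentation for the real numbers.
context_windows = {"model_a": 128_000, "model_b": 1_000_000}

for model, limit in context_windows.items():
    headroom = limit - max(counts.values())
    print(f"{model}: about {headroom:,} tokens of headroom left")
```

In a real migration audit, you would run this kind of comparison over a representative sample of production prompts, and pull the actual window limits and pricing from each provider's documentation.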
PeopleReign CEO Dan Turchin reminds us that LLMs, and specifically AI agent frameworks, come pre-configured to handle many tasks, but not necessarily to handle them perfectly.
User expectations are high.
We tend to compare every agentic application, where AI performs a task on its own, to how a human would perform that same task.
So before evaluating how to automate any use case, we should always ask: how would a human do this?
We should never let enthusiasm about bot capabilities distract us from the human experience.
Bots are prediction engines, but always remember they lack our innate human sensibilities.
When you prototype your own AI app, you'll quickly understand why the real future of work is humans, with a little bit of prompting help from AI agents.
Listener Spotlight
Pamela is a programmer for a defense contractor in Fort Wayne, Indiana. She listens to the podcast while gardening.
Her favorite episode is last year’s conversation with venture capitalist Allison Baum Gates, who shared her scrappy path into investing and how anyone can break into venture.
You can listen to that conversation here.
As always, we love hearing from you. Want to be featured in an upcoming episode or newsletter? Send us a quick message and tell us how you listen — and which episode has made the biggest impact.
📚 Worth a Read
What would you do with 26 more minutes in your day?
It’s a question many of us would love to answer.
That’s exactly the question thousands of workers in the United Kingdom got to answer. The Government Digital Service ran a three-month trial with 20,000 employees using Microsoft’s M365 Copilot.

The results were fascinating. Saving 26 minutes a day was just the beginning!
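A quick back-of-the-envelope calculation (our arithmetic, not a figure from the report): 26 minutes a day across roughly 230 working days adds up to about 100 hours, or close to two and a half standard working weeks per year.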
You can read more about this real-world AI case here.
We'd love to hear your thoughts on this real-world trial; send us your comments! Your feedback helps us improve and ensures we continue to deliver valuable insights to our podcast listeners. 👇
👋 Until Next Time: Stay Curious
We want to keep you informed about what’s happening in the world of AI.
Here are a few recent stories worth your time:
The FDA launched an AI tool to speed up scientific reviews and improve how research is published.
New research shows AI didn’t just boost productivity by 4X—it also helped increase wages.
AI can support marketing tasks, but brand loyalty still depends on human creativity.
That's a Wrap for This Week!
This week’s conversation is a reminder that AI moves fast, but if it leaves people behind, it fails its purpose.
Tess Posner wants to change that by closing the existing digital divide. She challenged us to use AI to accelerate solutions, not inequality, and we hope she inspired you to do the same.
You can lead in AI, too.
It starts with asking the right questions and putting people first.
Keep questioning, keep innovating, and we'll see you in the future of work! 🎙️✨
If you liked this newsletter, share it with your friends!
If this email was forwarded to you, subscribe to our LinkedIn newsletter here to get it every week.