AI Bias and Women in STEM [Newsletter #93]
AI’s Diversity Problem Is Massive
Hello, AI enthusiasts from around the world.
Welcome to this week's newsletter for the AI and the Future of Work podcast.
The future of AI might already be biased. To understand why, we need to go back to how offices looked in the 1980s.
At that time, it was common for teams, divisions, and even entire companies to have no women on staff.
You might expect things to look very different 40 years later. But the reality is more complex.
In fact, there were more women in STEM fields back then than there are today. That gap matters more than we often realize. The lack of diversity doesn’t only affect workplaces. It shapes the systems we’re building and the future they will create.
This week’s conversation explores the consequences of having too few women in AI leadership, and why it’s a problem we need to take seriously.
Let's dive into this week's highlights! 🚀
🎙️ New podcast episode with Lisa Davis, author and CIO
What do you think the main problem with AI is?
It's not that it's taking away jobs.
It might not even be energy consumption.
It's bias. And it directly shapes the systems we're building right now.
AI is becoming structural in the way we do business and in our personal lives. It's evolving fast, and that evolution is happening largely without input from minorities and other underrepresented groups.
If we don't course-correct, the systems we build will reflect that gap, not as an accident, but by design.
To make matters worse, this is already happening.
STEM is becoming less diverse, even in 2026, when the trend should be the opposite.
Women accounted for 34% of the STEM workforce in the 1980s. Today, that number is 22%, and there are no clear signs of change.
Unfair? Yes.
Dangerous? Definitely.
As Lisa Davis explains, AI’s rapid growth raises a critical issue: if the people leading the AI revolution don’t represent a diverse population, the systems they build won’t reflect society.
Lisa is a technology executive who has served as CIO and tech leader for some of the world’s most complex organizations, including Intel, Blue Shield of California, the U.S. Marshals Service, and the Department of Defense.
She now focuses on shaping the next generation of leaders and advocating for women and diverse talent in STEM through her board work, executive coaching, and her forthcoming book, The Only Woman in the Room: How to Win in a Workplace Still Built for Men.
PeopleReign CEO Dan Turchin sat down with Lisa to explore the core problem behind AI bias. Lisa is direct about it and lays out an uncomfortable truth.
If women are missing from AI leadership, then the products and solutions we create might not work for half of the world’s population.
In this conversation, we discuss:
Why the decline in women’s representation in STEM is a crisis for the future of AI, not only for the workplace, and why the real risk isn’t AI itself, but the leadership teams making decisions about it.
How structural systems that were never designed for women to thrive need to be redesigned, not as a social gesture, but as a business imperative.
Why current corporate layoffs are being wrongly attributed to AI, and what leaders need to start saying out loud.
How girls and young women face ongoing challenges, from dropping out of math and science in middle school to having their leadership potential limited by organizations, and what parents and companies can do to intervene earlier.
What Lisa believes women who reach executive roles must do differently, and why most don’t.
🎧 This week's episode of AI and the Future of Work, featuring Lisa Davis, author and CIO, is now available.
Listen to the full episode to hear how Lisa outlines a different path to increasing diversity in AI, and why leaders must intentionally change their approach to unlock productivity and new ways of working.
📖 AI Fun Fact Article
AI makes us “smarter” faster. It pushes information into our minds and even offers insights.
But does that make us wiser? No.
No matter how much information AI helps us process, it doesn’t build judgment. It doesn’t improve discernment.
So what does?
Wisdom comes from reflection, not computation. That’s the idea Dr. Manish Chopra shares in a McKinsey article on the role of meditation.
Meditation helps close the gap between AI’s constant flow of information and the slower, more deliberate nature of judgment. As Dr. Chopra puts it, in a world shaped by algorithms, the scarcest resource isn’t data. It’s discernment.
When reading this article, PeopleReign CEO Dan Turchin could only think of one word: Yes!
Dan insists we need more of this!
Purpose doesn’t come from algorithms, and fulfillment can’t be measured in tokens per watt. Being confronted with machines that seem intelligent creates an opportunity to reflect on our values and question what truth is, especially when so much of what surrounds us may be deepfaked or even shallow-faked.
AI is the world’s way of forcing us to define our why.
It’s okay to pause long enough to hug a loved one, help someone in need, embrace pain, or do anything that reminds us we will always have free will, even when we choose to give machines agency.
🏆 Recognized Among the Best
This month, AI and the Future of Work was ranked #3 on Million Podcasts' Best HR Tech Podcasts in the US, validation for a community that already knew this conversation matters.
We're in the top 1% of podcasts globally. And we're just getting started.
To everyone who has listened, shared, reviewed, and stayed curious with us: tell a friend. More people need to hear this.
Listener Spotlight
Lenore writes from Zurich, and her favorite episode is #164, also one of Dan Turchin’s favorites. In that episode, Bruce Feiler, award-winning author and popular TEDx speaker, shares his perspective on how to navigate life transitions.
🎧 You can listen to that excellent episode here!
As always, we love hearing from you. Want to be featured in an upcoming episode or newsletter? Comment and share how you listen and which episode has stayed with you the most. Your feedback helps us improve and ensures we keep bringing valuable insights to our podcast community. 👇
Worth A Read
What happens when programming models designed to support gender equity introduce new ethical inconsistencies?
Research published in Computers in Human Behavior Reports suggests this is already happening. Still, it doesn’t mean we should step back from building ethical AI.
In line with today’s conversation, Eric W. Dolan explores the deeper issue in PsyPost.
Some of the most widely used AI models tend to assign a female author even to phrases about stereotypically masculine activities, like playing football or wanting to be a firefighter.
At the same time, phrases associated with traditionally feminine stereotypes are almost always attributed to a female author, and rarely to a male one. These patterns reveal a clear asymmetry in how gender assumptions are applied inside the model itself.
So where does the problem begin? It starts with what we feed the models.
Human evaluation plays a central role. And as we’ve discussed, a lack of diversity in that input can lead to bias at scale. In some cases, the result is AI systems that are less inclusive than the data they were trained on.
You can explore more about this dilemma here.
Until next time, stay curious! 🤔
We want to keep you informed about the latest developments in AI. Here are a few stories from around the world worth reading:
Nvidia CEO Jensen Huang believes his company has achieved AGI. What does that mean in practice? You can read the answer here.
Can AI and art coexist ethically? This article explores the question and examines the connection between the two.
New research from Anthropic shows who is investing heavily in AI, who is being left behind, and the opportunities this may create.
That's a Wrap for This Week!
Women are essential for the future of AI.
This goes beyond companies filling quotas or highlighting representation numbers. What’s at stake is building AI systems that truly reflect inclusivity and diversity.
And this change can’t wait; the need is urgent. But there is a path forward.
We hope this week’s conversation inspires you to take action, lead change in your field, and help shape a more inclusive future for AI.
If you liked this newsletter, share it with your friends!
If this email was forwarded to you, subscribe to our LinkedIn newsletter here to get it every week.