Who Is Really In Control? [Newsletter #38]
Unmasking Hidden Bias in AI
Hello, AI enthusiasts from all around the world! Welcome to the weekly newsletter for the AI and The Future of Work podcast.
Have you noticed how there are some important discussions that people would rather avoid?
AI is not exempt from this, and this issue covers the complex topic of data ethics.
After all, it's best to address this before AI becomes even more of a mainstay in our daily lives and changing it becomes increasingly challenging.
Our podcast guest gets us thinking, and our AI fun fact teaches us about ecology and AI.
Let's dive into this week's highlights!
🎙️ New Podcast Episode With Dr. Brandeis Marshall, DataedX Group CEO and advocate for Responsible Data Science
Data can be weaponized, right?
This idea can, and probably will, make many people uncomfortable, but it's a real possibility.
At the same time, it's only one of the many discussions around data ethics out there.
Dr. Brandeis Marshall wants us to have these conversations now more than ever.
This is where data ethics comes in: using data responsibly so that it continues to add to or augment the human experience.

Brandeis is a leading advocate for responsible data science and the CEO of DataedX Group, a data ethics and learning development agency dedicated to helping teams identify and address discrimination in data.
Dr. Marshall holds a master's and Ph.D. in computer science from Rensselaer Polytechnic Institute (RPI).
Our CEO, Dan Turchin, sat down with Brandeis in a thoughtful conversation about the strong link between data and ethics, even though some still say there isn't one.
Many people are involved in handling data, from engineers to the VP of Product, the C-suite, and the CEO.
And everyone who handles data has a responsibility to call out ethical issues, even if the notion is new to us.
In this conversation, Brandeis and Dan discuss this and much more:
- The hidden ways AI-powered companies can optimize human behavior, and why these companies should be regulated like scientific entities.
- How companies can prevent bias, and why everyone in the data pipeline is responsible for ethical decision-making. That's where data ethics comes into play.
- How businesses that struggle with AI adoption can bridge the AI gap and align data strategies with real business impact.
- Why a patient-owned, portable medical record system, far-fetched as it sounds in our current healthcare system, could be the key to revolutionizing access and transparency in healthcare data.
- How AI can be leveraged to expose systemic inequalities and provide better opportunities for marginalized communities.
- Why Brandeis believes AI should be seen as a support tool rather than a replacement for human intelligence: one example is how AI can help neurodivergent individuals and enhance human decision-making.
Our latest AI and the Future of Work podcast episode featuring Dr. Brandeis Marshall inspired this issue.
🎧 Listen here for more of this thoughtful conversation and to learn how Brandeis maintains an optimistic but realistic perspective on our responsibility with AI.
📖 AI Fun Fact Article
Like any transformative technology, AI comes with risks, and one of the most critical is the perpetuation of biases and systemic inequities. Now is the time to change this.
Ron Guerrier writes on CIO.com about how we can responsibly shape AI's future. To accomplish this, he argues, we must stop seeing AI as a mere tool.
Instead, Guerrier invites us to see AI as a growing child; to help shape it, he suggests turning to ecology. He uses Dr. Urie Bronfenbrenner's ecological systems theory to describe the evolution of AI.
Here's what he means.
At the most basic level of AI is the "microsystem": the developers, engineers, and users, with their biases and perspectives.
They directly interact with AI, and if there is no diversity, AI will continue to misrepresent and exclude marginalized communities.
This critical issue also has consequences in other areas, as we discuss further in today's episode.
The next level, the "mesosystem," determines how AI is deployed and, perhaps more importantly, how it's regulated; it involves broader actors such as tech companies, governments, and researchers.
These aren't the only two levels. Guerrier describes three others: the exosystem, macrosystem, and chronosystem, each with essential aspects for a responsible AI future.
Our CEO, Dan Turchin, reminds us that AI is perfectly designed to replicate human bias. Thus, we must be aware of this when we decide how and where to use it.
As we use AI more and more, it will have increasingly impactful and sometimes unintended consequences on our teams and customers.
We're throwing the conversation over to you: What are your thoughts on responsible AI use? Let us know in the comments below!
Listener Spotlight
Antonio, from Baltimore, MD, is a college professor who listens while biking to campus.
Antonio's favorite episode is the excellent discussion with Armen Berjikly, the CTO of BetterUp, about humanizing work using AI to match coaches with employees.
You can listen to the episode here! 👇
On Using AI to Unlock Human Potential
As always, we love hearing from you!
Want to be featured in our next episode or newsletter? Comment and let us know how you tune in and your favorite episode.
We want to hear what you have to say! Your feedback helps us improve and ensures we continue to deliver valuable insights to our podcast listeners. 👇
🎙️ Worth a Listen!
Data privacy in the era of deepfakes is becoming ever more critical.
That's why it's important to celebrate Data Privacy Day.
So, we have a special compilation episode of AI and the Future of Work for you!
Data Privacy Day Special Episode: AI, Deepfakes & The Future of Trust
We revisit the most powerful conversations we've had with industry leaders who tackle some of today's biggest AI challenges.
This episode highlights the critical role of privacy, trust, and security, from deepfake detection to ethics, in the future of AI.
💻 AI in Perspective
In 2011, Marc Andreessen (yes, of Andreessen Horowitz fame) stated that "software is eating the world." He wasn't wrong.
Tech was accelerating at an impressive speed (for the time), and it transformed everything we did, from listening to music to commuting.
Half a decade later, Jensen Huang, founder of Nvidia, updated those words, stating: “Software is eating the world, but AI is eating software.”
Both of them are right, and now, using the same analogy, another question comes up in the discussion:
What is eating AI? This article offers a fascinating answer that might stand the test of time.
What do you think? We'd love to hear your thoughts!
🚀 Exclusive Webinar: PeopleReign + Workday Help – The Future of Employee Service
Are your IT and HR teams overwhelmed with service requests?
Join us for an exclusive webinar to discover how PeopleReign + Workday Help is transforming employee service with AI-powered automation. Learn how 5,000 automated actions, natural language processing, and pre-trained HR models can create a smarter, faster, and more efficient workplace.
What you'll learn:
✔️ How AI takes action—not just answers questions
✔️ Why zero-code automation delivers real results in less than 30 days
✔️ Live demo of PeopleReign’s integration with Workday Help
📅 Date: March 25, 2025
🕘 Time: 9:00 - 9:45 AM PDT
🔗 Save your spot now: Register Here
Don’t miss out—see how AI is redefining employee self-service!
Coming Up Next 🚀
Next week, we’re joined by Mona Sabet, author of Sail to Scale and a seasoned expert in corporate strategy, M&A, and scaling startups.
She’ll share insights on navigating the high-stakes journey from launch to exit, avoiding common pitfalls, and preparing for strategic growth. Whether you're an entrepreneur or an investor, you won’t want to miss this conversation!
That’s a Wrap for This Week!
From tackling hidden biases in AI to exploring the ethical implications of data-driven decisions, this edition has been all about the impact of responsible technology. Whether you're reflecting on Dr. Brandeis Marshall’s insights or considering how AI is shaping the future of business, we hope this newsletter sparked new ideas.
Until next time, keep questioning, keep innovating, and we’ll see you in the future of work! 🎙️✨
If you liked this newsletter, share it with your friends!
If this email was forwarded to you, subscribe to our LinkedIn newsletter here to get it every week.