How AI Transforms Chip Design [Newsletter #74]
Closing the Hardware Gap
Hello, AI enthusiasts from around the world.
Welcome to this week's newsletter for the AI and the Future of Work podcast.
We’re investing heavily in new AI applications and startups. Innovation feels fast, and the pace keeps accelerating. Yet progress could move even faster if one major obstacle didn’t hold everything back: chip development remains expensive and slow.
No matter how much money we invest in AI software, the gap with slow-moving hardware development remains hard to close.
But there is reason for optimism. AI might help solve the very bottleneck that holds it back.
Let’s dive into this week’s highlights! 🚀
🎙️ New Podcast Episode with Faraj Aalaei, Cognichip CEO
Any AI startup can go from zero to $100 million in weeks, sometimes even days.
AI startup funding has surged in recent years as investors, venture capitalists, and large companies pour millions into teams chasing the next breakthrough.
Faraj Aalaei reminds us that we often overlook something essential.
Beneath all the rapid progress and futuristic breakthroughs lies a hidden truth: the chips powering AI today were designed six or seven years ago.
In technology, that’s a lifetime.
The semiconductor industry simply can’t keep pace with the explosive growth of AI, and that lag creates inefficiencies in an era where speed defines success.
Faraj also acknowledges that improving the semiconductor industry is difficult. Chips and semiconductors present unique challenges. They must be designed for a future of unknown uses and applications, which often leads to overdesign. They’re also physical devices that require extensive testing by human experts.
That’s where the challenge grows. Faraj warns that the industry will face a severe talent shortage, with as many as one million engineers missing by the 2030s.
His response is straightforward. Address the problem directly.
Faraj is the founder and CEO of Cognichip, an AI company creating the world’s first Artificial Chip Intelligence, or ACI, to design semiconductors with AI.
He sat down with PeopleReign CEO Dan Turchin to explain how using AI to design semiconductors removes the tedious overdesign step, reducing both time and cost.
This week's conversation explores this fascinating industry and much more:
How Faraj learned the importance of understanding customer needs and delivering real value after a dramatic failure in his first venture, a car rental business.
Time and cost remain critical in chip development and across many other industries, and AI can play a meaningful role in improving both.
The semiconductor industry is now worth $600 billion, yet it continues to grow at a pace that experts view as unsustainable.
Humans will remain essential in the AI design workflow, providing the final approval or sending a design back for refinement.
AI will create more jobs than it replaces, and the real challenge will be preparing people to step into those new roles.
Asking thoughtful, respectful questions is a key part of long-term success, especially for founders.
🎧 This week's episode of AI and the Future of Work, featuring Faraj Aalaei, Cognichip CEO.
Listen to the full episode to hear more about how Faraj's approach to chip design aims to avoid black boxes and design errors that could lead to unsafe outcomes.
📖 AI Fun Fact Article
Two hundred prominent figures, including Nobel laureates and early AI leaders, issued a clear warning about the risks of unchecked AI development during the recent UN General Assembly in New York.
They argue that without enforceable limits, AI could worsen threats such as engineered pandemics, large-scale disinformation, and autonomous weapons operating without human oversight.
John Marshall writes in WebProNews that the open letter urges world leaders to establish international red lines by the end of 2026 to prevent the most dangerous uses of AI.
Geoffrey Hinton (often called the “godfather of AI,” and who left Google in 2023 to speak openly about these concerns) is one of the main supporters. The letter states that voluntary commitments from companies are not enough and calls for governments to set clear, verifiable standards that every AI provider must follow.

Source: khaborwala
PeopleReign CEO Dan Turchin reiterates that we can and must enforce responsible use of AI without constraining innovation. It is not about politicizing the issue.
Today’s debate should not be centered on catastrophic risk. That topic is too easy to dismiss as prematurely optimizing for too many unknowns.
Instead, AI safety efforts should focus on requiring every technology company using AI to take responsibility for the unintended downstream impact of automated decisions on users.
As a community, we must stop focusing only on what could go right and safeguard against what could go wrong now.
Here are the three tenets of the pledge Dan asks every tech company to take:
We will only deploy AI in production that is transparent, predictable, and configurable. Users know when automated decisions are made on their behalf.
The same user data will reliably result in the same AI output.
And when AI makes mistakes, the decision-making process can be adjusted to correct the error.
This simple set of three tenets is enforceable, immediately valuable, and will create a culture of AI risk awareness.
We must still prepare for potential catastrophic outcomes that may pose real risk in the near future, but responsibly, without letting that work distract us from protecting humans now, while adoption is still relatively low.
Listener Spotlight
Tina is a roboticist from Santa Monica, California. Her favorite episode is #301 with William Osman, a prolific YouTube creator whose inventive science and engineering videos have been viewed more than 500 million times, where he talks about the future of creativity.
🎧 You can listen to that excellent episode here!
As always, we love hearing from you. Want to be featured in an upcoming episode or newsletter? Comment and share how you listen and which episode has stayed with you the most.
Your feedback helps us improve and ensures we continue bringing valuable insights to our podcast community. 👇
⚖️ Worth a Read
Wolf River Electric watched clients cancel contracts one after another with no clear explanation. Concerned, the sales team began investigating, and the truth turned out to be stranger than expected.
The Minnesota solar contractor had supposedly settled a lawsuit with the state’s Attorney General over deceptive practices. That alone would explain why customers walked away.

Source: Nick J. Kasprowicz, general counsel for Wolf River, reviewing online search results showing false A.I.-generated claims about the company. Tim Gruber for The New York Times
There was only one issue. The government had never sued Wolf River Electric.
Yet when people searched for the company, Gemini, Google’s AI platform, surfaced claims about a lawsuit and other incriminating details that were entirely false.
The company responded by suing Google for defamation, raising an important question: who is accountable when AI generates damaging errors?
Read more about this fascinating case here.
The Human-First Guide to Responsible AI
PeopleReign CEO Dan Turchin shares his latest article in Forbes Technology Council, a must-read for leaders shaping the future of work through AI.
As AI transforms how we live and work, Dan calls for a human-first approach that keeps ethics, transparency, and accountability at the center of innovation.
If you care about how technology can elevate humanity rather than replace it, this guide is for you.
Until Next Time: Stay Curious 🤔
We want to keep you informed about the latest developments in AI. Here are a few stories from around the world worth reading:
Two Oscar-winning actors are raising eyebrows after partnering with an audio research company to create AI versions of their iconic voices.
A new survey reveals a surprising trend: 97 percent of participants could not correctly identify AI-generated music.
What is an AI “superfactory”? Learn how Microsoft linked data centers 700 miles apart to create a new type of system.
👋 That's a Wrap for This Week!
This week's conversation tackled one of AI’s biggest challenges: creating technology that keeps pace with current demands.
After all, building new AI applications means little without the right infrastructure to support them.
It is one more example of how AI is reshaping the way we work and pushing us to rethink long-standing assumptions. 🎙️✨
If you liked this newsletter, share it with your friends!
If this email was forwarded to you, subscribe to our LinkedIn newsletter here to get it every week.
