What Comes After AGI?
A Vision for a Freedom-Based Economy
Hello, AI enthusiasts from around the world.
Welcome to this week's newsletter for the AI and the Future of Work podcast.
Artificial General Intelligence, or AGI, is one of the most debated concepts in AI. Some view it as the next step in the field’s evolution. Others dismiss it as little more than an abstract idea.
Once we look beyond the term itself, the conversation becomes more interesting. We see a wide range of possibilities and responsibilities that come with the future of AI.
In this issue, we take a closer look at the misconceptions surrounding AGI, its true potential, and the bigger question of what our ultimate goal should be as humans.
Let’s dive into this week’s highlights 🚀
🎙️ New Podcast Episode With Jad Tarifi, Integral AI CEO
AGI is one of the most controversial ideas in AI. Everyone defines it differently, which makes it hard to agree on what it could mean for the future.
For Integral AI co-founder Jad Tarifi, the term has been diluted over time. It may have lost clarity, but not importance. At Integral AI, the focus is on developing innovations that can eventually lead to Superintelligence.
Jad’s starting point is simple: AGI should be an AI model that learns autonomously, without a human in the loop. It can teach itself any skill. But is it really that straightforward?
In his conversation with PeopleReign CEO Dan Turchin, Jad explored the complex ethical and philosophical challenges of building AGI.
Once AGI can learn new skills, how do we contain it without unintended consequences?
Before it can learn on its own, it must first learn from humans, which means building for reliability and efficiency.
Only then can AGI be used to pursue the ultimate goal: freedom.
Freedom, Jad explains, is the highest form of existence. Yet the path toward it runs through entropy, a state of chaos and uncertainty. The first step is ensuring both humans and AI can survive in that environment. Only then can we work toward freedom.
This episode also covers:
Why world models are key to robotic learning and to avoiding inefficiency and hallucinations.
How maximizing collective freedom should be AI’s meta-objective to prevent dangerous shortcuts.
What happens to the economy when AGI takes on most labor, and why the “price of action” becomes critical.
The next step in Integral AI’s roadmap: scaling AGI to automate science, industry, and society at large.
🎧 This week’s episode, featuring Jad Tarifi, inspired this issue.
Listen to the full conversation to hear more on why the future of AGI must stay anchored to one principle: freedom.
📖 AI Fun Fact
Professor Gary Marcus, cognitive psychologist and critic of large language models (LLMs), recently wrote on Substack about a fundamental weakness in Generative AI: its inability to build robust world models.
What is a world model? Gary defines it as a computational framework that a system (whether a machine, a human, or another animal) uses to track what is happening in the world.
These models should be persistent, stable, and updatable (and ideally kept up to date). In Gary’s definition, they represent entities within a slice of the world and have been central to both classical AI and traditional software design.
Explicit world models are at the heart of software engineering.
LLMs are designed to function without world models. In cognitive psychology, reading a text requires constructing a mental model of its meaning. LLMs skip this process and still perform impressively well.
Yet much of what ails them comes from that design choice.
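To make Gary's definition concrete, here is a toy sketch of an explicit world model: a persistent, stable, updatable record of entities within a slice of the world. All class and method names are invented for illustration; this is not code from Marcus or any real system.

```python
# A minimal, illustrative world model: it tracks entities and their
# attributes, and can be updated as new observations arrive.
# Names here (WorldModel, observe, query) are hypothetical.

class WorldModel:
    """Tracks entities in a slice of the world and updates their state."""

    def __init__(self):
        self.entities = {}  # entity name -> dict of attributes

    def observe(self, name, **attributes):
        """Add a new entity, or update an existing one with fresh observations."""
        self.entities.setdefault(name, {}).update(attributes)

    def query(self, name):
        """Return the current state of an entity, or None if unknown."""
        return self.entities.get(name)


model = WorldModel()
model.observe("door", state="closed")
model.observe("door", state="open")  # the model is updatable
print(model.query("door"))           # prints {'state': 'open'}
```

The point of the sketch is the contrast Gary draws: classical software keeps an explicit, queryable state like this, while an LLM produces text without maintaining any such persistent representation.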
As frequent listeners and readers know, PeopleReign CEO Dan Turchin disagrees with the premise of AGI and, more importantly, with the idea that AI progress should be measured by how effectively it fools humans into believing it is human.
He believes that progress should instead be measured by how AI helps solve real global problems: famine, war, pandemics, and global warming.
We've been given the gift of an amazing, transformative technology. If we use it responsibly, we'll achieve all other desired outcomes in the process: we'll create jobs, help humans find meaning in work, and replace dangerous tasks with safer ones.
World models are part of the answer, but let's not lose sight of the question we should be asking: how can AI benefit all humans?
Listener Spotlight
Tomas, a software developer in Colombia, shared his favorite episode. He chose season two, from January 2024, featuring Subha Tatavarti, CTO at Wipro. In that conversation, Subha reflected on deploying AI to 250,000 employees and analyzing requirements from more than 1,400 enterprise customers.
🎧 You can revisit that episode here.
We always enjoy hearing from listeners. Want to be featured in a future newsletter? Comment and tell us how you listen and which episode has stayed with you.
Worth A Read 📚
Staying with the theme of AGI, let’s look at Character.AI. The company once promised to reach Superintelligence. Today, it has an entirely different goal.
CEO Karandeep Anand often comes home to find his six-year-old daughter deep in conversation, snacks beside her as she indulges her passion for mystery stories. She cannot type yet, but with voice commands she chats with Sherlock Holmes (or at least a Sherlock Holmes bot).
Millions of users are now turning to Character.AI for entertainment and education. The shift raises an important question: how did a company move from the pursuit of Superintelligence to becoming an AI-powered entertainment platform?
This Wired article tells the story in detail.
We want to hear what you have to say! Your feedback helps us improve and ensures we continue to deliver valuable insights to our podcast listeners. 👇
Until next time, stay curious! 🤔
We want to keep you informed about the latest happenings in AI.
Here are a few stories from around the world worth reading:
Teachers are beginning to embrace AI in classrooms. Learn how it could reshape education.
That's a Wrap for This Week!
This week’s conversation was a philosophical look at AGI. Too often we focus on the term itself instead of exploring the possibilities behind it.
No matter how we define the next stage of AI, humanity’s goals and needs must stay at the center. When we align progress with those priorities, AI can help society reach the next level.
We hope today’s discussion inspires you to look beyond labels and use AI to improve our lives, both now and in the future.
Until next time, keep questioning, keep building, and we’ll see you in the future of work 🎙️✨
If you liked this newsletter, share it with your friends!
If this email was forwarded to you, subscribe to our LinkedIn newsletter here to get it every week.