Sunday, August 3, 2025

The Future of AI and the Changes We'll Face: Insights from a Conversation with Sam Altman

I recently watched a podcast conversation with Sam Altman, CEO of OpenAI, and it got me thinking deeply about AI's impact on our lives. The conversation goes beyond a simple account of technological progress to raise fundamental questions about humanity's future and our sense of purpose.

Raising a Child and Witnessing the Speed of Change

Altman shared his experience raising his 4-month-old son, describing his amazement at watching his child acquire new abilities every day. There's a curious parallel between his perspective on child development and how we view AI progress.

His observation that "it's amazing to see the child do things every day that they couldn't do before" mirrors the rapid pace of AI development we're witnessing today. What's particularly striking is his acknowledgment that "evolutionarily, we're finely tuned to love and be fascinated by children," yet he embraces this feeling rather than dismissing it.

This offers insight into our relationship with AI. While our fascination with technological progress might be natural, we also need to critically examine where it's leading us.

The Future of Education: Will Universities Disappear?

Altman mentioned that he doesn't think his child will attend university. This isn't just a personal opinion—it raises fundamental questions about education in the AI era.

He pointed out that, unlike their parents' generation, which transitioned from a world without computers to one with them, our children will never have known a time when they were smarter than AI. It's a genuinely startling perspective.

However, I think he's missing something crucial here. The essence of education isn't just acquiring information; it's learning critical thinking, collaboration, and human values. No matter how intelligent AI becomes, uniquely human experiences, emotions, and social interactions will remain valuable.

The Future of Work: We Need New Economic Models

Altman offered an interesting perspective on how AI will change work. Rather than making pessimistic predictions about disappearing jobs, he emphasized the seemingly infinite nature of human desires.

His point that "people's desire for more things, better experiences, and higher social status seems fundamentally infinite" makes sense. During the Industrial Revolution, people worried about job losses too, but new types of employment eventually emerged.

However, his proposed "token-based wealth distribution" system needs more careful consideration. The idea of distributing AI tokens to all 8 billion people worldwide is intriguing, but implementing it would face numerous technical and political barriers.

The Crisis of Purpose: Can Humans Still Be the Main Characters?

The most profound discussion centered on human purpose. Altman acknowledged that "creativity and intelligence touch the core of who we are and how we evaluate ourselves."

His personal anecdote was particularly striking. While testing GPT-5, he felt "useless" when the AI perfectly answered questions he couldn't understand. This shows that even someone at the forefront of AI development experiences these emotions.

But he also offered a hopeful perspective. He cited examples of software developers still finding meaningful work while using AI tools, and noted that humans have historically always placed themselves at the center of their narratives.

Privacy and Legal Frameworks: Urgent Challenges

One critical issue Altman highlighted is the lack of legal protection for AI conversations. People share their most personal stories with ChatGPT, yet those conversations carry nothing like doctor-patient confidentiality or attorney-client privilege.

This is a serious problem. With AI acting as therapists and counselors, the fact that these conversations could be used as evidence in court poses significant risks to users. Policymakers need to address this issue urgently.

AI Competition: A New Kind of Arms Race

Altman compared the current AI industry competition to past processor speed races. The focus has shifted from megahertz competition to benchmark competition, and now to actual usability and value creation.

However, his mention of competition around "self-improving AI" or "systems smarter than all humans combined" is more concerning. This goes beyond simple corporate competition to potentially determining humanity's future.

Democratizing Technology: A World Where Everyone Becomes a Developer

One of Altman's most intriguing visions is that AI will fully democratize technology. People who can't code will be able to create the software they want simply by describing it in natural language.

This would be a truly revolutionary change. Currently, many people with great ideas give up because they lack the technical skills to implement them. If AI removes these barriers, creativity and ideas alone could be sufficient to create value.

The Value of Humanity: What Still Matters

Throughout the conversation, Altman emphasized our enduring interest in one another. As he put it, "humans are obsessed with other people." No matter how advanced AI becomes, we will still want to hear other people's stories.

This is an important insight. No matter how perfect AI-generated content becomes, we'll still want the "real person" behind it. This could be the key reason humans maintain value in the AI era.

Between Concern and Hope

Altman also expressed concerns about AI's impact on mental health. The dopamine-driven compulsion we've seen with social media could reappear with AI, and the growing number of people spending entire days talking to AI companions is particularly worrying.

But he also offered a hopeful message from a historical perspective. Just as the transistor's invention enabled the computer revolution, AI is just one step in humanity's long journey of progress.

Conclusion: Finding Wisdom in Uncertainty

What impressed me most about Altman's conversation was his honesty. Even someone at the forefront of AI development admits they can't predict the future accurately. His statement that "nobody knows what happens next" is both humble and realistic.

But this uncertainty isn't necessarily bad. It means we have room to shape the future together. What's important is responding wisely while protecting human values and dignity, rather than being swept away by technological progress.

The challenges of the AI era are certainly significant, but human creativity, adaptability, and our care and love for each other will remain our greatest assets. Ultimately, technology is just a tool, and how we use it is still our choice.

