Saturday, July 5, 2025

Living with AI: The Future of Technology and Human Choice

We're already living with AI today. To be precise, we're living with "narrow AI." Narrow AI systems consist of computers, massive databases, and algorithms that select something from those databases. This simulates intelligence but isn't true intelligence. And here's the key point: we need to separate intelligence from consciousness.
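To make that point concrete, here is a toy sketch in Python (all replies and keywords invented for illustration) of a lookup-style system: it answers only by selecting whichever stored reply best matches the input. The selection can look intelligent, but no understanding is involved.

```python
import re

# Toy "narrow AI": a database of canned replies plus an algorithm that
# selects one of them. It simulates conversation without understanding it.
CANNED_REPLIES = {
    "weather": "It looks like rain today; take an umbrella.",
    "greeting": "Hello! How can I help you?",
    "health": "Remember to book your annual check-up.",
}

KEYWORDS = {
    "weather": {"weather", "rain", "umbrella", "forecast"},
    "greeting": {"hello", "hi", "hey"},
    "health": {"doctor", "health", "checkup", "medicine"},
}

def respond(user_input: str) -> str:
    words = set(re.findall(r"[a-z]+", user_input.lower()))
    # Select the topic whose keywords overlap most with the input.
    best = max(KEYWORDS, key=lambda topic: len(words & KEYWORDS[topic]))
    if not words & KEYWORDS[best]:
        return "Sorry, I have nothing stored for that."
    return CANNED_REPLIES[best]

print(respond("Hi there!"))                # selected greeting reply
print(respond("Will it rain tomorrow?"))   # selected weather reply
print(respond("What is consciousness?"))   # nothing in the database
```

The sketch makes the author's distinction visible: selecting from a database can simulate intelligence without anything like consciousness behind it.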

The narrow AI we're familiar with is already used in digital assistants, online shopping, and healthcare. Recently, OpenAI's CEO Sam Altman spoke of plans for personal chatbots that could transform American healthcare, showing us fascinating future possibilities. From autonomous vehicles to facial recognition technology that identifies criminals in crowds, we are voluntarily accepting all of these developments.

The Dual Nature of AI: Freedom and Oppression

But some people are being forced to live with AI. The same facial recognition technology used to identify criminals is also being used to oppress ethnic minorities. The intensive surveillance of the Uyghurs in China's Xinjiang region shows an extraordinary level of invasiveness. These surveillance systems are spreading throughout China and could ultimately be exported to the West.

Autonomous weapons systems and threats to democracy from deepfakes are also realities we face. Ken McCallum, Director General of Britain's MI5, warned that AI could mimic real people, making it impossible to distinguish truth from falsehood and undermining social structures. Deepfake technology poses a threat to democracy and could be used by hostile nations to spread chaos and misinformation in upcoming elections.

Yoshua Bengio, one of the leading figures in AI, wrote this about the morality of AI systems: "People need to understand that current AI and reasonably predictable AI in the future do not and will not have a moral sense or moral understanding of right and wrong."

Two Faces of Dystopia

When thinking about whether we can live with AI, two famous dystopias come to mind: George Orwell's *1984* and Aldous Huxley's *Brave New World*. Neil Postman provided an excellent analysis of the two books in *Amusing Ourselves to Death*. Orwell warned that we would be overwhelmed by externally imposed oppression. In Huxley's vision, however, no Big Brother is needed to deprive people of their autonomy, maturity, and history: people come to love their oppression and worship the technologies that disable their capacity to think.

Orwell feared that what we hate would ruin us. Huxley feared that what we love would ruin us. Currently, we have a love-hate relationship with technology, and it seems like both things are happening simultaneously on a massive scale.

The brilliant entomologist E.O. Wilson wrote: "The real problem of humanity is the following: we have Paleolithic emotions, medieval institutions, and god-like technology, and it is terrifically dangerous." He said we're in a very dangerous situation until we answer the great philosophical questions that philosophers abandoned generations ago: Where did we come from? Who are we? Where are we going?

The Journey Toward AGI and Its Dangers

Where we're heading, according to some, is Artificial General Intelligence (AGI): AI systems that match human capabilities across the board and ultimately exceed them as superintelligence. Here we enter the realm of transhumanism.

If these ideas were simply from science fiction, we could dismiss them, but such thinking is embedded in statements from some of the world's most brilliant scientists. British astronomer Lord Rees wrote: "I can't be confident that the dominant intelligences a few centuries hence will have any emotional resonance with us, even though they may have an algorithmic understanding of how we behaved."

Stephen Hawking warned: "The real risk with AI isn't malice but competence. A superintelligent AI will be extremely good at accomplishing its goals, and if those goals aren't aligned with ours, we're in trouble."

Eliezer Yudkowsky from the Machine Intelligence Research Institute presents an even more extreme view: "If somebody builds a too-powerful AI, under the present circumstances, I expect that every single member of the human species and all biological life on Earth dies shortly thereafter."

Real Risks and Alternative Perspectives

On the other hand, there are very different voices. In their book *Why Machines Will Never Rule the World*, Jobst Landgrebe and Barry Smith argue that, just as physics shows the impossibility of building perpetual motion machines, the mathematics of complex systems shows it is impossible to engineer AI with even crow-level cognitive abilities. Therefore, they say, the singularity will never happen.

As a mathematician, I'm very sympathetic to this view, which is also held by Sir Roger Penrose, one of the world's finest mathematicians, who lives in this city.

*Nature*, the world's most famous scientific journal, wrote in June 2023: "It's time to talk about AI's known risks. Forget the machine apocalypse. What's needed is effective regulation to limit the social harms that artificial intelligence is already causing." I emphasize this statement because I think it's one of the most important things we need to consider. Setting aside the science-fiction-sounding hype, AI has already developed to the point where it poses enormous risks to society.

The Dilemma of Regulation and Control

OpenAI's Sam Altman said: "Regulation is going to be critical and will take time to figure out. The current generation of AI tools isn't that scary. But I think we're not that far away from potentially scary ones."

Stuart Russell, a famous pioneer in AI, has proposed principles for living with AI in his book *Human Compatible*. First, the machine's only objective should be to maximize the realization of human preferences. Second, the machine should be initially uncertain about what those preferences are, so that it keeps asking questions. Third, the ultimate source of information about those preferences should be observed human behavior.
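As a rough illustration of the second and third principles (this is not Russell's actual formalism; every objective, probability, and threshold below is invented), one could imagine a machine that keeps a belief distribution over candidate human objectives, asks rather than acts while it is uncertain, and updates that belief from observed human choices:

```python
# A miniature, invented illustration of Russell's principles: the machine
# is uncertain about the human's objective, asks when unsure, and learns
# from observed human behavior.

# Candidate human objectives and the machine's current belief over them.
belief = {"minimize_cost": 0.34, "maximize_safety": 0.33, "save_time": 0.33}

# How strongly each observed human choice supports each objective
# (made-up likelihoods for the sketch).
LIKELIHOOD = {
    "chose_cheap_option": {"minimize_cost": 0.7, "maximize_safety": 0.1, "save_time": 0.2},
    "chose_safe_option":  {"minimize_cost": 0.1, "maximize_safety": 0.8, "save_time": 0.1},
}

def update_belief(observation: str) -> None:
    """Bayesian update of the belief from one observed human choice."""
    for objective in belief:
        belief[objective] *= LIKELIHOOD[observation][objective]
    total = sum(belief.values())
    for objective in belief:
        belief[objective] /= total

def act_or_ask(confidence_threshold: float = 0.6) -> str:
    """Act only when one objective is sufficiently likely; otherwise ask."""
    objective, p = max(belief.items(), key=lambda kv: kv[1])
    if p >= confidence_threshold:
        return f"Acting to pursue '{objective}' (P = {p:.2f})."
    return f"Unsure (best guess '{objective}', P = {p:.2f}). Asking the human first."

print(act_or_ask())                  # uncertain at first: asks a question
update_belief("chose_safe_option")   # watches the human choose safety twice
update_belief("chose_safe_option")
print(act_or_ask())                  # belief has sharpened: acts on safety
```

The design choice is the one Russell emphasizes: it is precisely the machine's uncertainty about the objective that keeps it deferential to humans.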

The regulation problem is related to the control problem, because we can't regulate what we can't control.

Future Scenarios and Biblical Perspectives

MIT physics professor Max Tegmark, in his book *Life 3.0*, presents possible scenarios for what might happen if we develop AGI. The scenario he spends the most time on is the tale of the "Omega Team," in which a single group takes over the world and establishes a global government.

This powerful controlling authority would take over the world economy in the following way: under the pretext of fighting crime and terrorism and rescuing people in medical emergencies, everyone could be required to wear a security bracelet combining the functionality of an Apple Watch with continuous uploading of location, health status, and overheard conversations. Any unauthorized attempt to remove or disable one would trigger an injection of lethal toxin into the forearm.

When I read this, I was reminded of a much older scenario from the Book of Revelation, because Revelation also talks about a future world government. If we're going to take seriously what people like Max Tegmark say about the future, I'd say listen carefully to what Revelation and the biblical worldview have to say before dismissing them.

Revelation speaks of a beast rising from the sea and another beast rising from the earth. This beast causes all people to receive a mark on their right hand or forehead, so that no one can buy or sell without this mark. This shouldn't be dismissed as merely symbolic. As C.S. Lewis pointed out long ago, symbols are used to represent realities.

In 2 Thessalonians, Paul says plainly: "That day will not come unless the rebellion occurs first and the man of lawlessness is revealed, the man doomed to destruction. He will oppose and will exalt himself over everything that is called God or is worshiped, so that he sets himself up in God's temple, proclaiming himself to be God."

AI Worship and New Religion

The director of the University of Manitoba's Centre for Professional and Applied Ethics recently wrote: "We are about to witness the birth of a new kind of religion. In the coming years or even months, we will see the emergence of sects devoted to artificial intelligence worship." This is already happening, as certain AI systems are beginning to display capabilities usually attributed to divinity: immortality, omniscience, and omnipresence.

Through the internet, these super-AIs offer a prayer-like connectivity. ChatGPT has oracle-like abilities: it answers almost any question, provides life advice, and even produces scripture almost instantly. And such systems have no human needs or desires; they require only electricity.

I think we need to reflect more deeply on these things. What is described here is very close to what the world's top thinkers are projecting about AGI.

A Christian Alternative: The Reverse Movement

The fundamental idea of transhumanism is that humans become gods by trusting in technology. Notice the direction of that movement: transforming humans into superintelligent gods. What's the answer to this? A movement in the opposite direction. The central message of the Christian faith is that God became human.

We are humans of a particular kind, and there is pressure to change us continuously, through genetic engineering, cybernetics, and cyborg engineering, into superintelligent beings. But wait: God showed His approval of the humans He created by personally becoming human. Shouldn't we think about this?

When people talk about hopes that AI will solve the problem of human death and transform the nature of human happiness, I smile and say, "You're too late." When they say, "What do you mean? We haven't even gotten there yet," I say, "You're too late." Because the problem of human death was solved 20 centuries ago when God raised Jesus Christ from the dead.

Second, to those hoping to regain life by uploading their brains to silicon or something else, there is something infinitely better. I call it the "divine upgrade." Its first step: "Yet to all who did receive him, to those who believed in his name, he gave the right to become children of God." And its second step, promised to those who receive him: "The trumpet will sound, the dead will be raised imperishable, and we will be changed. For the perishable must clothe itself with the imperishable."

Conclusion: Ultimate Hope

In conclusion, regardless of what stage AI reaches in your lifetime, your children's lifetime, or your grandchildren's lifetime, as Christians, let us lift our heads and reaffirm our belief that this world didn't hear the last word when God's Son visited 20 centuries ago. He who is to come will surely come, and He will receive to Himself all who have trusted in Him.

The lawless one will appear, but there's a promise: "The Lord Jesus will overthrow him with the breath of his mouth and destroy him by the splendor of his coming." Against transhumanism's movement to make humans into gods through technology, Christianity's message that God became human points in a completely different direction.

This isn't mere speculation about the future but a certain hope grounded in events that have already occurred in history. Whatever direction AI development takes, the ultimate answer lies not in technology but in Christ, who has already conquered death and risen again. That is the Christian perspective.
