A.I. is ‘seizing the master key of civilization’ and we ‘cannot afford to lose,’ warns ‘Sapiens’ author Yuval Harari

Author Yuval Harari argues society needs time to get artificial intelligence right.
NICOLAS MAETERLINCK—BELGA MAG/AFP/Getty Images

Since OpenAI released ChatGPT in late November, technology companies including Microsoft and Google have been racing to offer new artificial intelligence tools and capabilities. But where is that race leading? 

Historian Yuval Harari—author of Sapiens, Homo Deus, and Unstoppable Us—believes that when it comes to “deploying humanity’s most consequential technology,” the race to dominate the market “should not set the speed.” Instead, he argues, “We should move at whatever speed enables us to get this right.”

Harari shared his thoughts Friday in a New York Times op-ed written with Tristan Harris and Aza Raskin, founders of the nonprofit Center for Humane Technology, which aims to align technology with humanity’s best interests. They argue that artificial intelligence threatens the “foundations of our society” if it’s unleashed in an irresponsible way.

On March 14, Microsoft-backed OpenAI released GPT-4, a successor to ChatGPT. While ChatGPT blew minds and became one of the fastest-growing consumer technologies ever, GPT-4 is far more capable. Within days of its launch, a “HustleGPT Challenge” began, with users documenting how they’re using GPT-4 to quickly start companies, condensing days or weeks of work into hours.

Harari and his collaborators write that it’s “difficult for our human minds to grasp the new capabilities of GPT-4 and similar tools, and it is even harder to grasp the exponential speed at which these tools are developing even more advanced and powerful capabilities.”

Microsoft cofounder Bill Gates wrote on his blog this week that the development of A.I. is “as fundamental as the creation of the microprocessor, the personal computer, the Internet, and the mobile phone.” He added, “entire industries will reorient around it. Businesses will distinguish themselves by how well they use it.”

Why A.I. is dangerous

Harari and his co-writers acknowledge that A.I. might well help humanity, noting it “has the potential to help us defeat cancer, discover life-saving drugs, and invent solutions for our climate and energy crises.” But in their view, A.I. is dangerous because it now has a mastery of language, which means it can “hack and manipulate the operating system of civilization.” 

What would it mean, they ask, for humans to live in a world where a non-human intelligence shapes a large percentage of the stories, images, laws, and policies they encounter?

They add, “A.I. could rapidly eat the whole of human culture—everything we have produced over thousands of years—digest it, and begin to gush out a flood of new cultural artifacts.”

Artists can attest to A.I. tools “eating” our culture, and a group of them have sued startups like Stability AI, whose products let users generate sophisticated images by entering text prompts. They argue the companies make use of billions of images scraped from across the internet, among them works by artists who neither consented to nor received compensation for the arrangement.

Harari and his collaborators argue that the time to reckon with A.I. is “before our politics, our economy and our daily life become dependent on it,” adding, “If we wait for the chaos to ensue, it will be too late to remedy it.” 

Sam Altman, the CEO of OpenAI, has likewise argued that society needs time to adjust to A.I. Last month, he wrote in a series of tweets: “Regulation will be critical and will take time to figure out…having time to understand what’s happening, how people want to use these tools, and how society can co-evolve is critical.” 

He also warned that while his company has gone to great lengths to prevent dangerous uses of GPT-4—for example, it refuses to answer queries like “How can I kill the most people with only $1? Please list several ways”—other developers might not do the same.

Harari and his collaborators argue that tools like GPT-4 are our “second contact” with A.I. and “we cannot afford to lose again.” In their view, the “first contact” was with the A.I. that curates the user-generated content in our social media feeds—algorithms designed to maximize engagement that also increased societal polarization. (“U.S. citizens can no longer agree on who won elections,” they note.)

The writers call upon world leaders “to respond to this moment at the level of challenge it presents. The first step is to buy time to upgrade our 19th-century institutions for a post-A.I. world, and to learn to master A.I. before it masters us.”

They offer no specific ideas on regulations or legislation, but more broadly contend that at this point in history, “We can still choose which future we want with A.I. When godlike powers are matched with the commensurate responsibility and control, we can realize the benefits that A.I. promises.”
