Can We Stop Runaway AI?

Echoing a theme from another article I recently posted in my Hey! What’s New? column, Matthew Hutson points out in The New Yorker that “increasingly, we’re surrounded by fake people. Sometimes we know it and sometimes we don’t. They offer us customer service on Web sites, target us in video games, and fill our social-media feeds; they trade stocks and, with the help of systems such as OpenAI’s ChatGPT, can write essays, articles, and e-mails. By no means are these AI systems up to all the tasks expected of a full-fledged person. But they excel in certain domains, and they’re branching out.”

Many researchers involved in AI, he notes, believe that today’s fake people are just the beginning. “In their view, there’s a good chance that current AI technology will develop into artificial general intelligence, or A.G.I. – a higher form of AI capable of thinking at a human level in many or most regards.”

A smaller group argues that A.G.I.’s power could escalate exponentially. If a computer system can write code – as ChatGPT already can – it might eventually learn to improve itself over and over until computing technology reaches what’s known as “the singularity”: the point at which it escapes our control. “In the worst-case scenario envisioned by these thinkers, uncontrollable AIs could infiltrate every aspect of our technological lives, disrupting or redirecting our infrastructure, financial systems, communications, and more. Fake people, now endowed with superhuman cunning, might persuade us to vote for measures and invest in concerns that fortify their standing, and susceptible individuals or factions could overthrow governments or terrorize populations.”

But, adds Hutson, “the singularity is by no means a foregone conclusion. It could be that A.G.I. is out of reach, or that computers won’t be able to make themselves smarter. But transitions between AI, A.G.I., and superintelligence could happen without our detecting them; our AI systems have often surprised us. And recent advances in AI have made the most concerning scenarios more plausible. Large companies are already developing generalist algorithms: last May, DeepMind, which is owned by Google’s parent company, Alphabet, unveiled Gato, a ‘generalist agent’ that uses the same type of algorithm as ChatGPT to perform a variety of tasks, from texting and playing video games to controlling a robot arm.”

“Five years ago, it was risky in my career to say out loud that I believe in the possibility of human-level or superhuman-level AI,” Jeff Clune, a computer scientist at the University of British Columbia and the Vector Institute, has said. (Clune has worked at Uber, OpenAI, and DeepMind; his recent work suggests that algorithms that explore the world in an open-ended way might lead to A.G.I.) Now, he said, as AI challenges “dissolve,” more researchers are coming out of the “AI-safety closet,” declaring openly that A.G.I. is possible and may pose a destabilizing danger to society.

Few scientists want to halt the advancement of artificial intelligence. The technology promises to transform too many fields, including science, medicine, and education. But, at the same time, many AI researchers are issuing dire warnings about its rise. “It’s almost like you’re deliberately inviting aliens from outer space to land on your planet, having no idea what they’re going to do when they get here, except that they’re going to take over the world,” Stuart Russell, a computer scientist at the University of California, Berkeley, and the author of Human Compatible, has said. Disturbingly, some researchers frame the AI revolution as both unavoidable and capable of wrecking the world. Warnings are proliferating, but AI’s march continues. How much can be done to avert the most extreme scenarios? If the singularity is possible, can we prevent it?

For the full story in this fast-changing area of technology, check out “Can We Stop Runaway AI?” in The New Yorker.