A look at 'risk of extinction from AI' statement

The statement equates the potential risk of human extinction from AI to that of nuclear war and pandemics. However, some argue society should instead mitigate existing risks such as AI bias.

Weeks after hundreds of tech leaders, including Elon Musk, signed an open letter calling for a six-month pause on training powerful AI systems, a group of top AI experts put their names to another statement that bluntly equates the danger of AI technology to that of nuclear war and pandemics.

This time, a similar group of leaders called not for a pause but for global regulatory action.

The Center for AI Safety (CAIS), an organization dedicated to the safe development and deployment of AI technology, on May 30 released a short statement that reads, "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."

The statement was signed by notable AI leaders such as Demis Hassabis, CEO of Google DeepMind; Sam Altman, OpenAI CEO; Dario Amodei, CEO of AI research and safety vendor Anthropic; and Geoffrey Hinton, former Google engineer and emeritus professor of computer science at the University of Toronto.

The current AI risk

While some agree with CAIS' statement that preventing extinction from AI should be a top priority, others, such as Chirag Shah, a professor in the Information School at the University of Washington, see that fear as somewhat far-fetched.

"The real dangers of AI are already happening, and we should be paying attention to them more than what might or might not happen down the road," Shah said. "More accurately we should not lose sight of the dangers of AI right now as we get distracted by the possible doomsday scenarios."

Current problems with AI technology include racial, gender and other forms of bias and discrimination built into AI models -- including widely popular large language models -- as well as inequity in AI-based decision-making and a lack of transparency, Shah said. He added that these are ethical, legal and economic issues that must be addressed more urgently.

"Paying too much attention to 'extinction from AI' and not doing enough about these current problems is like constructing fallout shelters at the expense of feeding the population that is going hungry right now," he said.

Instead, Shah argued, AI leaders should focus their attention and resources on the problems at the forefront of AI technology right now.

The risk of extinction from AI

However, despite the current problems associated with AI technology, there is still a need for caution because the potential destructive power of the technology is unknown, said Sarah Kreps, professor at Cornell University and director of the Cornell Tech Policy Institute.

"What we're seeing with these statements is a precautionary principle," Kreps said. "Right now, we don't know what we don't know. So why not exercise some caution and call for risk mitigation against the tail risks associated with AI?"

CAIS' proposal to make mitigating the risk of human extinction a top global priority is modest, Kreps added. Unlike the call for an AI pause, which many saw as unrealistic, it essentially proposes an international treaty, she said.

"I see this more as a call to map the risks associated with AI -- not where AI is today but a future world of artificial global intelligence -- so we can guard against those risks," Kreps continued.

Once those risks are identified and laid out, it will become clearer whether the risk associated with AI is equivalent to that of nuclear war and pandemics, she added.

"We need more awareness and study of those risks so we can see what mitigation strategies should follow," Kreps said.

However, even though the likelihood and consequences of a pandemic, nuclear war and AI-caused extinction all differ, the statement is meant to make a point to the public, said Kashyap Kompella, CEO and analyst at RPA2AI Research.

"The letters are a campaign to persuade the public, and there is an element of hyperbole due to that," he said.

Moreover, when deciding which AI risk to prioritize -- the risk of bias or the risk of extinction, for example -- perhaps there's no need to choose.

"I'd say, 'Why not both?' Let's hope there will be better guardrails against faulty AI systems as we move forward," Kompella said.

While a global treaty is not currently on the table, the U.S. and other governments, including the European Union and China, have recently introduced legislation and other regulatory measures to address known AI risks such as bias and lack of transparency.

CAIS did not respond to a request for comment.

Esther Ajao is a news writer covering artificial intelligence software and systems.
