Foundation models explained: Everything you need to know neuro-symbolic AI
X

Beyond AI doomerism: Navigating hype vs. reality in AI risk

As AI becomes increasingly widespread, viewpoints featuring both sensationalism and real concern are shaping discussions about the technology and its implications for the future.

With attention-grabbing headlines about the possible end of the world at the hands of an artificial superintelligence, it's easy to get caught up in the AI doomerism hype and imagine a future where AI systems wreak havoc on humankind.

Discourse surrounding any unprecedented moment in history -- the rapid growth of AI included -- is inevitably complex, characterized by competing beliefs and ideologies. Over the past year and a half, concerns have bubbled up regarding both the short- and long-term risks of AI, sparking debate over which issues should be prioritized.

Although considering the risks AI poses and the technology's future trajectory is worthwhile, discussions of AI can also veer into sensationalism. This hype-driven engagement detracts from productive conversation about how to develop and maintain AI responsibly -- because, like it or not, AI seems to be here to stay.

"We're all pursuing the same thing, which is that we want AI to be used for good and we want it to benefit people," said Brian Green, director of technology ethics at the Markkula Center for Applied Ethics at Santa Clara University.

Long-term AI concerns

As AI has gained prominence, so has the conversation surrounding its risks. Concerns range from immediate ethical and societal harms to long-term, more hypothetical risks, including whether AI could pose an existential threat to humanity. Those focused on the latter, a field known as AI safety, see AI as both an avenue for innovation and a source of possibly devastating risks.

Spencer Kaplan, an anthropologist and doctoral candidate at Yale University, studies the AI community and its discourse around AI development and risk. During his time in the San Francisco Bay Area AI safety scene, he's found that many experts are both excited and worried about the possibilities of AI.

"One of the key points of agreement is that generative AI is both a source of incredible promise and incredible peril," Kaplan said.

One major long-term concern about AI is existential risk, often abbreviated as x-risk, the fear that AI could someday cause the mass destruction of humans. An AI system with unprecedented and superhuman levels of intelligence, often referred to as artificial general intelligence (AGI), is considered a prerequisite for this type of destruction. Some AI safety researchers postulate that AGI with intelligence indistinguishable from or superior to that of humans would have the power to wipe out humankind. Opinions in the AI safety scene on the likelihood of such a hostile takeover event vary widely; some consider it highly probable, while others only acknowledge it as a possibility, Kaplan said.

Some circles hold a prevailing belief that long-term risks are the most concerning, regardless of their likelihood -- a concept influenced by tenets of effective altruism (EA), a philosophical and social movement that first gained prominence in Oxford, U.K., and the Bay Area in the late 2000s. Effective altruists' stated aim is to identify the most impactful, cost-effective ways to help others using quantifiable evidence and reasoning.

In the context of AI, advocates of EA and AI safety have coalesced around a shared emphasis on high-impact global issues. In particular, both groups are influenced by longtermism, the belief that focusing on the long-term future is an ethical priority and, consequently, that potential existential risks are most deserving of attention. The prevalence of this perspective, in turn, has meant prioritizing research and strategies that aim to mitigate existential risk from AI.

What is AI doomerism?

Fears about extinction-level risk from AI might seem widespread; a group of industry leaders publicly said as much in 2023. A few years prior, in 2021, a subgroup of OpenAI developers split off to form their own safety-focused AI lab, Anthropic, motivated by a belief in the long-term risks of AI and AGI. More recently, Geoffrey Hinton, sometimes referred to as the godfather of AI, left Google, citing fears about the power of AI.

"There is a lot of sincere belief in this," said Jesse McCrosky, a data scientist and principal researcher for open source research and investigations at Mozilla. "There's a lot of true believers among this community."

As conversation around the long-term risks of AI intensifies, the term AI doomerism has emerged to refer to a particularly extreme subset of those concerned about existential risk and AGI -- often dismissively, sometimes as a self-descriptor. Among the most outspoken is Eliezer Yudkowsky, who has publicly expressed his belief in the likelihood of AGI and the downfall of humanity due to a hostile superhuman intelligence.

However, the term is more often used as a pejorative than as a self-label. "I have never heard of anyone in AI safety or in AI safety with longtermist concerns call themselves a doomer," Kaplan said.

Where does effective acceleration fit in?

To effective accelerationists -- or e/accs, as they're often known online -- short- and long-term risks alike are of little concern. E/accs generally agree that future AI will be extremely capable, McCrosky said. But accelerationists are unapologetically pro-technology, as outlined in venture capitalist Marc Andreessen's techno-optimist manifesto.

"The effective accelerationist response says we need to build AI as quickly as possible," Kaplan said. "Even if it does wipe out humanity, that is a good thing because it means that intelligence itself will advance in the universe."

Green sees e/accs as among the main drivers of AI doomerism rhetoric, as they often use the term pejoratively to dismiss those with concerns of AI risk. "It's basically a group signifier," he said. "If you use that word, then you're signifying you're part of the e/acc group. If you're having it thrown at you, then that's a signifier that the person thinks that you're part of this other group that they're opposed to."

Near-term AI concerns

Although those in AI safety typically see the most pressing AI problems as future risks, others -- often called AI ethicists -- say the most pressing problems of AI are happening right now.

"Typically, AI ethics is more social justice-oriented and looking at the impact … on already marginalized communities, whereas AI safety is more the science fiction scenarios and concerns," McCrosky said.

For years, individuals have raised serious concerns about the immediate implications of AI technology. AI tools and systems have already been linked to racial bias, political manipulation and harmful deepfakes, among other notable problems. Given AI's wide range of applications -- in hiring, facial recognition and policing, to name just a few -- its magnification of biases and opportunity for misuse can have disastrous effects.

"There's already unsafe AI right now," said Chirag Shah, professor in the Information School at the University of Washington and founding co-director of the center for Responsibility in AI Systems and Experiences. "There are some actual important issues to address right now, including issues of bias, fairness, transparency and accountability."

As Emily Bender, a computational linguist and professor at the University of Washington, has argued, conversations that overlook these types of AI risks are both dangerous and privileged, as they fail to account for AI's existing disproportionate effect on marginalized communities. Focusing solely on hypothetical future risk means missing the important issues of the present.

"[AI doomerism] can be a distraction from the harms that we already see," McCrosky said. "It puts a different framing on the risk and maybe makes it easier to sweep other things under the carpet."

Rumman Chowdhury, co-founder of the nonprofit Humane Intelligence, has long focused on tech transparency and ethics, including in AI systems. In a 2023 Rolling Stone article, she commented that the demographics of doomer and x-risk communities skew white, male and wealthy -- and thus tend not to include victims of structural inequality.

"For these individuals, they think that the biggest problems in the world are can AI set off a nuclear weapon?" Chowdhury told Rolling Stone.

We've already seen significant harm from AI. These are real harms that we should be caring a whole lot more about.
Jesse McCroskyPrincipal researcher for open source research and investigations, Mozilla

McCrosky recently conducted a study on racial bias in multimodal LLMs. When he asked the model to determine whether a person was trustworthy based solely on facial images, he found that racial bias often influenced its decision-making process. Such biases are deeply concerning and have serious implications, especially when considered in the context of AI applications, such as military and defense.

"We've already seen significant harm from AI," McCrosky said. "These are real harms that we should be caring a whole lot more about."

The issue with AI doomerism talk

In addition to fearing that discussions of existential risk overshadow current AI-related harms, many researchers also question the scientific foundation for concerns about superintelligence. If there's little basis for the idea that AGI could develop in the first place, they worry about the effect such sensational language could have.

"We jump to [the idea of] AI coming to destroy us, but we're not thinking enough about how that happens," Shah said.

McCrosky shared this skepticism regarding the existential threat from AI. The plateau currently reached by generative AI isn't indicative of the AGI that longtermists worry about, he said, and the path towards AGI remains unclear.

Transformers, the models underlying today's generative AI, were a revolutionary concept when Google published the seminal paper "Attention Is All You Need" in 2017. Since then, AI labs have used transformer-based architectures to build the LLMs that power generative AI tools, like OpenAI's chatbot, ChatGPT.

Over time, LLMs have become capable of handling increasingly large context windows, meaning that the AI system can process greater amounts of input at once. But larger context windows come with higher computational costs, and technical issues, like hallucinations, have remained a problem even for highly powerful models. Consequently, scientists are now contending with the possibility that advancing to the next frontier in AI may require a completely new architecture.

"[Researchers] are kind of hitting a wall when it comes to transformer-based architecture," Kaplan said. "What happens if they don't find this new architecture? Then, suddenly, AGI becomes further and further off -- and then what does that do to AI safety?"

Given the uncertainty around whether AGI can be developed in the first place, it's worth asking who stands to benefit from AI doomerism talk. When AI developers advocate for investing more time, money and attention into AI due to possible AGI risks, a self-interested motive may also be at play.

"The narrative comes largely from people that are building these systems and are very excited about these systems," McCrosky said. While he noted that AI safety concerns are typically genuine, he also pointed out that such rhetoric "becomes very self-serving, in that we should put all our philanthropic resources towards making sure we do AI right, which is obviously the thing that they want to do anyway."

Bringing discourses together

Despite the range of beliefs and motivations, one thing is evident: The dangers associated with AI feel incredibly tangible to those who are concerned about them.

A future with extensive integration of AI technologies is increasingly easy to imagine, and it's understandable why some genuinely believe these developments could lead to serious dangers. Moreover, people are already affected by AI every day in unintended ways, from harmless but frustrating outcomes to dangerous and disenfranchising ones.

To foster productive conversation amid this complexity, experts are emphasizing the importance of education and engagement. When public awareness of AI outpaces understanding, a knowledge gap can emerge, said Reggie Townsend, vice president of data ethics at SAS and member of the National AI Advisory Committee.

"Unfortunately, all too often, people fill the gap between awareness and understanding with fear," Townsend said.

One strategy for filling that gap is education, which Shah sees as the best way to build a solid foundation for those entering the AI risk conversation. "The solution really is education," he said. "People need to really understand and learn about this and then make decisions and join the real discourse, as opposed to hype or fear." That way, sensational discourse, like AI doomerism, doesn't eclipse other AI concerns and capabilities.

Technologists have a responsibility to ensure that overall societal understanding of AI improves, Townsend said. Hopefully, better AI literacy results in more responsible discourse and engagement with AI.

Townsend emphasized the importance of meeting people where they are. "Oftentimes, this conversation gets way too far ahead of where people actually are in terms of their willingness to accept and their ability to understand," he said.

Lastly, polarization impedes progress. Those focused on current concerns and those worried about long-term risk are more connected than they might realize, Green said. Seeing these perspectives as contradictory or in a zero-sum way is counterproductive.

"Both of their projects are looking at really important social impacts of technology," he said. "All that time spent infighting is time that could be spent actually solving the problems that they want to solve."

In the wake of recent and rapid AI advancements, harms are being addressed on multiple fronts. Various groups and individuals are working to train AI more ethically, pushing for better governance to prevent misuse and considering the impact of intelligent systems on people's livelihoods, among other endeavors. Seeing these efforts as inherently contradictory -- or rejecting others' concerns out of hand -- runs counter to a shared goal that everyone can hopefully agree on: If we're going to build and use powerful AI, we need to get it right.

Olivia Wisbey is associate site editor for TechTarget Enterprise AI. She graduated from Colgate University with Bachelor of Arts degrees in English literature and political science, where she served as a peer writing consultant at the university's Writing and Speaking Center.

Lev Craig contributed reporting and research to this story.

Next Steps

How to use the NIST CSF and AI RMF to address AI risks

Dig Deeper on AI technologies