The call for an AI pause points to a major concern
The call to pause development of large language models for six months comes as AI systems grow more powerful and advance faster than data and privacy concerns can be addressed.
The growing popularity of generative AI systems and large language models is causing concern among many AI experts, including those who helped create the systems.
This week, more than 1,500 AI researchers and tech leaders, including Elon Musk, Stuart Russell and Gary Marcus, signed an open letter by the nonprofit Future of Life Institute calling on all AI labs and vendors to pause giant AI experiments and research for at least six months.
"Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable," the letter says.
The organization and the signatories ask that researchers cease training AI systems more powerful than OpenAI's GPT-4. During that time, AI labs and experts should jointly develop and implement "a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts."
Future of Life had not responded to a request for comment by the time of publication.
The likelihood of a pause
For some, the goal of Future of Life, an organization dedicated to mitigating the risks of AI and other technologies, feels impossible to achieve.
"It's a prisoner's dilemma," said Sarah Kreps, director of the Cornell Tech Policy Institute at Cornell University, referring to the paradox in decision analysis in which two self-interested entities don't produce the best result. "There's no way to collectively get all of the different entities that are working on these language models to collectively pause."
The letter is reminiscent of then-President Barack Obama's 2009 call for disarmament and a world without nuclear weapons, Kreps said.
Obama pledged the United States would give up its nuclear weapons -- but only once everyone else had gotten rid of theirs.
Similarly, not only would it be hard to get all the AI labs and vendors to pause the training of systems more powerful than GPT-4, but most labs would also likely continue training their LLMs once they realized others were not stopping.
"It looks good on paper, but it's completely unrealistic," Kreps said.
But CEOs and other executives of some of the tech vendors that develop generative AI and other AI systems joined the dozens of academics and researchers as signatories to the Future of Life document.
"Stability AI supports efforts to improve governance and transparency in artificial intelligence. While we don't agree with everything in this letter, such as an enforced pause on development, we share the spirit of this initiative," according to a spokesperson for open source AI vendor Stability AI, widely known for its image-generating platform, Stable Diffusion.
"In line with our commitment to open development, we welcome further dialogue to discuss emerging challenges and develop new solutions that benefit everyone."
The letter asks for self-regulation, as well as for government action to impose a moratorium if and when a system more advanced than GPT-4 comes out.
That's a big "ask," said Michael Bennett, director of the education curriculum and business lead for responsible AI at Northeastern University.
"Pick any technological system -- and it doesn't have to be anywhere as complex as generative AI or as esoteric as generative AI -- and there's almost always unanticipated as well as unintended consequences of the implementation and the introduction into broader society of the technology," Bennett said.
The case for a pause
While some of the letter's conditions may seem impossible or, at best, implausible, the fact that they come mainly from AI experts may add balance to the recent excitement surrounding the technology, he continued.
The AI arms race has generated intense interest in different iterations of ChatGPT, GPT-4 and Google's Bard, among others, over the past few months.
As a result, the AI world has seen increased attention from consumers, government agencies and enterprises. While some in the conversation about generative AI have expressed concerns about the technology's unexpectedly rapid advancement, most of the discussion has been characterized by enthusiasm.
"[They're trying] to generate and make clear for purity of energy and focus on both those questions: what this technology means and how -- presumably through regulation -- to shape its development and implementation and essentially govern its fate," Bennett said of the signatories.
The fact that GPT-4 is markedly more advanced than GPT-3 suggests that the next chapters of the LLM narrative will be successive versions more powerful than one can imagine, Bennett added.
It's then possible to imagine a world in which science fiction and reality begin to mix. There could come a point at which, instead of humans working to create AI, humans and AI work together to create more powerful AI, he said.
"We've seen how quickly the last two iterations of OpenAI's work has arrived, and presumably that's with largely human teams. And if those teams become 50-50 human-AI teams or predominantly AI teams ... then we might see something shockingly advanced in a short period of time," Bennett said.
Among the tech industry executives who signed on to the pause letter is Barry Devlin, founder of data management firm 9sight Consulting.
Devlin thinks the main problem is the way generative AI gathers the information it uses to produce its outputs.
"This rapid, profit-driven growth in generative AI -- which uses as its lexicon the clearly biased, deeply flawed, and completely un-curated corpus of internet text and imagery -- is a massive, unregulated social experiment," Devlin said. "It will almost certainly be used to spread far more disinformation at far greater cost -- climate, human, social, and economic -- than any good that could ever come from it."
For Chenxi Wang, venture capitalist and founder of Rain Capital, an AI pause is a way to slow the rapid pace at which these systems are being developed.
"Let's take a step back and look at all the societal, security, and privacy impact of moving so fast, and let's put some guardrails to this whole movement," Wang said.
While the excitement surrounding these systems is warranted, there hasn't been enough attention to training data and privacy safeguards for these models, she added.
Wang conceded that it might be ambitious to pause the whole of AI completely. However, a slowdown and a deeper look at how these systems work would help address the potential impacts they will have on society later on, she added.
"Generative AI is generating new information as opposed to making a decision," Wang said. "We need to have assurance that that information is valid."
An alternative to a pause
While the fears around generative AI are understandable, a pause is not the correct response, maintained Kashyap Kompella, founder and analyst at RPA2AI.
An alternative to a pause is greater transparency for AI systems, and LLMs in particular.
"The risks of LLMs can be better assessed if their training datasets and architectures are disclosed and documented," Kompella added.
Other alternatives include independent audits of the systems and responsible marketing practices, he said.
"If responsible marketing practices, instead of hyperbole, is used in AI marketing, the true nature of AI systems will be understood, and risk mitigation measures can be put in place," Kompella said.
While many agree that a pause is highly unlikely, the open letter could still prompt the AI community to reevaluate the systems it puts forth.
"It's like an apocalyptic warning," Bennett said. "If this letter is useful in helping quicken the thought and the actions of our policymakers, then great."
However, if the public's response engenders no change and the letter is instead seen as hyperbolic, that will be problematic, he said.