
Catastrophic AI risks highlight need for whistleblower laws

Developers want AI-specific laws to protect them if they need to expose potential risks from AI systems. Legal experts say existing whistleblower laws may offer little protection.

As AI use grows, so too may the number of whistleblowers alleging dangerous practices at companies, with claims ranging from employment discrimination to the threat of human extinction. Without specific AI laws to protect them, however, these whistleblowers face personal risk.

The dangers of speaking out about AI were highlighted in an open letter this week from a handful of current and former employees of OpenAI, Google DeepMind and Anthropic. The signers noted both the benefits and risks of AI and called for legal protections for whistleblowers in the AI industry, as well as industry practices that facilitate more open discussion.

"Ordinary whistleblower protections are insufficient," they wrote in their letter, titled "A Right to Warn about Advanced Artificial Intelligence."

Attorneys who work with whistleblowers agree. Whistleblower protections are fragmented and apply only to specific circumstances under federal or state statutes. This patchwork may not cover all types of behavior, leaving whistleblowers vulnerable.

"I fear it will take a catastrophe to drive both greater oversight and stronger whistleblower protections for tech-sector whistleblowers," said Dana Gold, senior counsel and director of advocacy and strategy at the Government Accountability Project, a whistleblower protection and advocacy organization.

"We should be very grateful to the AI employees who are speaking out now to prevent one, and we should all decry any reprisal they suffer," she said.

The AI legal landscape

California lawmakers are advancing an AI-specific whistleblower protection law. State Sen. Scott Wiener, a San Francisco Democrat, is sponsoring legislation (SB 1047) to protect workers at companies developing large "frontier" AI systems. This move responds directly to growing concerns over the potential risks posed by advanced AI technologies.

The proposed AI law was approved 32-1 by the state Senate last month and now heads to the state Assembly for further action.

There are currently some 60 federal whistleblower laws covering various areas, such as meat safety, environmental issues, securities violations, tax evasion and airline safety, according to Stephen Kohn, an attorney who represents whistleblowers at Kohn, Kohn & Colapinto in Washington, D.C.

"There should be no reason why a tech or AI whistleblower should have to look at other laws to gain protection," Kohn said.

Nonetheless, he noted that it's possible for an AI whistleblower to bring a concern to authorities. About 45 states, including California, have a public policy exception to at-will employment that covers a range of issues, including the potential effects of AI technology.

Whistleblowers do not need to prove an actual violation of the law to be protected under the exception. If an AI developer has "a valid concern that something could have a catastrophic impact," they should be covered under the public policy exception, Kohn said.

But Geoffrey Rapp, a law professor at the University of Toledo, said at the federal level, an employee who blows the whistle on problems related to AI "would have little protection unless they can situate their reported concerns into a place where statutory protection exists, such as securities fraud violations under Dodd-Frank."

For instance, "if the company lies to its investors about the risks of its AI developments, a whistleblower might have some protection for raising concerns about the fraud against investors, even if the whistleblower has no protection for raising concerns associated with catastrophic AI risks," Rapp said.

Industry response

OpenAI, in response to the open letter, said it supports government regulation of the AI industry and has been actively involved in discussions with policymakers.

"We're proud of our track record providing the most capable and safest AI systems, and believe in our scientific approach to addressing risk," an OpenAI spokesperson said in a statement. "We agree that rigorous debate is crucial, given the significance of this technology, and we'll continue to engage with governments, civil society and other communities around the world."

The firm said it has avenues for employees to express their concerns, including through an anonymous hotline. But that may not be enough, according to experts.

More is needed

"Tech workers who work in privately held companies like OpenAI are, unfortunately, very vulnerable legally," Gold said. There is no one law that protects corporate or private employees "for reporting the range of serious abuses."

Gold noted that while Congress passed the Cyber Incident Reporting for Critical Infrastructure Act in 2022, which requires reporting of certain cyber incidents and threats, "there is no whistleblower protection section which would support employee enforcement of the act."

Gold said she would like to see more leadership in the tech sector, with companies "more proactively embracing the role of employees as early-warning mechanisms to prevent problems that they acknowledge are enormous."

Given the unlikelihood of a legislative fix, Gold said, "the tech industry can lead in implementing professional ethics standards like engineers, lawyers and doctors have to regulate the tech industry as a profession."

As part of that ethics standard, tech employers can make "contractual commitments to zero-tolerance for retaliation," Gold said.

Patrick Thibodeau is an editor at large for TechTarget Editorial who covers HCM and ERP technologies. He's worked for more than two decades as an enterprise IT reporter.
