EU, U.S. at odds on AI safety regulations
The U.S. under the new Trump administration changes course on AI safety, in contrast with the EU, as uncertainty lingers about other Biden-era federal cybersecurity efforts.
A new presidential administration in the U.S. rescinded its predecessor's executive order on AI safety this week, while the European Union will begin enforcing its own new regulations beginning next month, potentially putting multinational companies in a regulatory bind.
Amid a flurry of fresh executive orders from the Trump administration as it took over this week came the rescinding of others made by the Biden administration, including Executive Order 14110 of Oct. 30, 2023, titled "Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence." The order had called for federal data privacy legislation, among other requirements for trustworthy AI systems and safety standards. It also tasked NIST with creating a risk management framework for generative AI and updating its Secure Software Development Framework (SSDF) to encompass generative AI and foundation models.
At the same time, the first two chapters of the European Union's Artificial Intelligence Act will enter enforcement Feb. 2, with several more entering enforcement Aug. 2 and further restrictions on what the legislation calls high-risk AI systems going into effect in August 2027. The initial restrictions include a list of prohibited AI practices, including certain uses of biometric and facial recognition data. By the August deadline, providers and deployers that don't comply with the ban on prohibited AI practices could be subject to administrative fines of up to 35 million euros or up to 7% of their total worldwide annual turnover for the preceding financial year, whichever is higher.
For now, action on AI safety in the U.S. might fall to state and local governments, along with efforts by private-sector groups such as the Cloud Security Alliance's AI Safety Initiative and the Coalition for Secure AI.
"Global organizations face a riddle wrapped in a mystery inside an enigma," said Katell Thielemann, an analyst at Gartner, in an email to Informa TechTarget this week. "This leaves them with few good options besides looking for the common denominators that cross most requirements, while at the same time finding ways to experiment and innovate to remain competitive. … But this is the new reality for the foreseeable future, as harmonization is unlikely any time soon."
EU AI Act: Too much, too soon?
Some industry analysts said they were concerned that a regulation such as the EU's AI Act looks to deploy controls against a technology that is still so nascent and rapidly evolving that it's difficult to know what will even be relevant a few months from now.
"Let's say a year or two ago you'd developed a regulation limiting the size of a large language model -- by now, that would be completely, spectacularly wrong, because the technology is evolving that fast," said Steven Dickens, principal analyst at HyperFrame Research. "It's so nascent nobody knows where it's going to go, which is scary for regulators, and quite rightly so -- it's really scary for the world -- but a bit of a 'wait and see' might not be a bad thing."
Another analyst said he was particularly concerned about sections of the EU law that apply to AI systems designated high-risk, most of them set for enforcement by 2027. These sections require risk management, data governance and technical documentation practices that might be hard to follow, especially when developers use third-party AI services, according to Rob Strechay, an analyst at TheCube Research.
"Devs and their companies that plug in an API from an AI platform as a service [tool] could get in trouble with the law" under these provisions, he said.
Other sections of the EU AI Act pertaining to high-risk systems call for notifications to the European Commission about noncompliant systems, including "the information necessary for the identification of the noncompliant AI system, the origin of the AI system and the supply chain." However, there aren't yet standards or industry consensus on some key aspects of AI supply chain security, such as model and data provenance and signing, according to a paper published by Google engineers in April 2024.
There are economic and cybersecurity downsides to the EU's approach to regulation as well, said Chris Hughes, chief security adviser at software supply chain security company Endor Labs and CEO at Aquia, a cloud and cybersecurity digital services firm.
"The EU's approach already has some companies avoiding the EU market, or not releasing specific products and features to the EU market," Hughes said. "Which may impact not just consumers but economic prosperity and national security in some cases, especially as we see the tie between commercial technology and national security increasingly intertwined."
U.S. DHS board upheavals jangle nerves
Another Biden executive order on cybersecurity issued Jan. 16 has not yet been rescinded, but the Trump administration has disbanded all existing Department of Homeland Security advisory boards, including the Cyber Safety Review Board (CSRB), and there's some talk among legislators about disbanding the Cybersecurity and Infrastructure Security Agency (CISA) as well. Biden's Executive Order 14144 of Jan. 16 also set forth guidelines for AI cybersecurity, including updates to NIST's SSDF.
"While the CSRB has been disbanded, it is absolutely critical that [cybersecurity] work continues to progress at the federal level," said Brian Fox, co-founder and CTO of software supply chain security management company Sonatype and a governing board member at the Open Source Security Foundation, in an email this week. "CISA's work, in particular, is a security blanket that we cannot afford to lose … the agency operates as a guiding voice for the private sector's cybersecurity workforce."
Here, Fox cited incidents such as the Salt Typhoon attacks, which had been investigated by the CSRB, and the Ascension ransomware attack, which left hospitals relying on handwritten notes and unable to provide some care.
"Without this protection and guidance, sophisticated state-backed threat actors have a much easier path into the networks of American organizations," Fox said. "[Incidents such as Salt Typhoon and Ascension] will only become more frequent."
Meanwhile, the EU's Cyber Resilience Act (CRA) and Digital Operational Resilience Act (DORA) both went into their first phase of enforcement over the last month. The CRA places liability for insecure code on manufacturers of commercial products, including code that comes from open source libraries. DORA adds further requirements for business continuity and cybersecurity for financial institutions.
As with the EU's General Data Protection Regulation, multinational companies might just comply with the more stringent EU regulations across the board, according to Dickens.
"Even if the U.S. decides to be regulation-light, big international Fortune 500 companies will probably want to follow those best practices anyway, so why not just adopt them?" he said.
However, smaller businesses in the U.S. should have access to information about the contents and provenance of software as well, Dickens added.
U.S. cyberdefense and national security are also on the line as domestic regulations develop, Fox said.
"Now, more than ever, cybersecurity is one of the most important factors of national security," he said. "Regardless of CISA's future, it is critical to come together across industries and sectors. Private organizations, cybersecurity foundations and consortiums, and public entities must increase intelligence sharing and continue setting guidance around best practices."
Beth Pariseau, senior news writer for Informa TechTarget, is an award-winning veteran of IT journalism covering DevOps. Have a tip? Email her or reach out @PariseauTT.