US officials optimistic on AI but warn of risks, abuse
Federal government leaders at RSA Conference 2024 touted the benefits of AI pilot programs but also outlined how a variety of threat actors are currently abusing the technology.
SAN FRANCISCO -- Federal government officials at RSA Conference 2024 touted the enormous benefits of artificial intelligence but also emphasized the need to protect against risks and potential abuse of the technology.
Artificial intelligence and specifically generative AI once again dominated the world's biggest cybersecurity conference, and throughout the week government leaders weighed in on the technology and what it means for both the public and private sectors. In his RSA Conference 2024 keynote on Monday, Secretary of State Antony Blinken unveiled the State Department's U.S. International Cyberspace and Digital Strategy, which outlines how the U.S. government plans to engage and partner with other nations on a range of technology issues, including AI.
"When it comes to AI, again, as confident as we are in its potential, we're deeply aware of its risks: from displacing jobs, to generating false information, to promoting bias and discrimination, to enabling the destabilizing use of autonomous weapons," he said during his keynote. "So we're working with our partners to prevent and address these issues."
Blinken highlighted President Joe Biden's executive order last fall to create standards for safe and secure development of AI, as well as the recent creation of the U.S. AI Safety Institute Consortium, which includes more than 200 private companies such as Google, Microsoft, Nvidia and OpenAI.
"The private sector is a critical partner in this effort, which is why we've worked with leading AI companies on a set of voluntary commitments, like pledging to [conduct] security testing before releasing new products [and] developing tools to help users recognize AI-generated content," Blinken said.
The State Department also began piloting GenAI projects this year to assist with searching, summarizing, translating and even composing documents, which Blinken said frees up staff members to have more face time instead of screen time.
Alejandro Mayorkas, secretary of the Department of Homeland Security, also discussed applications for AI technology in DHS pilot projects. For example, one project combines all criminal investigation reports and uses AI "to identify connections that we would not otherwise be aware of," he said.
"What I would love is for this audience to take a look at DHS in five years and say, 'Wow, I cannot believe how they are using AI to advance their mission.' That is a redefining of the perception of government, not as slothful and labyrinthian but nimble, dynamic and really pushing the envelope ourselves," Mayorkas said.
However, there are significant risks from both internal and external use of AI, he said. To that end, DHS last month released safety and security guidelines for U.S. critical infrastructure organizations regarding AI usage, as well as potential outside threats. Those threats include AI-enhanced social engineering attacks such as deepfake audio and video.
But Mayorkas emphasized that organizations must also consider the risks associated with AI design and implementation. One thing that was made clear in the inaugural meeting of DHS' newly formed AI Safety and Security Advisory Board, he said, was that safe and responsible development of the technology go hand in hand. "We cannot consider the safe implementation to mean a potential perpetuation of implicit bias, for example," he said.
Malicious use of AI
A frequent topic of discussion this week was how threat actors can use and abuse AI technology to enhance their attacks. During a Wednesday session, Rob Joyce, former director of cybersecurity at the National Security Agency, said threat actors of all types have already begun using AI tools to improve phishing emails and other social engineering attacks.
"We're not seeing AI-enabled technical exploitations. We're certainly seeing AI used to scan and find vulnerabilities at scale," Joyce said. "We're seeing AI used to understand some of the technical publications and new CVE publications to help craft N-day exploits. But the tremendous development of 0-days for hacking activity [is] not here yet today."
In a keynote panel discussion Tuesday, Lisa Monaco, deputy attorney general at the U.S. Department of Justice, called AI "an incredible tool" that the DOJ is using for a variety of tasks, from analyzing and triaging the more than 1 million tips received by the FBI each year to assisting with the massive Jan. 6 investigation. But she also said the DOJ is "constantly" reviewing potential threats from AI.
"We are concerned about the ability of AI to lower the barriers to entry for criminals of all stripes and the ability of AI to supercharge malicious actors, whether it's nation-states who are using it as a tool of repression and to supercharge their ability to engage in digital authoritarianism [and] the ability of AI to supercharge the cyber threat and allow hackers to find vulnerabilities at scale and speed and to exploit them," Monaco said.
John Hultquist, chief intelligence analyst at Mandiant, told TechTarget Editorial that while threat actors are undoubtedly abusing AI technology and will continue to do so, he believes it will ultimately benefit cybersecurity and defenders far more than adversaries.
"Ultimately, AI is an efficiency tool, and the adversary is going to use it as an efficiency tool. And I think to a certain extent, the defenders actually have an advantage as far as that because we have the processes and other tools we can integrate it with," he said. "We control it; they don't necessarily control it. And we're constantly putting controls into it to reduce their ability to use it."
Senior security news writer Alex Culafi contributed to this report.
Rob Wright is a longtime reporter and senior news director for TechTarget Editorial's security team. He drives breaking infosec news and trends coverage. Have a tip? Email him.