Getty Images/iStockphoto

How the Executive Order on AI Will Impact Healthcare Cybersecurity

President Biden’s executive order on safe, secure, and trustworthy AI emphasizes the need to establish rigorous security standards, which will have an impact on healthcare cybersecurity.

Artificial intelligence (AI) continues to become ingrained into our society, and the regulations and guidance that govern it are evolving to match. In October 2023, President Biden issued an Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, building upon previous guidance such as the Blueprint for an AI Bill of Rights and the AI Risk Management Framework.

Together, these works aim to guide the nation through the development and deployment of safe, secure, and transparent AI technologies across all sectors. This includes healthcare, an industry in which AI has become an integral part of care coordination and data management.

But the Biden administration acknowledged that developing and governing reliable AI tools will not be an easy feat.

“Artificial intelligence (AI) holds extraordinary potential for both promise and peril. Responsible AI use has the potential to help solve urgent challenges while making our world more prosperous, productive, innovative, and secure,” the executive order stated.

“At the same time, irresponsible use could exacerbate societal harms such as fraud, discrimination, bias, and disinformation; displace and disempower workers; stifle competition; and pose risks to national security. Harnessing AI for good and realizing its myriad benefits requires mitigating its substantial risks.  This endeavor demands a society-wide effort that includes government, the private sector, academia, and civil society.”

With these risks in mind, two major focus areas of the executive order were security and privacy – subjects that healthcare practitioners were already deeply familiar with.

For example, the executive order directed various federal agencies to work on establishing standards and best practices for AI security.

The National Institute of Standards and Technology (NIST) will be required to set standards for red-team testing to ensure the safety of AI tools prior to public release. The “AI red-teaming” will involve a structured testing effort to identify flaws, vulnerabilities, and unforeseen behaviors in an AI system. The Biden Administration also called on the Cybersecurity and Infrastructure Security Agency (CISA) to conduct AI red-teaming for generative AI tools.

What’s more, the executive order instructed the Department of Homeland Security to apply NIST’s standards to critical infrastructure sectors and establish the AI Safety and Security Board. The Departments of Energy and Homeland Security were also tasked with addressing AI systems’ threats to critical infrastructure and cybersecurity risks.

“Together, these are the most significant actions ever taken by any government to advance the field of AI safety,” the executive order noted.

To Beth Mosier, healthcare and life sciences director at West Monroe, the red-teaming directives are an area that AI developers and users should keep an eye on, especially in a highly targeted sector like healthcare.

“One of the first things the executive order addresses is this concept of the red team test. And so, it's escalating it from what do you as individuals need to be thinking of, to what do we as a nation need to be thinking of to make sure that when it comes to the highest levels, we're identifying those things that pose risks to national security, public health, and safety,” Mosier noted.

“It’s through that broader lens also that they're going to say, "Here's what the best looks like, can you live up to it? And if not, what's your plan to get there?"

Healthcare security and privacy experts have already raised concerns about the danger of AI-assisted cyberattacks and the potential for HIPAA violations related to AI chatbots.

For example, in July 2023, HHS issued a threat brief about the ways in which threat actors might use AI to exploit vulnerabilities, overwhelm human defenses, and automate attack processes. HHS directed defenders to the NIST AI Risk Management Framework as a tool to mitigate these threats.

The red-teaming activities required by the executive order will also help to reduce risk across AI tools, making them more reliable for end-users across healthcare and other sectors.

On the privacy front, the Biden Administration noted plans to enforce existing consumer protection laws and implement safeguards against fraud, unintended bias, discrimination, and infringements on privacy, all of which have been lasting concerns surrounding AI use in healthcare.

“Such protections are especially important in critical fields like healthcare, financial services, education, housing, law, and transportation, where mistakes by or misuse of AI could harm patients, cost consumers or small businesses, or jeopardize safety or rights,” the executive order continued.

The executive order also emphasized that the federal government would work to ensure that the collection and retention of data is lawful, taking steps to make re-identification harder and mitigating privacy and confidentiality risks in the process.

“People expect privacy when seeking healthcare, and they should not have to give up their privacy in return for receiving care,” Mosier added. “Those are fundamental rights that we as Americans expect, and so it is up to our government to provide oversight and make sure that happens.”

The executive order and other federal guidance shed light on the federal government’s vision for the future of AI governance, which signals an increased focus on decreasing bias and maintaining privacy and security.

For AI developers, that means preparing to document the safety and security of their products. Meanwhile, healthcare organizations can expect to see improvements and refinements in AI technologies in the near future.

To Mosier, the actions outlined in the executive order also align with how the White House has handled governance emerging technologies in the past, signifying a unification of siloed agencies under one common goal.

“This play is nothing new. In terms of bringing medical devices and drugs to market, we rely on the FDA. When we think about national standards in technology, that's NIST. We have standardized bodies who provide oversight and guidance in many areas of our lives, and I think this is no different, it's just an emerging area, a new need,” Mosier said.

“But I think what hopefully they're trying to do is create more of a fabric, if you will, to connect all of those disparate parties so that there's some sense of, okay, we all have a role in this. NIST has a role, FDA still has a role, lots of other bodies have roles, but the bodies generated out of this, and the guidance that are coming out of this executive order will be the ones that provide the fabric to tie everyone together.”

Next Steps

Dig Deeper on Cybersecurity strategies