Health AI industry left to self-regulate as feds change course

Even as Trump rescinded AI executive orders, health AI stakeholders continue to self-regulate, creating guidelines and frameworks for safe and responsible health AI development.

On his first day as the 47th president, Donald Trump rescinded numerous executive orders from the Biden era, including one mandating trustworthy AI development and deployment. Within the first 24 hours, it became evident that the new administration wanted to take a different approach to AI regulation.

While the approach appears to favor less government regulation of AI across industries, including healthcare, experts emphasize that ensuring safe and effective health AI use remains in the hands of the stakeholders who wield the technology, regardless of the government's stance.

The healthcare industry is already creating frameworks to drive safe and responsible AI, mainly because there is no "market incentive to develop an AI that is going to hurt people," Amy Worley, fractional data protection officer and managing director at consulting firm BRG, pointed out.

Several collaboratives have formed amid AI's rapid ascent to support responsible health AI development, including the Coalition for Health AI (CHAI) and the Trustworthy & Responsible AI Network (TRAIN).

However, being largely left to regulate itself means the healthcare industry must continue developing its own guardrails to reduce the risks of AI use.

Understanding the Biden EO & what its rescission means

In 2023, former President Joe Biden issued the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence executive order. The EO outlined eight guiding principles and priorities for AI development, including ensuring AI safety through reliable and standardized evaluations, investing in AI-related education and training, and mitigating bias in AI systems.

Healthcare law experts previously told Healthtech Analytics that the EO provided a framework for creating standards, laws and regulations around AI across industries; encouraged information-sharing and collaboration among federal agencies to establish a foundation for AI regulation; and urged government agencies to work with the private sector to support responsible AI development.

However, they also noted that the EO did not have a strong enforcement mechanism, which is why its rescission will not significantly impact the healthcare industry.

"The executive order in the healthcare delivery space really didn't affect the development or deployment of models," said Brian Anderson, M.D., co-founder and CEO of CHAI.

He added that the EO was meant to spur an internal effort across federal agencies to develop an AI strategy based on the principles and priorities it detailed. Given how quickly the health AI landscape is changing, he said, the fact that the EO didn't result in enforceable regulations is probably a good thing.

"The challenge when you create static regulations is that it doesn't keep pace with the capabilities and the new methodologies of how models are trained and developed," Anderson said.

With rapid advancements in health AI, including the emergence of generative AI (genAI) and agentic AI tools, Anderson noted that the private sector first needs to develop a consensus on safety and efficacy before it can share best practices with policymakers to inform regulations for the fast-changing landscape.

Worley echoed Anderson, noting that the EO "really was still a work in progress." As such, its rescission doesn't fundamentally change health AI development and adoption. Even without the EO, the industry has other frameworks and guidance to rely on, such as the National Institute of Standards and Technology (NIST) AI Risk Management Framework.

The framework, developed in collaboration with the private and public sectors, aims to increase AI trustworthiness and ensure responsible AI design, development, deployment and use. It is voluntary, not specific to any one sector and use-case agnostic.

"What I'm telling private industry is that NIST was really the way to go under the Biden EO, and it's still the way to go under what we know right now from Trump's statement in the January 23rd EO," said Worley.

Further, in 2024, the U.S. Food and Drug Administration (FDA) finalized guidance to streamline the process for approving modifications to AI- and machine learning (ML)-enabled medical devices. Though the guidance is nonbinding, it offers AI developers and users guardrails to ensure safety and efficacy as tools evolve.

"The FDA is very mindful that they don't want to put regulations in place that would slow down the innovation of a drug or a device that could really positively impact human health… And I think regardless of what had happened with the election, we would continue to see that iterative process unfold," Worley said.

The FDA had approved nearly 1,000 AI- and ML-enabled medical devices as of December 2024.

Self-regulating health AI development

With the Trump administration's emphatic pro-business stance and overarching push to reduce regulations across industries, health AI developers and provider organizations seeking to use AI must regulate themselves.

Luckily, the healthcare industry already has significant guardrails, such as existing patient safety rules and guidelines, to support safe AI use.

"An appropriate-level provider still has to be making the decision," Worley pointed out. "They can be informed by AI, but those other laws that we already have about safe products, products that are safe for consumers, still apply. The professional regulation of healthcare providers still applies."

The healthcare community is also organizing itself into collaboratives and working groups to advance health-specific AI governance.

For example, Worley noted that MIT maintains a public database of AI risks compiled from research papers, which can be used alongside the NIST framework.

"I think what I'm seeing is a trend that these frameworks have been developed with a lot of industry [input], and even though they are voluntary, the market is treating them as the baseline sort of table stakes for playing in this space," she said.

CHAI is one such collaborative focused on creating best practice frameworks and providing quality assurance resources to the industry.

"One of the core efforts in CHAI is building a consensus framework for what are the minimum standards in disclosure for a model developer to share about how the model was trained," Anderson said. "So the training methodology as well as the data sets used to train it."

Another CHAI resource is its Applied Model Cards, which are intended to function as nutrition labels for AI tools. These labels provide information on the datasets the solution has been trained on as well as the limitations of its use.

"And so that kind of transparency informs multiple doctors, nurses and patients to have the ability to make more informed decisions on whether or not they want to buy it, whether or not they want to use it," Anderson explained.

These types of resources, frameworks and guidelines developed by health industry groups will be critical to preventing AI-related harm while the federal government decides its strategy for regulating health AI.

However, the industry's ability to create its own guidance and guardrails does not mean that the government has no role to play. Worley believes that binding federal regulations would help propel the health AI industry.

"The way that I explain it is just artists work better with constraints," she said. "The same can also be true in a free market economy. If you sketch out what the constraints are, you can still have a lot of innovation and competition inside those constraints."

It remains to be seen whether the Trump administration will issue binding regulations. But until then, the health AI industry appears committed to learning and self-regulating as it grows.

Anuja Vaidya has covered the healthcare industry since 2012. She currently covers the virtual healthcare landscape, including telehealth, remote patient monitoring and digital therapeutics. 
