FDA Releases AI/ML-Enabled Medical Device Modification Guidance
The FDA released new draft guidance proposing a science-based approach that would allow AI/ML-enabled medical devices to be modified and improved more quickly.
The Food and Drug Administration (FDA) released new draft guidance last week proposing an approach to ensure that artificial intelligence (AI) and machine learning (ML)-enabled devices can be safely and rapidly modified in response to new data.
The guidance, titled ‘Marketing Submission Recommendations for a Predetermined Change Control Plan for Artificial Intelligence/Machine Learning (AI/ML)-Enabled Device Software Functions,’ aims to “further develop a regulatory approach tailored to artificial intelligence/machine learning (AI/ML)-enabled devices to increase patients’ access to safe and effective AI/ML-enabled devices, in order to protect and promote public health.”
The approach is designed to support small, continuous improvements in machine learning-enabled device software functions (ML-DSF) while ensuring the safety and effectiveness of these technologies, in line with the FDA’s 2019 proposed regulatory framework for AI/ML-based Software as a Medical Device (SaMD) modifications and the resulting 2021 AI/ML Action Plan.
The guidance also provides recommendations related to the Predetermined Change Control Plan (PCCP) included in marketing submissions for ML-DSF devices. PCCPs would include detailed descriptions of planned ML-DSF modifications, explanations of the methodology to implement and validate planned modifications, and assessments of the impact of any modifications.
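The guidance does not prescribe a format for these plans, but the three components map naturally onto a structured document. For illustration only, and assuming a hypothetical schema that is not drawn from the FDA guidance itself, a vendor might organize a PCCP's contents like this:

```python
from dataclasses import dataclass, field

@dataclass
class PlannedModification:
    """One anticipated change to the ML-DSF (hypothetical schema)."""
    description: str        # what will change and why
    validation_method: str  # how the change will be implemented and validated
    impact_assessment: str  # expected effect on safety and effectiveness

@dataclass
class PCCP:
    """Illustrative Predetermined Change Control Plan container."""
    device_name: str
    modifications: list[PlannedModification] = field(default_factory=list)

# Example entry: a planned periodic retraining on newly collected data
plan = PCCP(
    device_name="Example ML-DSF",
    modifications=[
        PlannedModification(
            description="Quarterly retraining on newly collected site data",
            validation_method="Held-out test set; noninferiority vs. current model",
            impact_assessment="Expected sensitivity gain; no new risks identified",
        )
    ],
)
```

The field names and example values above are invented; the sketch simply mirrors the three elements the guidance names: descriptions of planned modifications, the methodology to implement and validate them, and an assessment of their impact.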
The press release published alongside the draft guidance further indicates that under the FDA’s proposal, vendors would also be required to describe how necessary information regarding planned modifications would be clearly communicated to device users.
The guidance proposes that these changes apply not only to AI/ML-enabled SaMD but to all AI/ML-enabled device software functions, including those that are part of or control hardware medical devices.
Brendan O'Leary, deputy director of the Digital Health Center of Excellence in the FDA's Center for Devices and Radiological Health (CDRH), explained that the draft guidance is also part of the CDRH’s commitment to advancing health equity.
“The approach FDA is proposing in this draft guidance would ensure that important performance considerations, including with respect to race, ethnicity, disease severity, gender, age, and geographical considerations, are addressed in the ongoing development, validation, implementation, and monitoring of AI/ML-enabled devices,” O’Leary stated in the press release.
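In practice, addressing those considerations during validation and monitoring typically means breaking model performance out by subgroup rather than reporting a single aggregate number. As a minimal sketch of what that might look like, assuming invented toy data, subgroup labels, and a hypothetical review threshold:

```python
from collections import defaultdict

def sensitivity_by_subgroup(labels, predictions, subgroups):
    """Compute per-subgroup sensitivity (true positive rate) so that
    performance gaps across demographic groups are visible during
    validation and post-market monitoring."""
    tp = defaultdict(int)   # true positives per group
    pos = defaultdict(int)  # actual positives per group
    for y, y_hat, g in zip(labels, predictions, subgroups):
        if y == 1:
            pos[g] += 1
            if y_hat == 1:
                tp[g] += 1
    return {g: tp[g] / pos[g] for g in pos if pos[g] > 0}

# Toy data: ground truth, model output, and a demographic attribute per case
labels      = [1, 1, 0, 1, 1, 0, 1, 0]
predictions = [1, 0, 0, 1, 1, 0, 0, 0]
groups      = ["A", "A", "A", "B", "B", "B", "B", "A"]

for group, tpr in sensitivity_by_subgroup(labels, predictions, groups).items():
    flag = " <-- review" if tpr < 0.8 else ""  # hypothetical review threshold
    print(f"group {group}: sensitivity = {tpr:.2f}{flag}")
```

The same pattern extends to any metric and any stratifying variable, such as race, ethnicity, disease severity, gender, age, or geography, as named in the quote above.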
He also noted that the approach would give healthcare providers and other users faster access to safe, effective advancements in these devices, accelerating US medical device innovation and advancing precision medicine.
The FDA is accepting comments on the draft guidance through July 3.
This draft guidance follows increased interest in regulating AI-based medical devices and tools spurred by technological advances in recent years.
In September, the FDA recommended that some AI-driven clinical decision support (CDS) tools be regulated as medical devices.
Tools subject to that guidance include devices that forecast sepsis, flag patient deterioration, predict heart failure hospitalizations, and identify patients who may be addicted to opioids.
These recommendations were driven by the FDA’s recognition that traditional premarket review processes for medical devices were not designed for adaptive technologies such as AI and ML and that these may necessitate a total product lifecycle-based regulatory framework.
The FDA’s efforts come amid increased scrutiny from providers and medical researchers, many of whom argue that the current regulatory approach for AI/ML-based medical devices is insufficient.
Researchers argued last year that this is true of national, centralized regulation of clinical AI in particular, proposing instead a hybrid model of centralized and decentralized regulation.
Under their proposal, decentralized regulation would be the default for most clinical AI applications, and centralized regulation would be used for the highest-risk tasks, which include those for which inference is entirely automated without clinician review, those with a high potential to negatively impact patient health, or those designed to be applied on a national scale.
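The researchers' criteria can be read as a simple triage rule. As an illustrative sketch only, with the criteria paraphrased from their proposal and the function itself hypothetical rather than part of any regulatory framework:

```python
def regulation_tier(fully_automated: bool,
                    high_harm_potential: bool,
                    national_scale: bool) -> str:
    """Assign a clinical AI application to centralized or decentralized
    review under the hybrid model described above (illustrative only)."""
    if fully_automated or high_harm_potential or national_scale:
        return "centralized"   # highest-risk tasks get central review
    return "decentralized"     # default for most clinical AI applications

# e.g., a locally deployed tool whose outputs clinicians review, but whose
# errors could seriously harm patients
print(regulation_tier(fully_automated=False,
                      high_harm_potential=True,
                      national_scale=False))  # -> "centralized"
```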
They posited that such an approach, while not currently feasible, could address the challenges created by the increase in clinical AI applications and the need to adapt applications to account for differences between local health systems.
Editor’s Note: This article was updated to correct a typo at 12:35pm ET on July 13, 2023.