ACR, RSNA Caution FDA Against Autonomous AI in Medical Imaging
In comments submitted to the FDA, the organizations state that autonomous artificial intelligence in medical imaging could bring significant patient safety concerns.
Despite the promise of artificial intelligence in medical imaging, the FDA currently cannot ensure the safety and efficacy of automated AI in the imaging field, according to comments from the American College of Radiology (ACR) and the Radiological Society of North America (RSNA).
The organizations wrote the comments in response to the FDA’s February 2020 public workshop, the “Evolving Role of Artificial Intelligence in Radiological Imaging,” which highlighted higher-risk, autonomously functioning AI.
“While we understand the desire among industry and others to swiftly advance autonomous AI, our organizations strongly believe it is premature for the FDA to consider approval or clearance of algorithms that are designed to provide autonomous image interpretation independent of physician expert confirmation and oversight because of the present inability to provide reasonable assurance of safety and effectiveness,” the entities wrote.
“To date, there is a lack of comprehensive research-based criteria for ensuring algorithms are generalizable and a considerable body of published research to suggest that they often perform poorly across heterogeneous patient populations.”
The authors noted that to date, no autonomously functioning radiology AI algorithms have gone to market. Currently, AI in radiological imaging may only be marketed as a clinical decision support tool designed to assist with specific components of an imaging specialist’s workflow.
The organizations pointed out that in a recent survey of its members, ACR found that only 30 percent of radiologists are using AI, and many of those are using it exclusively in research applications rather than in clinical practice.
Additionally, many of these algorithms don’t account for heterogeneity or variability of imaging equipment, ACR found. The survey also showed that 93 percent of radiologists using AI reported that the technology performed inconsistently, and 95 percent said they wouldn’t use the algorithms without physician overread.
These findings demonstrate the need for the FDA to improve regulatory pathways for all healthcare AI to provide reasonable assurance of product safety and effectiveness, the authors stated.
“AI algorithms should be required by FDA to undergo testing using multi-site heterogeneous data sets to ensure a minimum level of generalizability across diverse patient populations as well as variable imaging equipment and imaging protocols,” the organizations stated.
“Additionally, the FDA should have post-market oversight mechanisms that ensure algorithms function as expected longitudinally. Imaging equipment and protocols change rapidly, and the AI algorithms have to maintain an ability to function effectively in this changing environment.”
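The generalizability check the organizations call for can be illustrated with a brief sketch. This is a hypothetical example, not anything proposed in the comments: it assumes a model has already been run against labeled data from several sites and simply flags sites where accuracy falls below a chosen floor. The site names, data, and threshold are all illustrative.

```python
# Hypothetical sketch: checking whether a model's accuracy holds up
# across imaging sites, in the spirit of the multi-site testing the
# comment letter recommends. All names and numbers are illustrative.

def site_accuracies(results_by_site):
    """results_by_site maps a site name to a list of (prediction, label) pairs."""
    return {
        site: sum(pred == label for pred, label in pairs) / len(pairs)
        for site, pairs in results_by_site.items()
    }

def flag_poor_generalization(results_by_site, floor=0.90):
    """Return the sites whose accuracy falls below a minimum acceptable level."""
    return [site for site, acc in site_accuracies(results_by_site).items()
            if acc < floor]

# Toy results: site_a is fully correct, site_b is right half the time.
results = {
    "site_a": [(1, 1), (0, 0), (1, 1), (0, 0)],
    "site_b": [(1, 0), (0, 0), (1, 1), (0, 1)],
}
print(flag_poor_generalization(results))  # -> ['site_b']
```

A real validation would of course use clinically meaningful metrics (sensitivity, specificity, AUC) stratified by patient demographics, scanner vendor, and protocol, but the per-site breakdown is the essential structure.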
ACR and RSNA also said that rigorous post-market assessment is critical to ensure that organizations maintain patient safety. To achieve this, the entities recommended that FDA develop requirements for continuous monitoring of all algorithms used in clinical practice. Interpreting physicians should monitor the performance of the algorithms, as well as data regarding patient demographics, type of equipment, and imaging protocols.
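The continuous monitoring described above can also be sketched in code. The example below is a hypothetical illustration, not a mechanism from the comments: it tracks agreement between an algorithm's call and the interpreting physician's overread across a rolling window of cases, and raises an alert when agreement drifts below a floor. The window size and threshold are assumed values.

```python
from collections import deque

# Hypothetical sketch of longitudinal post-market monitoring: compare
# the algorithm's output to the physician overread case by case, and
# alert when rolling agreement drops below a floor. Window size and
# threshold are illustrative assumptions.

class AgreementMonitor:
    def __init__(self, window=100, floor=0.85):
        self.window = deque(maxlen=window)  # True/False agreement per case
        self.floor = floor

    def record(self, algorithm_call, physician_call):
        """Record one case; return True if an alert should fire."""
        self.window.append(algorithm_call == physician_call)
        rate = sum(self.window) / len(self.window)
        # Only alert once the window is full, to avoid noisy early alerts.
        return len(self.window) == self.window.maxlen and rate < self.floor

monitor = AgreementMonitor(window=4, floor=0.75)
cases = [(1, 1), (1, 1), (0, 1), (0, 1)]  # algorithm vs. physician calls
alerts = [monitor.record(a, p) for a, p in cases]
print(alerts)  # -> [False, False, False, True]
```

In practice such a monitor would also log the patient demographics, equipment type, and imaging protocol the authors mention, so that a drop in agreement can be traced to a specific scanner or population.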
The authors also recommended that developers provide clear labeling regarding what equipment and protocols will be supported, and that developers advise that algorithm use be limited to only the devices and protocols that were studied during the validation process.
While RSNA and ACR expressed concerns about using autonomously functioning AI in identifying and ruling out specific diseases, the organizations did note that the technology could have significant clinical benefits for population health management.
Conditions with important clinical ramifications, such as pulmonary emphysema, high body mass index, and osteoporosis, can be identified and quantified by AI algorithms in patients undergoing imaging for other reasons, like trauma or inflammatory diseases.
The authors noted that while these conditions are often identified by the radiologists interpreting these images, the results can become buried in reports and the presence of the diseases never makes it into the patient’s medical record. With autonomous AI, this could be avoided.
“We believe that algorithms that can identify and quantify these disease processes and then transmit the information to the patient’s care team via the EHR could have significant positive impact on population health management analogous to autonomous detection of diabetic retinopathy,” the authors said.
“We believe these types of autonomously functioning AI will enhance patient care and potentially save lives if treatment can be instituted early.”
Ultimately, the FDA would need to establish more rigorous testing, surveillance, and other oversight mechanisms to ensure the safe and effective use of autonomous AI in radiology patient care.
“Before developing pathways for the authorization of autonomously functioning AI in radiological imaging, the FDA should first wait until current AI algorithms have a broader penetrance in the marketplace so that their efficacy and safety in a ‘supervised’ manner can be documented which could then inform decision making regarding the premarket approval and post-market surveillance process requisite for autonomously functioning AI,” the authors concluded.
“If the goal of autonomous AI is to remove the physician from the image interpretation, then the public must be assured that the algorithm will be as safe and effective as the physicians it replaces, which includes the ability to incorporate available context and identify secondary findings that would typically be identified during physician interpretation.”