Diagnostic Artificial Intelligence Models Can Be Tricked By Cyberattacks
Researchers discovered that diagnostic artificial intelligence models used to detect cancer were fooled by cyberattacks that falsify medical images.
Diagnostic artificial intelligence (AI) models hold promise in clinical research, but a new study conducted by University of Pittsburgh researchers and published in Nature Communications found that cyberattacks using falsified medical images could fool AI models.
The study shed light on the concept of “adversarial attacks,” in which bad actors aim to alter images or other data points to make AI models draw incorrect conclusions. The researchers began by training a deep learning algorithm that was able to identify cancerous and benign cases with more than 80 percent accuracy.
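The paper's exact model and training setup are not detailed here, but a minimal sketch of that kind of binary mammogram classifier might look like the following. PyTorch is assumed, and the dataset path, backbone choice, and hyperparameters are placeholders rather than the study's own:

```python
# Minimal sketch (not the authors' code): fine-tuning a standard CNN to
# classify mammogram images as benign or cancerous. The folder layout
# data/train/{benign,cancer}/ and all hyperparameters are hypothetical.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

train_set = datasets.ImageFolder("data/train", transform=transform)
train_loader = DataLoader(train_set, batch_size=32, shuffle=True)

model = models.resnet18(weights=None)          # any CNN backbone would do
model.fc = nn.Linear(model.fc.in_features, 2)  # two classes: benign, cancer

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

for epoch in range(10):
    for images, labels in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```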
Then, the researchers developed a “generative adversarial network” (GAN), a computer program that generates falsified images by inserting cancerous regions into negative mammograms or removing them from positive ones, with the goal of confusing the model.
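The study's actual GAN architecture is not reproduced here; the sketch below only illustrates the general idea under stated assumptions: a generator modifies a mammogram so that a discriminator still judges it realistic while a frozen diagnostic classifier is pushed toward the attacker's chosen label. All network shapes, loss weights, and names are hypothetical.

```python
# Conceptual sketch only, not the paper's implementation.
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Image-to-image network that adds a small learned modification."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1), nn.Tanh(),
        )
    def forward(self, x):
        return torch.clamp(x + 0.1 * self.net(x), 0.0, 1.0)

class Discriminator(nn.Module):
    """Judges whether an image looks like a real mammogram."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Flatten(), nn.LazyLinear(1),
        )
    def forward(self, x):
        return self.net(x)

def attack_step(gen, disc, diagnostic_model, real_images, target_labels,
                g_opt, d_opt):
    """One training step: stay realistic AND flip the diagnostic prediction."""
    bce = nn.BCEWithLogitsLoss()
    ce = nn.CrossEntropyLoss()
    n = len(real_images)

    fake_images = gen(real_images)

    # Discriminator learns to score real images as 1 and generated ones as 0.
    d_loss = bce(disc(real_images), torch.ones(n, 1)) + \
             bce(disc(fake_images.detach()), torch.zeros(n, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator tries to look real to the discriminator and to push the
    # (frozen, eval-mode) diagnostic classifier toward the attacker's target.
    g_loss = bce(disc(fake_images), torch.ones(n, 1)) + \
             ce(diagnostic_model(fake_images), target_labels)
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```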
The AI model was fooled by 69.1 percent of the falsified images. Of the 44 positive images made to look negative, the model identified 42 as negative. Of the 319 negative images doctored to look positive, the AI model classified 209 as positive.
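Those counts are consistent with the reported fooling rate, as a quick check shows:

```python
# Quick check of the reported figures: 42 + 209 misclassified fakes out of
# 44 + 319 total falsified images matches the stated 69.1 percent.
fooled = 42 + 209   # positives made to look negative + negatives made to look positive
total = 44 + 319
print(f"{fooled / total:.1%}")   # -> 69.1%
```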
“What we want to show with this study is that this type of attack is possible, and it could lead AI models to make the wrong diagnosis — which is a big patient safety issue,” Shandong Wu, PhD, the study’s senior author and associate professor of radiology, biomedical informatics, and bioengineering at the University of Pittsburgh, explained in a press release.
“By understanding how AI models behave under adversarial attacks in medical contexts, we can start thinking about ways to make these models safer and more robust.”
Artificial intelligence models have become increasingly useful in improving cancer care and early diagnosis. But as with any new technology, researchers should consider cyber risks.
Later in the experiment, the researchers asked five radiologists to judge whether mammogram images were real or fake. The radiologists distinguished authentic from falsified images with varying accuracy, ranging from 29 to 71 percent depending on the individual.
“Certain fake images that fool AI may be easily spotted by radiologists. However, many of the adversarial images in this study not only fooled the model, but they also fooled experienced human readers,” Wu continued.
“Such attacks could potentially be very harmful to patients if they lead to an incorrect cancer diagnosis.”
The sheer volume of sensitive data that AI models rely on makes them a valuable asset to protect and an enticing target for threat actors. In addition, clinical researchers and healthcare organizations should weigh cyber risks before engaging a third-party AI vendor.
The researchers are now exploring “adversarial training” for the AI model, which would involve pre-generating adversarial images and teaching the model that those images are falsified. AI models can run with little day-to-day intervention, but it remains crucial that humans oversee their safety and security. With adequate security practices in place, AI could become part of healthcare’s infrastructure on a larger scale.
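As described, the idea is to fold pre-generated fakes into training with their own label, so the model learns to flag manipulation rather than misread it. A rough sketch, assuming a hypothetical generator and a three-class label scheme (neither detail is from the paper):

```python
# Hedged sketch of the described "adversarial training" idea: pre-generated
# fake images join the training set under an extra "falsified" label.
import torch
from torch.utils.data import TensorDataset, ConcatDataset

BENIGN, CANCER, FALSIFIED = 0, 1, 2   # assumed three-class label scheme

def build_augmented_dataset(real_images, real_labels, generator):
    """Combine real mammograms with GAN-generated fakes labeled FALSIFIED."""
    with torch.no_grad():
        fake_images = generator(real_images)   # pre-generated adversarial images
    fake_labels = torch.full((len(fake_images),), FALSIFIED, dtype=torch.long)
    return ConcatDataset([
        TensorDataset(real_images, real_labels),
        TensorDataset(fake_images, fake_labels),
    ])

# The classifier's output head would then predict three classes instead of two,
# e.g. model.fc = nn.Linear(model.fc.in_features, 3), with training otherwise unchanged.
```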