Security, Privacy Risks of Artificial Intelligence in Healthcare

As regulatory agencies work to catch up to technological advances, the security and privacy risks of artificial intelligence in healthcare remain.

Artificial intelligence (AI) in healthcare has the potential to revolutionize clinical research and care delivery. AI can be used to sift through massive databases and may have applications in mental healthcare and cancer detection.

“AI in healthcare is increasingly being used with respect to medical devices and other types of technologies and therapeutics,” Linda Malek, partner at Moses & Singer and chair of the firm’s Healthcare and Privacy & Cybersecurity practice groups, said in an interview with HealthITSecurity.

“It's used in all kinds of applications in terms of monitoring things like diabetes, in radiology for identifying different kinds of pathologies in chest X-rays, and in the mental health context in terms of identifying depression.”

Recent research also showed that AI may be able to close gaps and mitigate risks in the healthcare cybersecurity space. The market for AI in cybersecurity is projected to grow at a compound annual growth rate (CAGR) of 23.66 percent from 2020 to 2027, according to a report from The Economist Intelligence Unit.
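For a sense of scale, a growth rate like that compounds quickly. Below is a minimal sketch of the arithmetic; the starting market size used in the comments is a made-up placeholder, not a figure from the report.

```python
# Back-of-the-envelope check of what a 23.66% CAGR implies over the
# 2020-2027 projection window cited above.
cagr = 0.2366
years = 2027 - 2020  # seven-year window

growth_multiple = (1 + cagr) ** years
print(f"{years}-year growth multiple: {growth_multiple:.2f}x")
# Prints roughly 4.42x: a hypothetical $10B market in 2020 (a
# placeholder figure, not one from the report) would reach about
# $44B by 2027 at that rate.
```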

Despite its benefits, AI technology is arguably developing faster than regulatory agencies can keep up with. AI vendors in the healthcare space must figure out how to navigate Food and Drug Administration (FDA) regulations, Federal Trade Commission (FTC) guidance, and, in the case of business associates, HIPAA compliance.

In the meantime, security and patient privacy concerns remain top-of-mind in the healthcare sector when it comes to AI.

AI Security and Privacy Concerns

“There are multitudes of ways in which AI is particularly useful in the healthcare context. But there are a whole host of concerns in terms of how we regulate AI, particularly because AI is so dependent on gathering large blocks of data in order to learn,” Malek explained.

“When you consider that, you see that there are data privacy and cybersecurity issues, ethical issues, and safety issues.”

The volume of data that AI models ingest and retain is staggering. Without the proper safeguards and regulatory assurances, AI could pose risks to patient data security and privacy.

When it comes to protected health information (PHI), covered entities have a duty under HIPAA to protect patient data. Engaging with any third-party vendor comes with risks that should be carefully assessed. 

Before entrusting an AI vendor with sensitive data, healthcare organizations should establish business associate agreements (BAAs) to hold vendors to the same rigorous data protection standards.

As AI technology evolves and healthcare organizations integrate it into daily processes, gaps in the regulatory landscape leave the technology in a gray area.

“Beyond the possibility for general abuses of power, AI poses a novel challenge because the algorithms often require access to large quantities of patient data, and may use the data in different ways over time,” a 2021 study published in BMC Medical Ethics suggested.

“The location and ownership of servers and computers that store and access patient health information for healthcare AI to use are important in these scenarios. Regulation should require that patient data remain in the jurisdiction from which it is obtained, with few exceptions.”

While other emerging technologies may be just as exposed to privacy and security issues in healthcare, AI is vulnerable in a different way, Malek said.

AI technology is not necessarily any more vulnerable to manipulation by bad actors than any other technology. That being said, any network-connected technologies used in conjunction with patient care or data analysis should be carefully guarded.

The sheer volume of data, the ability to re-identify previously de-identified data, and the challenge of navigating through the regulatory landscape make AI a unique risk in healthcare security and privacy.

Regulatory Gaps, Challenges of AI in Healthcare

“There is still a gap in terms of how security and privacy should be regulated in this area,” Malek observed. “There is a patchwork of laws that we apply, but they were not designed for AI. Neither is HIPAA.”

Life sciences and biotech companies are constantly creating innovative technologies that can be applied to clinical operations.

“The problem is that HIPAA does not directly regulate entities that are in this space unless they're acting as a business associate for a covered entity,” Malek stated.

Signing a BAA is a great way to ensure data privacy and security to some degree, but HIPAA does not factor in certain new technologies that could pose significant risks. In addition, vendors can get by without a BAA if the data is de-identified, meaning that it can no longer be traced back to an individual and is therefore no longer subject to HIPAA.

“But the problem with the HIPAA definition of de-identification is that technology now can re-identify that data pretty easily. And so, the parameters that HIPAA uses for de-identified purposes may not be sufficient to really take into account what we're seeing now in AI and in other technologies that are used in concert with AI,” Malek pointed out.

“De-identification, as it's currently defined, may not be enough to really protect the anonymity of the data. That's an example where HIPAA doesn't really take this kind of technology into account.”
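Malek’s re-identification point can be made concrete with the classic linkage attack: joining whatever quasi-identifiers survive de-identification (here, ZIP code, birth date, and sex) against an auxiliary dataset that still carries names. The sketch below uses pandas and entirely fabricated records, and it deliberately simplifies; HIPAA’s Safe Harbor method would generalize fields like ZIP code and birth date, but the same join works on any combination of attributes that remains unique to a person.

```python
import pandas as pd

# "De-identified" health records: names removed, but quasi-identifiers
# (ZIP code, birth date, sex) remain. All records are fabricated.
deidentified = pd.DataFrame({
    "zip": ["02139", "10027"],
    "dob": ["1985-03-14", "1992-11-02"],
    "sex": ["F", "M"],
    "diagnosis": ["type 2 diabetes", "depression"],
})

# A hypothetical auxiliary dataset (e.g., a voter roll) linking the
# same quasi-identifiers back to names.
public_records = pd.DataFrame({
    "name": ["Jane Doe", "John Roe"],
    "zip": ["02139", "10027"],
    "dob": ["1985-03-14", "1992-11-02"],
    "sex": ["F", "M"],
})

# Joining on the quasi-identifiers re-attaches identities to diagnoses,
# even though the health data itself contained no names.
reidentified = deidentified.merge(public_records, on=["zip", "dob", "sex"])
print(reidentified[["name", "diagnosis"]])
```

Safeguards such as k-anonymity or differential privacy aim to make a join like this ambiguous rather than exact, which is the kind of protection critics argue HIPAA’s current de-identification standard was not designed to guarantee.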

Where HIPAA falls short, other regulations may be able to help. The FTC has recently taken an interest in AI technologies in order to protect consumers from potential privacy risks. In a historic 2019 settlement, the FTC imposed a $5 billion penalty on Facebook for misleading consumers by telling them that they could opt in to facial recognition technology, even though the setting was on by default.

“I think the FTC will probably step in to do more in terms of regulation around AI in the privacy context. There needs to be transparency so that the patient understands how their data might be used to promote the development of an AI-driven product,” Malek emphasized.

The FDA is also working toward building out a set of regulations and guidance surrounding AI in relation to the privacy and security of medical devices. In September 2021, the FDA released its “Artificial Intelligence and Machine Learning Software as a Medical Device Action Plan,” which provided introductory ideas about how the agency plans to tackle AI technology regulations.

“The FDA’s traditional paradigm of medical device regulation was not designed for adaptive artificial intelligence and machine learning technologies,” the document stated.

“In this potential approach, the FDA would expect a commitment from manufacturers on transparency and real-world performance monitoring for artificial intelligence and machine learning-based software as a medical device, as well as periodic updates to the FDA on what changes were implemented as part of the approved pre-specifications and the algorithm change protocol.”

Working together, regulatory agencies may be able to fill in the gaps and implement safeguards that ensure AI technologies take security and privacy into account, especially when they are used in healthcare.

“Right now, the discussions about AI have been ethical discussions as opposed to regulatory discussions,” Malek stated. “But if a regulatory body like the FDA looks at those questions from a more regulatory standpoint, we'll potentially make more progress.”

Malek presented two best practices that are critically important with respect to AI and healthcare: consent and accountability. These two concepts are key focus areas for the regulatory bodies that are working to create guidelines for AI vendors and should also be top-of-mind for healthcare organizations and AI developers.

Sometimes, data is collected for research purposes and later repurposed for secondary use that the patient may not have consented to. Malek cited a 2016 case in which Google-owned DeepMind, an AI vendor, obtained over a million patient health records without patient consent in the UK.

DeepMind and the Royal Free London NHS Foundation Trust signed a deal in 2015 that provided DeepMind with access to pseudonymized patient data, New Scientist first reported. The agreement was later found to be outside the realm of the UK’s data protection laws.

“Consent is important, but sometimes it is difficult because the contemplated future uses are often not known,” Malek reasoned.

Along with consent, it is equally crucial that accountability is at the forefront of data privacy and security in AI.

“Regulations need to provide guidelines that are clearer for entities that are developing AI so that they know what they need to do to protect the data from a breach and how they will be held accountable if there is a breach,” Malek stressed.

If these regulatory gaps are filled to address the ever-changing technological landscape, privacy and security risks relating to AI could lessen.

Factoring AI Security and Privacy Risks into Your Cybersecurity Program

AI’s benefits in healthcare may largely outweigh its security and privacy risks, but it is still important for healthcare organizations to take these risks into account when developing cybersecurity programs and ensuring HIPAA privacy compliance.

“I think that the cybersecurity protection from a technology standpoint has to be commensurate with the advanced technology that's used in AI. In other words, you can't use antiquated cybersecurity measures to deal with very advanced technology like AI and machine learning,” Malek suggested.

“So, I think being as advanced in your cybersecurity protection efforts as you are in your AI development efforts is going to be critically important.”

At the end of the day, healthcare organizations should always prioritize patient safety, security, and privacy. Anything that may pose a threat to those components must be thoroughly evaluated.

Until regulations surrounding AI technology become clearer, healthcare organizations should focus on ensuring transparency, asking patients for consent, and assessing third-party vendors before implementing new technologies into their infrastructure.
