
5 pitfalls to avoid when using GenAI for patient engagement

Issues like algorithmic bias and a deepening digital divide are among the potential pitfalls of using GenAI for patient engagement.

Generative AI has demonstrated potential for enhancing digital patient engagement, but as with any technological innovation, there are pitfalls healthcare professionals will want to avoid to realize its full benefits.

Currently, GenAI is proving useful in automating low-level patient-provider communication. Chatbots, online symptom checkers, call center automation and patient portal add-ons are making it easier for patients to connect with the health system without necessarily requiring provider time. This is key as both patient activation and clinician burnout rise.

But digital transformation is hardly ever easy.

Risks of medical misinformation, algorithmic bias, poor transparency, limited clinician oversight and the potential to deepen an existing digital divide loom large. As healthcare professionals consider deploying GenAI for patient engagement, they must also understand the technologies' potential pitfalls.

1. Medical misinformation, inaccuracies

A patient querying a GenAI chatbot might not get accurate health information depending on the data used to train the bot. At best, this means patients might get outdated information; at worst, it could mean they get medical misinformation.

Certain GenAI tools might have been trained on information from before the United States Preventive Services Task Force made updates to recommended preventive care guidelines, for example. In those cases, patients might get outdated information about receiving a specific preventive screening, which could have consequences for patient care access.

The downstream effects are even worse if the algorithm has been trained on medical misinformation. For example, GenAI that has been trained on false claims about the safety and efficacy of certain vaccines might perpetuate this information and encourage vaccine hesitancy.

Patients themselves are already aware of these risks.

In a December 2023 Wolters Kluwer Health survey of 1,000 adult healthcare users, 49% said they're worried GenAI could produce false medical information. Additionally, 86% said a problem with GenAI is not knowing where its training information came from or how it was validated, and 82% pointed out that it could be problematic to train GenAI on unfiltered, unvetted internet searches.

It's not always possible for patients or their providers to vet GenAI or know how a tool's algorithm was trained. However, clinicians can encourage patients to verify the information a tool provides through their own research and to discuss that research and any resulting recommendations with their providers.

Some AI tools are promoting information transparency by telling users when the system last received an informational update and warning them about the risks of taking medical advice from a GenAI chatbot.

2. Algorithmic bias

Perhaps one of AI's biggest challenges is algorithmic bias. Because AI models learn from historical data, nearly all of them absorb and reproduce the biases embedded in that data. Deploying AI for patient engagement, including patient information-seeking and risk-stratifying populations for various interventions, could perpetuate those biases.

For example, a biased algorithm could inappropriately segment patients into subgroups based on race instead of clinical need. A May 2023 report in PLOS Global Public Health found that AI algorithms used to screen for and predict type 2 diabetes over- and underestimated diabetes risk based on race. The consequences of such race-based inaccuracies could include patients receiving too little preventive care and engagement outreach.

Likewise, any medical advice issued from GenAI chatbots could be tainted by algorithmic bias.

Hospital and health system IT teams need to continuously monitor the algorithms they employ for bias. What's more, training AI models on diverse data sets could help mitigate algorithmic bias in the first place.
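For IT teams wondering what continuous bias monitoring might look like in practice, one common approach is to compare a model's error rates across demographic subgroups and flag large gaps for investigation. The sketch below is a minimal, hypothetical illustration of that idea; the subgroup labels, audit data and tolerance threshold are placeholders, not a reference to any specific vendor tool.

```python
from collections import defaultdict

def false_negative_rate_by_group(records):
    """Compute each subgroup's false negative rate: the share of
    truly high-risk patients the model failed to flag."""
    missed = defaultdict(int)     # high-risk patients the model missed
    positives = defaultdict(int)  # all truly high-risk patients
    for group, actual_high_risk, predicted_high_risk in records:
        if actual_high_risk:
            positives[group] += 1
            if not predicted_high_risk:
                missed[group] += 1
    return {g: missed[g] / positives[g] for g in positives if positives[g]}

def flag_disparities(rates, tolerance=0.05):
    """Flag subgroups whose miss rate exceeds the best-performing
    subgroup's rate by more than the tolerance."""
    best = min(rates.values())
    return {g: r for g, r in rates.items() if r - best > tolerance}

# Hypothetical audit data: (subgroup, truly high risk?, model flagged?)
audit = [
    ("group_a", True, True), ("group_a", True, True), ("group_a", True, False),
    ("group_b", True, False), ("group_b", True, False), ("group_b", True, True),
]
rates = false_negative_rate_by_group(audit)
print(rates)                    # e.g. {'group_a': 0.33..., 'group_b': 0.67...}
print(flag_disparities(rates))  # subgroups needing investigation
```

Run on a regular cadence against fresh outcome data, a check like this can surface the kind of race-based over- and underestimation the PLOS Global Public Health report described before it affects care decisions.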

3. Lack of transparency

Healthcare hinges on patient trust, and currently, patients are roughly evenly split on whether they trust AI.

Building patient trust in GenAI will require transparency, particularly regarding how clinicians are using the tools. However, about half of respondents in an Athenahealth/Dynata poll said they don't actually know how the technology is being deployed in healthcare.

To build further patient trust in GenAI, industry leaders can start by being transparent about how they're using the technology.

Disclosures about using GenAI to risk-stratify and triage patients, supplement chatbots and symptom checkers, and automate the call center might go a long way in helping patients better understand and, ultimately, trust these tools. The industry is also debating whether to disclose GenAI use in patient portal messaging.

4. Limited clinician oversight

Perhaps one of GenAI's most promising use cases in patient engagement is the tool's ability to respond to patient portal messages.

Studies have shown that tools like ChatGPT can accurately answer patient queries and even tailor responses to varying health literacy levels.

Meanwhile, other reports have indicated that AI-powered chatbots can have more empathic interactions with patients than clinicians do. This is mostly because clinicians rarely have the time to express empathy when answering patient portal messages, whereas an AI tool can generate empathic language in every response.

Still, AI-generated patient portal responses are risky. As noted above, patients don't fully trust chatbots just yet, and transparency can be an issue. Healthcare providers using GenAI and chatbots to lighten their patient portal inboxes should weigh the pros and cons of disclosing the use of such tools to their patients.

Plus, patient trust is not the only issue. Because GenAI tools have the potential to peddle medical misinformation or give ineffective medical advice, healthcare providers need to review each AI-drafted response before hitting send, according to a group of researchers from Mass General Brigham.

Although human review sounds like good clinical practice when using GenAI to manage the patient portal inbox, it adds complications of its own. For one thing, it's unclear whether GenAI actually saves providers time if they are reviewing every message.

According to a March 2024 UC San Diego Health report, clinicians said GenAI embedded in patient portal messaging didn't necessarily save them time, but it did reduce their cognitive burden. Conversely, a Mayo Clinic study showed that similar technology did save nurses' time.

Regardless of potential time savings for clinicians, it can be hard for hospitals and health systems to verify that clinicians are, in fact, reviewing GenAI-drafted messages before they are sent.
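One way to make that review auditable is to require an explicit clinician sign-off before any GenAI draft leaves the queue, logging who approved each message and whether they edited it. The sketch below is a hypothetical illustration of that pattern, not any particular EHR vendor's workflow; the class names and fields are assumptions for the example.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class DraftReply:
    message_id: str
    ai_draft: str
    final_text: str | None = None
    reviewer: str | None = None
    reviewed_at: datetime | None = None

    def approve(self, reviewer: str, final_text: str) -> None:
        """Record the clinician sign-off required before sending."""
        self.reviewer = reviewer
        self.final_text = final_text
        self.reviewed_at = datetime.now(timezone.utc)

    @property
    def was_edited(self) -> bool:
        """True if the clinician changed the AI draft before sending."""
        return self.final_text is not None and self.final_text != self.ai_draft

def send(reply: DraftReply) -> None:
    """Refuse to send any draft that lacks a recorded review."""
    if reply.reviewer is None:
        raise PermissionError("AI draft must be clinician-reviewed before sending")
    # ... hand off to the portal messaging system, keeping the audit trail ...
    print(f"Sent {reply.message_id}: reviewed by {reply.reviewer}, "
          f"edited={reply.was_edited}")

draft = DraftReply("msg-001", ai_draft="Your lab results look normal.")
draft.approve(reviewer="dr_lee", final_text="Your lab results look normal. "
              "Call us if symptoms persist.")
send(draft)  # raises PermissionError if approve() was never called
```

Logging the edit flag alongside the reviewer also gives quality teams a signal: clinicians who never edit any drafts may be rubber-stamping rather than reviewing.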

5. Deepening the digital divide

Like healthcare's other technological innovations, GenAI runs the risk of deepening an existing digital divide.

The digital divide is the gap between those who have access to technologies and the digital literacy to use them and those who don't. Just as connected health technologies have left some patients behind, GenAI carries the same risk.

There are already signs of a digital divide in terms of AI in healthcare. In the Athenahealth/Dynata poll, patient trust and understanding of GenAI fell along generational lines, with younger patients being more likely to trust and understand the technology than older patients. For example, while 32% of millennials said they trust GenAI in healthcare, only 12% of baby boomers said the same.

Belief in the promise of GenAI for healthcare also fell along socioeconomic lines, the survey found.

High-income respondents were more likely than low-income respondents to agree that AI can help solve healthcare's biggest problems. This tracks with high-income patients being more likely to have access to digital technologies and the digital health literacy necessary to use them.

Ensuring that GenAI doesn't deepen the digital divide will require healthcare professionals to continue discussing the technology with patients and exposing all patients to the tools.

Making assumptions about which patients seem likely to use GenAI can quickly become a self-fulfilling prophecy. Instead, healthcare professionals should discuss the technologies with all patients and provide education and assistance in using the tools when necessary.

By discussing GenAI with all patients, healthcare professionals can also mitigate other potential pitfalls of the tools, including patient concerns about medical misinformation or transparency.

Although GenAI holds promise in enhancing patient engagement, without safeguards around these potential pitfalls, the technology might also further complicate healthcare's digital landscape.

Sara Heath has covered news related to patient engagement and health equity since 2015.
