
Promising patient engagement use cases for GenAI, chatbots

Generative AI's strongest patient engagement use cases center on reducing staff and clinician burden while streamlining the patient experience.

The healthcare industry is in the throes of its digital transformation, including numerous patient engagement use cases for generative AI and chatbots.

Technologies including online symptom checkers, online appointment scheduling and patient navigation, medical search engines and even patient portal messaging are all key test cases for GenAI and patient-facing chatbots. AI-fueled enhancements to these patient engagement technologies promise to reduce staff burden while streamlining the patient experience of care, some experts say.

However, even patient-facing use cases for GenAI are not immune to the tool's greatest potential pitfalls, like the generation of medical misinformation or biased algorithms.

As healthcare experts across the care continuum consider the patient engagement use cases for GenAI and chatbots, they must also consider fail-safes to keep the technology from promoting medical falsehoods and deepening health disparities.

Online symptom checkers

Online symptom checkers give healthcare organizations the opportunity to assess and sometimes assuage patients' medical concerns without a visit to a healthcare provider.

These tools leverage AI chatbot functionality, allowing patients to input their symptoms and producing a list of likely diagnoses. From that list, patients can determine where they should access care (urgent care versus the emergency department, for example) or how best to ride out symptoms on their own, such as when they're showing signs of a common cold.
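
Conceptually, the flow these tools follow is straightforward: collect symptoms, rank likely conditions and recommend a care setting. The Python sketch below illustrates that logic; the condition table, red-flag list and overlap scoring are purely illustrative stand-ins for a vendor's trained diagnostic model, not any real product's algorithm.

```python
# Illustrative symptom-checker flow: score conditions by symptom overlap,
# then route to a care setting. All tables and thresholds are toy values.

RED_FLAGS = {"chest pain", "shortness of breath", "confusion"}

# Stand-in for a trained diagnostic model's knowledge.
CONDITION_SYMPTOMS = {
    "common cold": {"runny nose", "sore throat", "cough"},
    "influenza": {"fever", "cough", "body aches"},
}

def rank_conditions(symptoms: set[str]) -> list[tuple[str, float]]:
    """Score each condition by the fraction of its profile the patient reports."""
    scores = [
        (condition, len(symptoms & profile) / len(profile))
        for condition, profile in CONDITION_SYMPTOMS.items()
        if symptoms & profile
    ]
    return sorted(scores, key=lambda pair: pair[1], reverse=True)

def triage(symptoms: set[str]) -> str:
    """Route the patient; red-flag symptoms short-circuit to the ED."""
    if symptoms & RED_FLAGS:
        return "emergency department"
    ranked = rank_conditions(symptoms)
    if ranked and ranked[0][0] == "common cold":
        return "self-care at home"
    return "urgent care or primary care"

reported = {"runny nose", "cough", "sore throat"}
print(rank_conditions(reported))  # [('common cold', 1.0), ('influenza', 0.33...)]
print(triage(reported))           # self-care at home
```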

These technologies hold a lot of promise for improving the patient experience and operational efficiency. If a patient using an online symptom checker can determine that they can alleviate their symptoms at home, it could save them a visit to a healthcare provider.

From the clinic or hospital perspective, online symptom checkers can help triage patients to the right place, keeping high-acuity settings available for individuals who need more intensive care.

But despite that promise, the data shows that online symptom checkers have mixed efficacy.

In 2022, a group of researchers published a literature review showing limited diagnostic accuracy for online symptom checkers. Depending on the study, diagnostic accuracy ranged from 19% to 37.9%.

The tools did perform better in terms of triage accuracy, or referring patients to the most appropriate care setting. According to the researchers, triage accuracy ranged from 48.9% to 90%.

Patient reception to these tools is also middling.

For example, patients had a tepid response to the COVID-19 symptom checkers that became commonplace during the pandemic's initial outbreak, according to one 2022 analysis.

Set up to help worried patients determine whether their symptoms were typical or if they potentially had the novel virus, these tools held a lot of promise for keeping patient volumes down at over-stressed hospitals.

Although patients were open to using these tools, there were some strings attached, the 2022 study showed. Patients liked the chatbots, but only when the tools appeared competent, displayed human-like qualities and were perceived as equal in quality to human interaction.

Other studies indicate that online symptom checkers could further health inequities. People who use these tools tend to be younger, female and higher in digital health literacy, raising the risk of a steep digital divide.

As AI developers continue to produce and improve these tools, building fail-safes that make them usable for all patients will be key. Efforts should include chatbots that can communicate in multiple languages, closely replicate human interaction and quickly refer patients to a human when requested.
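
As one example of such a fail-safe, a chatbot can watch for requests to speak with a person and escalate immediately rather than continuing the automated exchange. The sketch below is a minimal, hypothetical illustration; the handoff phrases and the chatbot_reply() stub stand in for whatever model and escalation workflow an organization actually deploys.

```python
# Hypothetical human-handoff fail-safe: escalate the moment a patient
# asks for a person. chatbot_reply() is a stub for the deployed model.

HANDOFF_PHRASES = ("human", "agent", "representative", "real person")

def wants_human(message: str) -> bool:
    """Return True when the patient asks to speak with a person."""
    lowered = message.lower()
    return any(phrase in lowered for phrase in HANDOFF_PHRASES)

def chatbot_reply(message: str) -> str:
    # Stand-in for a real model call.
    return "I can help with that. Could you tell me more?"

def respond(message: str) -> str:
    if wants_human(message):
        # In production this would transfer the thread to staff.
        return "Connecting you with a staff member now."
    return chatbot_reply(message)

print(respond("Can I talk to a real person?"))  # Connecting you with a staff member now.
```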

Self-scheduling, patient navigation

GenAI and conversational AI have shown promise in addressing lower-level patient queries that have usually fallen to healthcare professionals. This is good news for today's hard-pressed medical workforce, whose members would rather exercise their clinical expertise than handle appointment scheduling or patient navigation.

Integrating GenAI and chatbots into online appointment scheduling systems has helped fill navigation gaps. Chatbots can help patients book their appointments while also answering logistical questions, like where to park or how to find a certain department within the clinic.

One December 2023 literature review showed that conversational AI can help enhance the appointment scheduling and patient navigation process.

"AI-optimized patient scheduling can decrease the burden on provider time, increase patient satisfaction, and ultimately provide more patient-directed health care and efficiency, but barriers to implementation of these models must be understood," the researchers wrote.

Key considerations for implementing conversational AI into appointment scheduling tools include health equity, access to broadband and patient trust.

Still, using AI during the appointment booking process might hold more promise than peril. While organizations need to make sure their technologies can communicate in different languages and are accessible to all populations, generative AI is a promising option for organizations contending with an overburdened staff.

Low-level patient requests, like booking an appointment slot or asking for directions to the clinic, can be handled well by conversational AI, leaving staff free to manage higher-level issues, like insurance queries, as the routing sketch below illustrates.
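
A minimal version of that routing logic might look like the following, where a toy keyword classifier stands in for a production intent model. The intent labels and keywords are illustrative assumptions, not any vendor's taxonomy.

```python
# Toy intent router: a small set of low-level intents stays with the bot;
# everything else goes to staff. Labels and keywords are illustrative.

BOT_INTENTS = {"book_appointment", "parking_info", "directions"}

def classify_intent(message: str) -> str:
    """Keyword classifier standing in for a trained intent model."""
    lowered = message.lower()
    if "appointment" in lowered:
        return "book_appointment"
    if "park" in lowered:
        return "parking_info"
    if "direction" in lowered:
        return "directions"
    if "insurance" in lowered:
        return "insurance_query"
    return "other"

def route(message: str) -> str:
    """Send low-level requests to the bot and the rest to a human."""
    return "bot" if classify_intent(message) in BOT_INTENTS else "staff"

print(route("I need to book an appointment"))   # bot
print(route("Does my insurance cover this?"))   # staff
```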

Online medical research

ChatGPT and other chatbots are adding to the Dr. Google phenomenon, giving patients an avenue for querying the internet about their symptoms before meeting with a healthcare provider.

Although some clinicians have previously been wary of online medical searches, the data shows ChatGPT and the like are somewhat effective at giving patients accurate medical information.

In one April 2023 test of the GenAI tool, ChatGPT proved effective at giving laypeople information about breast cancer screening. Specifically, the tool answered questions about breast cancer screening correctly 88% of the time. In another similar study from May 2023, researchers found that the tool can provide adequate patient education about preparing for a colonoscopy.

Notably, both studies showed that the information conveyed by the AI chatbot was understandable, which is key given healthcare's emphasis on patient health literacy.

Still, these tools are not infallible, and patients know it.

These types of chatbots are trained on online information and their algorithms often contain bias. These issues can make it hard for patients to trust the information issued by AI chatbots.

In a 2023 Wolters Kluwer survey, 49% of respondents said they were worried GenAI could produce false medical information.

Another 86% said it's problematic that they don't know the sources of the information upon which GenAI has been trained or how that information was validated, and 82% pointed out that it could be problematic to train GenAI on internet searches that have no filter or vetting.

Validating the source material for chatbots and generative AI will be a key imperative for building patient trust in the tools. Additionally, providing other authoritative informational sources, like patient education materials, could help supplement patients' information seeking.
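
One way to operationalize that validation is to ground chatbot answers in a vetted corpus of patient education material and attach the source to every reply, deferring to the care team when no vetted material matches. The sketch below assumes a toy corpus and a hypothetical URL; a real system would pair a retrieval pipeline with curated content.

```python
# Toy grounding layer: answer only from vetted patient education material
# and cite the source; otherwise defer to the care team. The corpus entry
# and URL below are hypothetical.

VETTED_CORPUS = {
    "colonoscopy prep": (
        "Follow your prep instructions exactly and stay hydrated; call your "
        "clinic with any questions.",
        "https://example.org/patient-education/colonoscopy-prep",  # hypothetical URL
    ),
}

def answer(question: str) -> str:
    """Return a vetted answer with its source, or defer to a clinician."""
    lowered = question.lower()
    for topic, (text, source) in VETTED_CORPUS.items():
        if topic in lowered:
            return f"{text}\n\nSource: {source}"
    return "I don't have vetted material on that topic. Please ask your care team."

print(answer("What should I do for colonoscopy prep?"))
```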

Patient portal messages, provider communication

Finally, generative AI finds a key use case in patient portal messaging. When embedded into the patient portal, the technology can assess patient messages and generate a response on behalf of the healthcare providers. This could be key for reducing clinician burnout.

Numerous studies have indicated that chatbots and generative AI are effectively used in the patient portal. In April 2024, a group of researchers from Mass General Brigham found that a large language model implemented in the patient portal's secure messaging tool generated acceptable answers to example patient queries.

In some cases, the expert clinicians reviewing the chatbot responses thought the text had been written by a clinician. Even so, chatbot responses needed human editing 58% of the time.

Separate data has shown that chatbot-generated patient portal messages can be more empathic than those written by healthcare providers. Researchers posited that this occurs because healthcare providers are overworked and do not have the time to write responses signaling empathy.

Still, healthcare organizations cannot rely on chatbots alone to answer patient portal messages. Regardless of the complexity of the patient query, the Mass General Brigham researchers advised that every AI-generated response should be reviewed by a healthcare provider before being sent. This gives healthcare providers the opportunity to correct potential mistakes.
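
In practice, that review step can be enforced in the workflow itself: the AI output is only a draft, and nothing reaches the patient until a clinician approves or edits it. The sketch below illustrates one way to structure such a queue; the dataclass and stubbed draft function are assumptions for illustration, not Mass General Brigham's actual system.

```python
# Illustrative review queue: the model only drafts; a clinician must
# approve (and may edit) before anything is sent to the patient.

from dataclasses import dataclass

@dataclass
class DraftReply:
    patient_message: str
    ai_draft: str
    final_text: str = ""
    approved: bool = False

def draft_reply(patient_message: str) -> DraftReply:
    # Stand-in for an LLM call that drafts a response.
    return DraftReply(patient_message, ai_draft="Thanks for reaching out...")

def clinician_review(draft: DraftReply, edited_text: str | None = None) -> DraftReply:
    """Nothing is sent until this step runs; edits replace the AI draft."""
    draft.final_text = edited_text or draft.ai_draft
    draft.approved = True
    return draft

draft = draft_reply("Is this dosage change normal?")
sent = clinician_review(draft, edited_text="Yes, we lowered the dose because...")
print(sent.approved, sent.final_text)  # True Yes, we lowered the dose because...
```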

GenAI has proven useful in other areas of healthcare, including clinical decision support and ambient documentation. In addition to these patient engagement use cases, the technology is demonstrating promise in improving healthcare efficiency. However, technology developers and healthcare providers should be sure to avoid the pitfalls of AI in healthcare, including algorithmic bias.

Sara Heath has covered news related to patient engagement and health equity since 2015.
