Understanding the Role of Chatbots in Virtual Care Delivery
Chatbots are a significant virtual care resource, with use cases ranging from triage to mental healthcare, but there are risks related to their growing use.
As demand for virtual care solidifies, healthcare organizations are increasingly relying on various technologies to deliver care remotely. These include audio-visual technology, healthcare wearables, Bluetooth-enabled devices, and chatbots.
According to a study published in the International Journal of Scientific & Technology Research, chatbots “are automated systems which replicate users behavior on one side of the chatting communication. They are mimic systems which imitate the conversations between two individuals.”
While chatbots have experienced growing popularity over the last few decades, particularly since the advent of the smartphone, their origins can be traced back to the middle of the 20th century.
The first chatbot was developed in 1966. Called ELIZA, it simulated a psychotherapist, using pattern matching and template-based responses to converse in a question-based format.
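The pattern-matching, template-based approach that ELIZA popularized remains the simplest way to understand how rule-driven chatbots work. The Python sketch below is a minimal illustration with made-up patterns and templates; it is not ELIZA's actual script.

```python
import random
import re

# Illustrative ELIZA-style rules: each regex maps to response templates,
# and "{0}" is filled in from the text the pattern captured.
RULES = [
    (re.compile(r"\bi feel (.+)", re.IGNORECASE),
     ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (re.compile(r"\bi am (.+)", re.IGNORECASE),
     ["Why do you say you are {0}?", "How does being {0} affect you?"]),
    (re.compile(r"\bbecause (.+)", re.IGNORECASE),
     ["Is that the real reason?"]),
]
FALLBACKS = ["Please tell me more.", "Can you elaborate on that?"]

def respond(user_input: str) -> str:
    """Return a templated reply for the first pattern that matches."""
    for pattern, templates in RULES:
        match = pattern.search(user_input)
        if match:
            captured = match.group(1).rstrip(".!?")
            return random.choice(templates).format(captured)
    return random.choice(FALLBACKS)

print(respond("I feel anxious about my upcoming appointment."))
# Possible output: "Why do you feel anxious about my upcoming appointment?"
```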
Today, there is a wide range of chatbots that support various types of healthcare processes, from appointment scheduling to checking symptoms to virtually enabled treatment. Here, mHealthIntelligence will take a deep dive into healthcare chatbots, their use cases, and their pros and cons.
CATEGORIES OF CHATBOTS
According to a 2021 article published in JMIR Cancer, there are five categories of chatbots that are suited to healthcare use cases. The categories are based on various criteria, including the type of knowledge they can access, the service they provide, and their response-generation method. A chatbot can belong to more than one category at a time.
1. Knowledge domain. This category is based on the knowledge access range of the chatbots. For instance, some chatbots can respond to broad topics that can be easily searched within databases, while others respond to more complex or specific questions requiring more in-depth research.
2. Service provided. Chatbots can be grouped by the services they provide, like appointment scheduling and airline booking. Sub-categories include inter-personal chatbots, which are “used mainly to transmit information without much intimate connection with users,” intra-personal chatbots, which are “tailored for companionship or support,” and inter-agent chatbots, which are “used for communicating with other chatbots or computer systems.”
3. Goal based. Chatbots can have numerous goals. They can be informative, providing information from databases or inventories; conversational, conversing with users as naturally as possible; or task-based, performing specific pre-determined actions.
4. Response generation. This category is based on the chatbot’s process of analyzing inputs and generating responses. It is divided into rule-based, retrieval-based, and generative sub-categories (a brief illustrative sketch follows this list).
5. Human aided. Some chatbots incorporate human aid in their operations to provide more flexibility in clinical interventions. However, human involvement can decrease the speed of the chatbot.
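To make the response-generation sub-categories concrete, here is a minimal Python sketch contrasting a rule-based approach (hand-written keyword rules) with a retrieval-based one (selecting the closest stored answer). The keywords, questions, and answers are hypothetical; a generative chatbot would instead compose new text, typically with a language model.

```python
from difflib import SequenceMatcher

# Hypothetical knowledge base of stored question/answer pairs.
KNOWLEDGE_BASE = {
    "how do i schedule an appointment": "You can schedule an appointment through the patient portal.",
    "how do i refill a prescription": "Prescription refills can be requested from the pharmacy page.",
}

def rule_based_response(user_input: str) -> str:
    """Rule-based: fire a hand-written rule when a keyword appears."""
    text = user_input.lower()
    if "appointment" in text:
        return "You can schedule an appointment through the patient portal."
    if "refill" in text:
        return "Prescription refills can be requested from the pharmacy page."
    return "I'm sorry, I can only help with appointments and refills."

def retrieval_based_response(user_input: str) -> str:
    """Retrieval-based: return the stored answer whose question is most
    similar to the user's input (here, simple string similarity)."""
    best_question = max(
        KNOWLEDGE_BASE,
        key=lambda q: SequenceMatcher(None, q, user_input.lower()).ratio(),
    )
    return KNOWLEDGE_BASE[best_question]

print(rule_based_response("Can I book an appointment?"))
print(retrieval_based_response("What's the process for refilling my prescription?"))
```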
VIRTUAL CARE USE CASES
Chatbots play a critical role in virtual care delivery as they can be deployed in various ways to improve healthcare access and patient experience.
According to a survey conducted by software research firm Software Advice, healthcare providers commonly use chatbots to enable patients to complete administrative tasks, including scheduling appointments (72 percent), requesting prescription refills (66 percent), providing requested data (63 percent), and receiving appointment reminders (62 percent). The survey, conducted in March 2023, polled 65 doctors, therapists, and practice owners/founders who use live chatbots on their websites.
However, chatbots can also be employed in clinical care.
One of the most significant clinical use cases for chatbots is patient triage. Chatbots can be designed to gather patient information, such as symptoms, demographics, and medical history, provide insights into possible diagnoses, and connect patients to the appropriate level of care.
About 18 percent of healthcare organizations have invested in online symptom checkers, according to a report by the Center for Connected Medicine. These tools are often embedded in chatbots and used for patient triage. Once the symptom checker has assessed the symptoms shared by the patient, along with other information like their location, it provides suggestions. These can range from at-home care recommendations for mild conditions like the common cold to urging the patient to seek emergency care.
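As an illustration of the kind of logic behind such tools, the Python sketch below maps a hypothetical set of reported symptoms to a suggested level of care. The symptom lists and thresholds are invented for illustration and are not a clinically validated triage algorithm.

```python
from dataclasses import dataclass

# Illustrative triage rules only; real symptom checkers rely on
# clinically validated logic, not this simplified mapping.
EMERGENCY_SYMPTOMS = {"chest pain", "difficulty breathing", "severe bleeding"}
URGENT_SYMPTOMS = {"high fever", "persistent vomiting", "dehydration"}

@dataclass
class PatientIntake:
    symptoms: set[str]
    age: int
    zip_code: str  # could be used to suggest nearby care options

def triage(intake: PatientIntake) -> str:
    """Map reported symptoms to a suggested level of care."""
    if intake.symptoms & EMERGENCY_SYMPTOMS:
        return "Seek emergency care immediately."
    if intake.symptoms & URGENT_SYMPTOMS or intake.age >= 75:
        return "Schedule an urgent care or same-day visit."
    return "At-home care is likely sufficient; monitor your symptoms."

print(triage(PatientIntake({"cough", "runny nose"}, age=34, zip_code="98101")))
# -> "At-home care is likely sufficient; monitor your symptoms."
```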
Similarly, chatbots can be used to address social determinants of health (SDOH). In 2021, a team led by the University of Washington developed a chatbot to gather information on social needs among emergency department (ED) visitors. The chatbot guides patients through a social needs survey developed by the Los Angeles County Health Agency. The survey includes 36 questions related to demographics, finances, employment, education, housing, food and utilities, physical safety, legal needs, and access to care.
Additionally, Northwell Health launched a chatbot at the beginning of the year in an effort to lower morbidity and mortality rates among pregnant people. Called Northwell Health Pregnancy Chats, the chatbot provides patient education, identifies urgent concerns, and directs patients to an ED when necessary. Patients also can access health risk assessments, blood pressure tracking, prenatal testing, birth plans, and lactation support through the chatbot. The tool is geared toward pregnant people or those in their first year postpartum.
Chatbots also proved useful during the COVID-19 pandemic. A study published in 2020 identified five key applications of health chatbots during the pandemic: disseminating health information and knowledge; self-triage and personal risk assessment; monitoring exposure and notifications; tracking COVID-19 symptoms and health aspects; and combating misinformation and fake news.
Another significant clinical use case for chatbots lies in mental healthcare. In this arena, chatbots can be used to provide support, guidance, and resources through a conversational interface, a study published in 2023 notes. In particular, there is clinical evidence that chatbots can help address anxiety, depression, and stress symptoms by offering coping strategies, mindfulness exercises, and information about conditions and treatments, and by connecting users to mental healthcare professionals.
For the study, which was published in JMIR mHealth and uHealth, researchers conducted an exploratory observational study of ten mental healthcare apps with a built-in chatbot feature. They qualitatively analyzed 3,621 consumer reviews from the Google Play Store and 2,624 consumer reviews from the Apple App Store.
They found that the chatbots had three different conversational flows, with ‘guided conversation’ being the most popular. In this conversational flow, users can only reply using preset inputs provided through the interface. The others are ‘semi-guided conversation,’ in which users communicate with the chatbot mainly through pre-defined responses but are sometimes allowed open inputs, and ‘open-ended conversation,’ which enables users to communicate with the chatbot through free-form, open inputs.
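At the interface level, the three flows differ mainly in what kind of input the user is allowed to give. The Python sketch below is a hypothetical illustration of that difference; the prompts and options are invented.

```python
def guided_turn(options: list[str]) -> str:
    """Guided conversation: the user may only pick a preset reply."""
    for i, option in enumerate(options, start=1):
        print(f"{i}. {option}")
    choice = int(input("Pick a number: "))
    return options[choice - 1]

def semi_guided_turn(options: list[str]) -> str:
    """Semi-guided: preset replies are offered, but free text is also accepted."""
    for i, option in enumerate(options, start=1):
        print(f"{i}. {option}")
    raw = input("Pick a number or type your own reply: ")
    if raw.isdigit() and 1 <= int(raw) <= len(options):
        return options[int(raw) - 1]
    return raw

def open_ended_turn() -> str:
    """Open-ended: the user types anything, and the chatbot must interpret it."""
    return input("Tell me what's on your mind: ")

# Example: a guided check-in where the user must pick one of three moods.
mood = guided_turn(["I'm feeling good", "I'm feeling low", "I'd rather not say"])
```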
Further, chatbots can offer evidence-based techniques, like cognitive behavioral therapy and dialectical behavior therapy.
Chatbots can also help spur behavior change. In 2019, Nemours Children’s Health System published a study in Translational Behavioral Medicine showing that a text messaging platform integrated with a chatbot helped adolescents remain engaged in a weight management program.
The chatbot, called Tess, offered goal-setting behavioral interactions through prompts and responses designed for adolescents dealing with obesity and pre-diabetes symptoms.
The study included 23 adolescents with obesity symptoms. Over about three months, the patients exchanged 4,123 messages with Tess in 270 conversations. A majority of the study participants (96 percent) said those interactions were helpful.
BENEFITS AND LIMITATIONS
While chatbots can offer various advantages to both patients and providers, there are some challenges related to their use that must be considered.
First, the benefits. According to a survey of 100 physicians published in the Journal of Medical Internet Research in 2019, most believe that chatbots are beneficial for scheduling medical appointments (78 percent), locating health clinics (76 percent), and providing medication information (71 percent).
Further, the 2023 Software Advice survey mentioned above revealed that 77 percent of respondents are confident in their chatbot’s ability to accurately assess patient symptoms.
Chatbots can also help medical practices save time, according to the survey. More than half of practices (54 percent) that use chatbots for patient self-scheduling said they save an hour per week for their staff. Overall, 68 percent of practices reported a positive return on investment associated with their chatbots.
Patients, too, have expressed their satisfaction with chatbots. A 2023 study found that chatbots can be effective in treating people with methamphetamine (MA) use disorder. In the study, 50 MA use disorder patients received chatbot-assisted therapy via smartphone, while 49 in the control group received standard care. The chatbot group had fewer MA-positive urine samples than the control group, indicating lower frequency of MA use, reduced severity of MA use disorder, and lower polysubstance use.
Of the 29 patients who completed the Mobile Phone Use Questionnaire at the end of the six-month follow-up, 84 percent said they were satisfied with receiving chatbot-assisted therapy.
Additionally, a 2021 review of studies showed that patients’ perceptions and opinions of chatbots for mental health are generally positive. The review, which assessed 37 unique studies, pinpointed ten themes in patient perception of mental health chatbots, including usefulness, ease of use, responsiveness, trustworthiness, and enjoyability.
However, there are widespread concerns regarding chatbot use in healthcare.
For instance, the Journal of Medical Internet Research survey shows that 76 percent of physicians believe that chatbots cannot effectively care for all patient needs, and 71 percent said they cannot provide detailed diagnosis and treatment because chatbots do not have knowledge of all of the personal factors associated with the patient. Seventy-four percent of physicians also said that healthcare chatbots could be a risk to patients if they do not accurately understand the diagnosis.
Other research points to gaps in chatbots’ ability to move the healthcare needle. In a study published in 2021, researchers tested six mHealth apps targeting dementia and found that they did not meet the needs of patients or their caregivers. The apps didn’t offer enough content to be attractive or useful, and they weren’t helpful for caregivers.
Then, there are the data privacy and HIPAA compliance risks associated with chatbot use, particularly those tools that rely on AI. Experts writing in JAMA earlier this year noted that asking whether chatbots “could be made HIPAA compliant is to pose the wrong question. Even if compliance were possible, it would not ensure privacy or address larger concerns regarding power and inequality.” They added that deidentified data could still pose data privacy risks via reidentification.
Patients have also not embraced AI-based chatbots wholeheartedly. A survey of 2,000 people conducted by the University of Arizona Health Sciences showed that 52 percent preferred consulting with human physicians over AI chatbots. But, importantly, the survey revealed that encouragement from their physicians could help patients overcome their hesitation.
Thus, as chatbots evolve and their use in virtual care delivery increases, growing physician and patient trust in these tools will be critical.