ChatGPT Continues to Prove Useful for Patient Education
The AI chatbot ChatGPT can deliver patient education about colonoscopy effectively, using language that is easy for patients to understand.
More evidence is emerging that ChatGPT, a conversational generative artificial intelligence (AI) chatbot, can effectively answer patient questions, this time from Massachusetts General Hospital (MGH).
The findings lend credence to the use of ChatGPT and other AI chatbots for patient education, the researchers indicated.
Released only in November 2022, ChatGPT has taken the world by storm by introducing novel uses for conversational AI. In healthcare in particular, stakeholders have debated how the technology could augment the way health information is delivered.
This latest study showed that generative AI can effectively deliver patient information by answering patient queries, with the MGH researchers finding that the answers given by ChatGPT were even clearer than those provided on hospital websites.
The researchers looked specifically at how ChatGPT answers patient questions about colonoscopy. The team pulled eight common questions from the FAQ sections of three hospitals randomly selected from the US News & World Report list of the top 20 hospitals for gastroenterology and gastrointestinal surgery. They then entered those questions into ChatGPT twice, each time in a unique chat session.
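To make the duplicate-prompting setup concrete, the sketch below shows one way it could be automated. The study does not say how the questions were entered, so the OpenAI Python SDK, the model name, and the sample questions here are all assumptions for illustration.

```python
# Hypothetical sketch: submitting each FAQ question to ChatGPT twice,
# each time with no shared message history, simulating a fresh chat session.
# Assumes OpenAI's Python SDK and an OPENAI_API_KEY in the environment;
# the study itself does not specify how the prompts were entered.
from openai import OpenAI

client = OpenAI()

questions = [
    "How should I prepare for a colonoscopy?",
    "What are the risks of a colonoscopy?",
    # ...six more FAQ questions pulled from hospital websites
]

def ask_fresh_session(question: str) -> str:
    """Send a single question with no prior history, like a brand-new chat."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # assumed model; the article does not name one
        messages=[{"role": "user", "content": question}],
    )
    return response.choices[0].message.content

# Two independent answers per question, mirroring the study's design.
answers = {q: (ask_fresh_session(q), ask_fresh_session(q)) for q in questions}
```

Because each question goes out in its own API call with no conversation history, the two answers are generated independently, which is what the unique chat sessions in the study accomplish.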
Then, the team tapped a group of four gastroenterologists to review both the ChatGPT and the website answers for ease of understanding, scientific adequacy, and satisfaction with the answer.
On the whole, the physician reviewers found the ChatGPT answers entirely adequate. Using a plagiarism checker, the team found that the ChatGPT answers were not similar to the answers provided on hospital websites, and the two ChatGPT answers to each question were also largely dissimilar to each other.
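The article does not name the plagiarism checker the team used, but a crude version of that similarity comparison can be sketched with Python's standard-library difflib; the sample answers below are invented for illustration.

```python
# Rough stand-in for the study's plagiarism check (tool unnamed in the
# article): compare a ChatGPT answer against a hospital-website answer
# and report the fraction of matching text.
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Return a 0-1 ratio of matching text between two answers."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

chatgpt_answer = "You will need to follow a clear-liquid diet the day before..."
website_answer = "Patients should drink only clear liquids for 24 hours prior..."

print(f"Similarity: {similarity(chatgpt_answer, website_answer):.2f}")
```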
Nevertheless, the physician reviewers said the ChatGPT answers were similar in quality to the health information presented on hospital websites. Both ChatGPT and hospital websites received similar scores for scientific adequacy and satisfaction with the answer.
And while the researchers didn't note a stark difference in ease of understanding, the ChatGPT answers did receive slightly higher average ratings.
The researchers emphasized that use of ChatGPT in patient engagement is in its infancy and that it is too early to draw conclusions, but this preliminary data does indicate that chatbots could help fill in patient education gaps amid a worrisome provider shortage.
“Especially in the current era of shared decision-making and consumerization of healthcare, patients have been actively consuming MI [medical information] through multiple channels and accessing providers through electronic patient portals in an exponential magnitude, which has the potential to benefit patients but simultaneously represents a heavy burden for providers and staff,” they wrote in the study.
“We envision that AI-generated MI, with appropriate provider oversight, accreditation, and periodic surveillance, could improve efficiency of care and free providers for more cognitively-intensive patient communications.”
But there are still some potential pitfalls, the researchers added. For one thing, ChatGPT is trained on internet data, so there is the possibility that medical misinformation makes its way into its answers.
And even though the ChatGPT answers had higher average ease-of-understanding scores, health literacy remains a problem, the research team said. The AI-generated answers were more understandable according to the physician reviewers, but they still exceeded the eighth-grade reading level at which experts suggest health information be written.
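The article does not specify which readability formula the reviewers applied, but the eighth-grade benchmark can be checked with a standard metric such as Flesch-Kincaid; the sketch below uses the third-party textstat package and an invented sample answer.

```python
# Illustrative check of the eighth-grade threshold: the article does not say
# which readability formula was used, so this sketch applies the
# Flesch-Kincaid grade level via the textstat package (pip install textstat).
import textstat

answer = (
    "A colonoscopy is an exam used to look for changes in the large "
    "intestine. Before the procedure, you will follow a special diet."
)

grade = textstat.flesch_kincaid_grade(answer)
print(f"Reading level: grade {grade:.1f}")
if grade > 8:
    print("Above the eighth-grade target for patient education materials.")
```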
This study follows a similar one focused on using ChatGPT to answer patient queries about breast cancer screenings. The study out of the University of Maryland School of Medicine found that ChatGPT could accurately answer patient queries 88 percent of the time. Similar to the MGH study, the researchers found ChatGPT answers took patient health literacy into consideration.
What’s more, researchers are finding that ChatGPT can be a more empathic communicator, largely because it does not experience the workload burden that providers do. An assessment of ChatGPT answers and physician answers posted to a Reddit forum showed that the ChatGPT answers empathized with the patient more often than the providers’ answers did.
Researchers indicated that the pronounced empathy in ChatGPT answers isn’t a sign that physicians are bad doctors, but rather that they do not have the time to add empathy when addressing patient messages.