ChatGPT Provides Accurate Information on Cancer Myths, Misconceptions
ChatGPT provides accurate information when asked about cancer myths and misconceptions, but patients could misinterpret these answers.
In a study published last week in JNCI Cancer Spectrum, a journal of the National Cancer Institute, researchers from the University of Utah found that ChatGPT, a chatbot built on a large language model (LLM), gives accurate information when asked about common cancer myths and misconceptions, but that these answers could be interpreted incorrectly and negatively impact patient decision-making.
Artificial intelligence (AI)-based chatbots have become popular in recent years across industries to help users find information and navigate online spaces, and healthcare is no exception. ChatGPT has recently garnered hype in this area after scoring roughly 60 percent on a US Medical Licensing Examination (USMLE)-style test, a passing mark that indicates its potential for use in medical education.
However, the rise of chatbots has implications for patient care and shared decision-making, as patients turn to these online tools to gather information about their health conditions. One 2020 analysis of data from the Health Information National Trends Survey (HINTS) found that 80 percent of US adults reported using the internet to seek health information.
The University of Utah researchers highlighted that this trend carries risks, as misinformation and harmful information about cancer remain a challenge for clinicians and patients. Because correct information is so consequential in this area, they argued, assessing the accuracy of AI chatbots' cancer-related outputs is critical.
To do this, the researchers evaluated the accuracy of ChatGPT’s outputs compared to answers provided by the National Cancer Institute (NCI) using questions pulled from the NCI “Common Cancer Myths and Misconceptions” web page.
The answers from both ChatGPT and the NCI were blinded, meaning that the five cancer experts reviewing them did not know which source each response came from, and were then evaluated for accuracy. Reviewers rated each answer as either accurate ('yes') or inaccurate ('no').
Each of the 13 questions was rated independently, and the researchers compared the ratings for the NCI and ChatGPT answers. Across all questions, overall agreement on accuracy was 100 percent for the NCI answers and 96.9 percent for the ChatGPT outputs.
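For readers curious how an overall percent agreement figure like this arises: with five reviewers each casting a binary accurate/inaccurate vote on 13 questions, there are 65 votes per source, and 96.9 percent is consistent with 63 of those 65 votes rating the answer accurate. The short Python sketch below is a hypothetical illustration of that arithmetic, not the study's actual analysis code, and the ratings in it are invented for the example.

```python
# Minimal sketch (not the study's code): computing overall percent
# agreement for accuracy ratings, assuming five reviewers each gave a
# binary accurate (1) / inaccurate (0) vote on every answer.
# The ratings below are hypothetical, not the study's data.

# hypothetical_ratings[i] = the five blinded reviewers' votes on
# one source's answer to question i.
hypothetical_ratings = [
    [1, 1, 1, 1, 1],   # all five reviewers rated this answer accurate
    [1, 1, 1, 1, 0],   # one reviewer disagreed
    [1, 1, 1, 1, 1],
]

def overall_percent_agreement(ratings: list[list[int]]) -> float:
    """Share of all individual 'accurate' votes across every question."""
    total_votes = sum(len(r) for r in ratings)
    accurate_votes = sum(sum(r) for r in ratings)
    return 100.0 * accurate_votes / total_votes

print(f"{overall_percent_agreement(hypothetical_ratings):.1f}%")  # 93.3%
```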
While the researchers noted that there were few noticeable differences in the number of words or the readability of the answers provided by NCI and ChatGPT, reviewers indicated that ChatGPT’s language could be indirect, vague, and in some cases, unclear.
These shortcomings in ChatGPT’s responses could lead to users and patients interpreting the information incorrectly.
“This could lead to some bad decisions by cancer patients,” explained Skyler Johnson, MD, physician-scientist at Huntsman Cancer Institute and assistant professor in the department of radiation oncology at the University of Utah, who helped lead the study, in a press release discussing the findings. The team suggested caution when advising patients about whether to use chatbots for information about cancer.
Previous research by Johnson and his team revealed that cancer misinformation with the potential to harm patients is common on social media, underscoring the need to evaluate both how patients use chatbots and other online tools to seek cancer information and the quality of the answers those tools provide.
“I recognize and understand how difficult it can feel for cancer patients and caregivers to access accurate information,” said Johnson. “These sources need to be studied so that we can help cancer patients navigate the murky waters that exist in the online information environment as they try to seek answers about their diagnoses.”