AI project to help older patients understand lab test results

“LabGenie” will use generative AI to contextualize information about lab test results and provide questions patients can use to discuss their results with clinicians.

A research team spearheaded by Florida State University (FSU) faculty has received a $1 million grant from the US Department of Health and Human Services’ Agency for Healthcare Research and Quality (AHRQ) to develop an artificial intelligence (AI) tool to help older patients better understand their lab results.

The tool, called “LabGenie,” is set to utilize generative AI and large language models (LLMs) to provide users with contextualized information about their lab results and generate potential questions that patients can use to guide discussions about those results with their care teams.

The researchers emphasized that the project is a multi-disciplinary, multi-institutional collaboration aimed at enhancing patient education and health literacy to bolster shared decision-making.

“This generous funding will allow the important research being conducted as part of the LabGenie project to have a substantially increased impact on patient involvement in health care decisions and how health care can be improved through information and communication technology incorporating AI,” said Michelle Kazmer, PhD, dean of the College of Communication and Information at FSU, in the news release.

“The project addresses a critical need for better patient engagement by building a patient-facing decision aid that will provide informative visual representations of lab results and tailored question prompts for patients to discuss with their providers,” explained Zhe He, PhD, an associate professor in FSU’s School of Information.

The project consists of two phases.

During the first phase, the research team will work to design and develop a prototype of LabGenie by assessing various lab test visualization approaches and evaluating how best to utilize generative AI and LLMs to tailor information and generate useful question prompts.

In doing so, the researchers hope to build a functional prototype capable of pulling data directly from patients’ medical records to provide personalized, contextualized insights into lab results.

“We want to test what works and what doesn’t work for older adult populations in terms of visualizing and presenting lab test results more effectively,” stated Mia Lustria, PhD, a professor in the School of Information. “We also want to provide patients with more actionable insights about their lab test results by linking them with other personal health information in their electronic health record.”

In the second phase, the research team will conduct a randomized controlled trial and a mixed-methods study with a cohort of 100 older patients to evaluate how well LabGenie improves patient engagement and behavioral intentions to participate in shared decision-making.

The researchers emphasized that the practical applications of LabGenie have significant potential to help at-risk and vulnerable populations, such as geriatric patients with multiple chronic conditions.

“We want to be able to empower older adult patients to be able to better understand their test results and participate in more informed decision-making surrounding their health,” Lustria said.

Other researchers are also exploring how AI tools can bolster patient engagement.

Last year, a research team from the University of Maryland School of Medicine (UMSOM) found that the AI chatbot ChatGPT may be effective for answering patient queries and improving health literacy.

The research assessed the tool’s ability to answer questions about breast cancer screening, and the findings revealed that the chatbot could provide accurate responses about 88 percent of the time.

However, the questions that the tool could not answer satisfactorily highlighted important pitfalls that must be addressed before these tools can be deployed as patient-facing healthcare technologies: for one question, ChatGPT’s answer was based on outdated information, and for the other two, the tool’s responses were inconsistent when the same queries were asked twice.
