AI Achieves High Diagnostic Accuracy in Virtual Primary Care Setting
Providers selected an AI-recommended diagnosis in 84% of virtual primary care cases, demonstrating significant potential to improve patient triage over time.
Diagnostic agreement between providers and an artificial intelligence (AI)-based tool was high within the context of virtual primary care, according to a study published recently in Mayo Clinic Proceedings: Digital Health.
The research evaluated the diagnostic accuracy of an AI tool provided by K Health, a technology company operating a virtual primary care practice within the continental United States. The tool is used for patient intake and diagnostic recommendations, the study indicated.
Patients initiate their virtual primary care visit and access the AI via the web or a mobile application. Patients type in their medical concern and share demographic information, which prompts the AI to ask questions about medical history and symptoms.
From there, patients are given a list of possible conditions associated with their symptoms and can choose to see a provider via the virtual primary care platform. At the beginning of this visit, the provider reviews the patient's intake summary and an AI-generated differential diagnosis.
The differential diagnosis is based on the patient’s reported symptoms and lists up to five of the most likely diagnoses, ordered by likelihood, the researchers noted. The virtual care providers then interview each patient by video or text before making a final diagnosis and recommending treatment.
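The study does not describe the model's internals, but the shape of a ranked, capped differential is easy to picture. The Python sketch below is a hypothetical illustration only: the build_differential helper and the example probabilities are assumptions for demonstration, not K Health's implementation.

```python
# Hypothetical sketch of assembling a ranked differential diagnosis:
# rank candidate conditions by model-assigned probability and keep at
# most five. All names and values here are illustrative assumptions.

def build_differential(condition_probs: dict[str, float], max_items: int = 5) -> list[str]:
    """Return up to max_items condition names, most likely first."""
    ranked = sorted(condition_probs.items(), key=lambda kv: kv[1], reverse=True)
    return [name for name, _ in ranked[:max_items]]

# Example probabilities a symptom model might assign for one patient.
probs = {
    "viral upper respiratory infection": 0.41,
    "acute sinusitis": 0.22,
    "streptococcal pharyngitis": 0.14,
    "allergic rhinitis": 0.09,
    "influenza": 0.07,
    "COVID-19": 0.04,
}
print(build_differential(probs))
# Prints the five most likely conditions; COVID-19 is dropped by the cap.
```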
The research team stated that this approach may help providers broaden their scope and recognize less common diseases, but clinicians are instructed to use their own clinical judgment when considering the AI’s suggestions.
However, determining a diagnostic tool’s accuracy is vital to protect patients and ensure that such tools are valuable in the clinical setting. Thus, the researchers set out to assess the performance of K Health’s AI tool.
They conducted a retrospective chart review of 102,059 virtual primary care encounters from October 1, 2022, to January 31, 2023. Patients in the cohort underwent the AI medical interview and provider assessment outlined above.
From these data, the researchers evaluated the AI’s accuracy in terms of agreement between the AI’s diagnoses and those of the virtual care providers and blinded adjudicators. Diagnostic agreement was also analyzed across demographic characteristics, presenting symptoms, diagnoses, and providers’ experience levels.
Following the initial analysis, the model was re-trained and reassessed to gauge performance improvement.
Providers selected the AI’s top-ranked diagnosis in 60.9 percent of cases, and chose a diagnosis from anywhere in the AI’s differential of up to five suggestions in 84.2 percent of cases.
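Those two figures correspond to top-1 and in-differential agreement rates, which can be computed directly from encounter records. The sketch below shows one way to do so; the record format and sample encounters are assumptions for illustration, not the study's dataset.

```python
# Illustrative computation of the study's two agreement metrics:
# top-1 (provider picked the AI's highest-ranked diagnosis) and
# in-differential (provider picked any diagnosis the AI suggested).
# The record format and sample data are hypothetical.

def agreement_rates(encounters: list[dict]) -> tuple[float, float]:
    """Return (top-1 rate, in-differential rate) over all encounters."""
    top1 = sum(e["final"] == e["differential"][0] for e in encounters)
    any_match = sum(e["final"] in e["differential"] for e in encounters)
    n = len(encounters)
    return top1 / n, any_match / n

encounters = [
    {"differential": ["viral URI", "acute sinusitis"], "final": "viral URI"},
    {"differential": ["UTI", "vaginitis", "cystitis"], "final": "vaginitis"},
    {"differential": ["GERD", "gastritis"], "final": "anxiety"},
]
top1_rate, in_diff_rate = agreement_rates(encounters)
print(f"top-1: {top1_rate:.1%}, in-differential: {in_diff_rate:.1%}")
# top-1: 33.3%, in-differential: 66.7%
```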
Agreement rates varied by diagnosis: the AI and providers agreed at least 90 percent of the time for 57 diagnoses, and at least 95 percent of the time for 35 of those diagnoses.
For half of all presenting symptoms, the average agreement rate was 90 percent or higher.
In cases that required adjudication, the adjudicators reached a consensus diagnosis 58.2 percent of the time, and that consensus diagnosis was always present in the AI's differential.
Diagnostic accuracy varied minimally across demographic characteristics, and provider experience was not found to impact agreement.
Model re-training improved the AI’s performance, increasing diagnostic accuracy from 96.6 to 98.0 percent.
The findings indicate that AI-provider agreement was high across most of the cases analyzed. The researchers concluded that AI has significant potential to advance patient triage and disease diagnosis in primary care.
The study is part of a spate of research dedicated to evaluating AI’s potential applications in primary care settings.
Last year, researchers showed that an AI-based device can help primary care providers accurately diagnose autism spectrum disorder (ASD) in children up to 6 years old.
The research highlighted that traditional ASD screening relies on the availability of specialists and the completion of time-intensive, team-based behavioral evaluations, which can make the process from initial screening to final diagnosis take up to 18 months.
The researchers indicated that the use of diagnostic aids in primary care can help facilitate ASD diagnosis, leading them to assess the accuracy of one such tool: an AI-based software as a medical device (SaMD).
The tool leverages machine learning to make diagnosis recommendations by evaluating each patient for behavioral features predictive of ASD.
The device’s accuracy varied by patient subgroup, but the research team concluded that AI may have significant potential to assist with ASD diagnosis in primary care.