
Epic Sepsis Model Predictions May Have Limited Clinical Utility

New study suggests that the Epic Sepsis Model may only identify some high-risk patients after sepsis is clinically recognized, rather than before sepsis onset.

Researchers from the University of Michigan have demonstrated that the Epic Sepsis Model’s risk stratification accuracy was significantly impacted by whether its predictions were restricted to data collected before a patient met sepsis criteria.

In the study, published this month in the New England Journal of Medicine (NEJM) AI, the research team indicated that artificial intelligence (AI) tools capable of predicting sepsis onset could improve treatment and outcomes. Tools to identify sepsis risk, like the Epic Sepsis Model (ESM), have been developed and deployed in health systems across the United States, but questions about their accuracy and clinical utility have been raised recently.

Wide utilization of Epic’s electronic health record (EHR) system, within which the ESM is embedded, has led to multiple assessments of the sepsis tool’s performance.

In 2021, research published in JAMA Internal Medicine found that the model correctly identified patients at risk of sepsis just 63 percent of the time, a significantly lower accuracy than suggested by the tool’s information sheet.

A study published last year in JAMA Network Open found that Epic’s model outperformed tools like the quick Sepsis-Related Organ Failure Assessment (qSOFA), Sequential Organ Failure Assessment (SOFA), and Systemic Inflammatory Response Syndrome (SIRS), but lacked the timeliness necessary for effective clinical decision support.

The researchers further emphasized that clinicians may recognize and treat sepsis before patients meet the criteria for the condition, raising additional concerns about the plausible clinical benefits of the ESM and other models.

To address this, the research team assessed the performance of the ESM’s predictions made throughout the stages of patients’ hospitalization and treatment.

Data from 77,000 adult inpatients hospitalized at University of Michigan Health from 2018 to 2020 were pulled for analysis, and almost five percent of the cohort had sepsis.

To evaluate the ESM’s predictive performance, the researchers measured the probability that the tool would assign a higher risk score to a randomly selected patient who was diagnosed with sepsis than to a randomly selected patient who was never diagnosed with the condition.

The tool’s ability to predict sepsis, both before sepsis criteria were met and before any indication of a sepsis treatment plan, was measured using the area under the receiver operating characteristic curve (AUROC).
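The AUROC described above has a direct probabilistic interpretation: it equals the probability that a randomly chosen patient who developed sepsis receives a higher risk score than a randomly chosen patient who did not, with ties counted as half. A minimal sketch of that computation, using illustrative toy scores rather than any study data:

```python
# Sketch of the AUROC's probabilistic interpretation via direct
# pairwise comparison. Scores below are hypothetical, not study data.

def auroc(pos_scores, neg_scores):
    """Fraction of (positive, negative) pairs where the positive case
    scores higher; ties contribute 0.5. This equals the AUROC."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

# Hypothetical risk scores on a 0-100 scale
sepsis_scores = [70, 55, 40]         # patients later diagnosed with sepsis
no_sepsis_scores = [30, 45, 60, 20]  # patients never diagnosed

print(auroc(sepsis_scores, no_sepsis_scores))  # -> 0.75
```

An AUROC of 0.5 corresponds to random guessing, which is why restricting a model to earlier data and watching its AUROC fall toward 0.5 is evidence that its apparent skill came from later information.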

The ESM achieved an AUROC of 0.62 when its predictions included those made before sepsis criteria were met and, in some cases, those made after clinical recognition of sepsis. When predictions made after clinical recognition were excluded, the AUROC dropped to 0.47, slightly worse than random guessing (an AUROC of 0.5).

These performance discrepancies translate into significant numbers of patients going unflagged. When predictions made at all stages of hospitalization were considered, the tool correctly flagged patients who developed sepsis 87 percent of the time, but this figure dropped to 62 percent when predictions were restricted to data collected before patients met sepsis criteria. When restricted further, to data collected before a blood culture had been ordered, the model identified only about 53 percent of sepsis patients.

These findings led the research team to conclude that the tool may use information indicative of clinical recognition of sepsis.

“We suspect that some of the health data that the Epic Sepsis Model relies on encodes, perhaps unintentionally, clinician suspicion that the patient has sepsis,” said corresponding author of the study Jenna Wiens, PhD, associate professor of computer science and engineering at the University of Michigan, in the news release.

If the tool is unintentionally using clinicians’ suspicions about the presence of sepsis, then its predictions are unlikely to provide significant value in terms of clinical decision support.

“We need to consider when in the clinical workflow the model is being evaluated when deciding if it’s helpful to clinicians,” said co-author of the study Donna Tjandra, a doctoral student in computer science and engineering. “Evaluating the model with data collected after the clinician has already suspected sepsis onset can make the model’s performance appear strong, but this does not align with what would aid clinicians in practice.”

While the success of sepsis prediction models has been mixed, researchers are working to address common pitfalls in tools like the ESM.

In January, a research team from the University of California (UC) San Diego School of Medicine shared that their AI model reduced sepsis mortality significantly following deployment in emergency departments.

The tool, known as COMPOSER, uses deep learning to continuously monitor patients as soon as they are checked into the emergency room. The tool then uses 150 real-time variables known to be associated with sepsis, such as lab results and demographics, to predict the condition’s onset.

When patients are flagged as high-risk, care teams are notified via the EHR system. Since implementation, the approach has resulted in a 17 percent reduction in mortality.
