
CDS EHR Integration Boosts Patient Problem List Documentation

The CDS EHR integration led to a 4.6-fold increase in patient problem list documentation, with an alert acceptance rate of 22 percent.

A clinical decision support (CDS) EHR integration improved patient problem list documentation but did not contribute to improvements in clinical quality measures, according to a study published in JAMIA.

Using structured EHR data, researchers developed algorithms to infer clinical problems for 12 heart, lung, and blood diseases. They then implemented a CDS intervention that prompted providers to add missing problems to the list.
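For illustration, the sketch below shows how this kind of inference could work in principle: structured data such as labs and medications imply a problem, and an alert fires only when that problem is missing from the documented problem list. This is a minimal hypothetical example, not the study's actual algorithms; the rules, thresholds, and code names are invented for clarity.

```python
# Minimal illustrative sketch of rule-based problem list gap detection.
# NOT the study's algorithm: the rules, thresholds, and labels below are
# hypothetical examples of inferring a problem from structured EHR data
# and flagging it when it is absent from the problem list.

from dataclasses import dataclass, field


@dataclass
class PatientRecord:
    problem_list: set[str] = field(default_factory=set)          # documented problems
    medications: set[str] = field(default_factory=set)           # active medication classes
    lab_results: dict[str, float] = field(default_factory=dict)  # latest lab values


def infer_missing_problems(record: PatientRecord) -> list[str]:
    """Return inferred problems that are not yet on the problem list."""
    inferred = set()

    # Hypothetical rule: an elevated A1c suggests diabetes.
    if record.lab_results.get("hba1c", 0.0) >= 6.5:
        inferred.add("diabetes_mellitus")

    # Hypothetical rule: a loop diuretic plus an elevated BNP suggests heart failure.
    if "loop_diuretic" in record.medications and record.lab_results.get("bnp", 0.0) > 400:
        inferred.add("heart_failure")

    # Only problems not already documented would trigger a CDS alert.
    return sorted(inferred - record.problem_list)


if __name__ == "__main__":
    patient = PatientRecord(
        problem_list={"hypertension"},
        medications={"loop_diuretic"},
        lab_results={"bnp": 850.0, "hba1c": 7.1},
    )
    # A point-of-care alert would prompt the provider to add these problems.
    print(infer_missing_problems(patient))  # ['diabetes_mellitus', 'heart_failure']
```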

The study evaluated the intervention at four healthcare systems using three different EHR systems.

Overall, the intervention led to a 4.6-fold increase in problems added to problem lists.

While the alert acceptance rate (22 percent) was higher than those reported in many other CDS studies, the authors emphasized that providers did not act on 78 percent of alerts.

"Some of the alerts were likely false positives; however, given the high positive predictive value (PPV) of our alerts identified during early testing, we expected more of them to be accepted," the authors wrote.

They noted that the causes for alert nonacceptance are likely multifactorial. For instance, clinicians may not have read the alerts or may not have thought it was their responsibility to add the problem to the problem list.

"Further, providers receive many other types of alerts in the EHR, and override many of them—these competing alerts may have contributed to alert fatigue, distracting providers from our problem list alerts," the researchers suggested.

The authors noted that alternative strategies to improve problem list documentation might be more effective.

For instance, in a separate study, the researchers found that an intervention in which the health system paid residents $1.45 per chart to review patient records and confirm whether a patient had a splenectomy (drawn from a list generated by a splenectomy-detection algorithm) was twice as effective as a point-of-care alert.

The researchers hypothesized that the CDS intervention would lead to improvements in clinical quality measures. However, analysis of the National Committee for Quality Assurance Healthcare Effectiveness Data and Information Set (NCQA HEDIS) clinical quality measures revealed that the intervention did not improve quality.

The study authors proposed several possible explanations for why the CDS intervention did not improve clinical quality.

"First, even after the intervention, many patients still had problem list gaps," they said. "Second, HEDIS measures may not be an accurate reflection of the true quality of care provided."

Lastly, they noted that the link between problem list documentation and clinical quality might, in fact, not be very strong.

One of the healthcare systems included in the study, Mass General Brigham, has downstream CDS systems related to various HEDIS measures that use the problem list and relevant clinical data to make recommendations.

"If problem list usage increases, this CDS may recognize more patients who have problems and offer more alerts," the study authors wrote. "However, the downstream CDS at MGB had a relatively low acceptance rate, attenuating the possible causal chain from the problem list alerts to better quality through downstream CDS."
