
Value of an Evidence-Based AI Development and Deployment Approach

Modeled on the evidence-based medicine movement, an evidence-based AI development and deployment approach could revolutionize AI practices in healthcare.

An evidence-based AI development and deployment approach could address systemic issues and help standardize AI practices in healthcare.

"We need a movement for the health AI industry that is analogous to the evidence-based medicine movement in the 1990s," Joachim Roski, principal at Booz Allen, told HealthITAnalytics.

Using the evidence-based medicine movement as a guide, Roski suggested that health AI experts adopt an approach that takes past experience and existing risk and performance assessments into account to inform the creation of standardized guidelines for AI developers and practitioners.

USING THE EVIDENCE-BASED MEDICINE MOVEMENT AS A MODEL FOR AI

Evidence-based medicine is "a systematic approach to medicine in which doctors and other health care professionals use the best available scientific evidence from clinical research to help make decisions about the care of individual patients," according to the National Institutes of Health (NIH).

"A physician's clinical experience and the patient's values and preferences are also important in the process of using the evidence to make decisions. The use of evidence-based medicine may help plan the best treatment and improve quality of care and patient outcomes."

Evidence-based medicine still requires physicians to use their best judgment for their patients, but it also pulls from other disciplines and techniques such as epidemiology and risk-benefit analysis. Physicians can then translate that evidence into treatment plans that best suit their patients.

When the approach took hold in the 1990s, Roski explained, physician associations translated evidence-based medicine into clinical practice guidelines (CPGs).

"Clinical practice guidelines are statements that include recommendations intended to optimize patient care. They are informed by a systematic review of evidence, and an assessment of the benefits and harms of alternative care options," the American Academy of Family Physicians (AAFP) states.

"CPGs should follow a sound, transparent methodology to translate best evidence into clinical practice for improved patient outcomes. Additionally, evidence-based CPGs are a key aspect of patient-centered care."

After physician groups established CPGs, they created an accountability system and performance metrics to ensure that physicians followed these guidelines.

"What I'm suggesting is we are now in a similar phase with health AI," Roski continued.

"Billions of dollars are being poured into the development of solutions for all kinds of AI applications ranging from patient self-help applications to applications that might assist with diagnostic or treatment decisions."

AI in healthcare is advancing rapidly, but the industry standards and best practices needed to keep these technologies safe and secure have not kept pace.

CURRENT CHALLENGES WITH AI DEVELOPMENT AND DEPLOYMENT

Bias in AI remains one of the most difficult challenges developers and clinicians face. A 2019 study published in Science revealed that a widely used algorithm guiding healthcare decisions exhibited significant racial bias.

The algorithm relied on past health costs as an indicator of future healthcare needs. However, a substantial body of evidence shows that Black patients face disparities in healthcare access, use, and quality in the US. Social determinants of health (SDOH) continue to limit access to care and cause health inequities today.

As a result, the algorithm scored Black patients as healthier than equally ill White patients and concluded that they needed less care. Using health costs as a proxy for health needs embedded this bias into the algorithm and distorted its predictions.
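
To make the proxy problem concrete, the sketch below shows how ranking patients by observed costs, rather than by true health need, systematically under-enrolls a group that spends less on care for the same level of illness. The synthetic population, numbers, and variable names are illustrative assumptions for this article, not the study's actual data or model.

```python
import numpy as np

# Hypothetical synthetic population: two groups with identical true
# illness, but group B incurs lower costs due to unequal access to care.
rng = np.random.default_rng(42)
n = 100_000
group = rng.integers(0, 2, n)                 # 0 = group A, 1 = group B
illness = rng.gamma(2.0, 1.0, n)              # true health need, same distribution
spend_rate = np.where(group == 1, 0.6, 1.0)   # group B spends less per unit of need
cost = illness * spend_rate * rng.lognormal(0.0, 0.2, n)  # observed spending

# Proxy-based rule: enroll the top 10% of patients by observed COST.
by_cost = cost >= np.quantile(cost, 0.90)
# Need-based rule: enroll the top 10% by true ILLNESS.
by_need = illness >= np.quantile(illness, 0.90)

for g, name in [(0, "group A"), (1, "group B")]:
    mask = group == g
    print(f"{name}: enrolled by cost {by_cost[mask].mean():.1%}, "
          f"enrolled by need {by_need[mask].mean():.1%}")
# The cost-based rule enrolls group B at a far lower rate than group A,
# even though the two groups are equally ill -- the label choice, not
# patient health, drives the gap.
```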

"We now have a situation where there is a great deal of variability in terms of driving AI solutions. And, we also have an increasing number of publications that point out risks and practices that may lead to suboptimal AI solutions," Roski maintained.

"Right now, there are no standards by large for developing AI solutions. It's really up to the entrepreneurs to use their ingenuity, and then for the market to decide if it worked or didn't work."

With an evidence-based AI approach, it would be difficult to overlook the evidence of systemic bias in the US healthcare system while developing and deploying an AI algorithm. Checks and balances would be in place to ensure ethical and evidence-based AI practices.
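
What could such a check look like in practice? One possibility, sketched below, is a routine pre-deployment audit that measures how well a model's scores capture truly high-need patients in each demographic group and flags any group that falls short. The function name, enrollment cutoff, and tolerance are hypothetical illustrations, not an established standard.

```python
import numpy as np

def subgroup_audit(scores, needs, groups, top_frac=0.10, tol=0.02):
    """For each group, measure the share of truly high-need patients that
    a score-based enrollment rule would capture, and flag any group whose
    share falls more than `tol` below the overall rate. The cutoffs here
    are illustrative assumptions, not regulatory requirements."""
    selected = scores >= np.quantile(scores, 1 - top_frac)
    high_need = needs >= np.quantile(needs, 1 - top_frac)
    overall = selected[high_need].mean()  # overall recall of high-need patients

    report = {}
    for g in np.unique(groups):
        mask = high_need & (groups == g)
        recall = selected[mask].mean()
        report[g] = {"recall": recall, "flagged": overall - recall > tol}
    return report
```

Run against the synthetic population above with cost as the score, an audit like this would flag group B; with true illness as the score, no group is flagged. The specific thresholds matter less than making the check systematic rather than leaving it to each developer's discretion.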

IMPLEMENTING AN EVIDENCE-BASED AI DEVELOPMENT AND DEPLOYMENT APPROACH

The evidence-based medicine movement may serve as a valuable template for health AI professionals to promote the widespread use of evidence-based AI development and deployment. But successfully implementing these strategies will also require public and private sector collaboration.

"The public sector has an initial opportunity to invest in relevant research," Roski suggested.

Public sector funding could put research money toward developing better AI solutions and developing industry standards and best practices.

The private sector, Roski proposed, has the opportunity to glean insights from the current state of research to inform best practices.

"Again, this is analogous to what happened in the 1990s when different physician associations studied relevant research to inform clinical practices," he reiterated.

"Endocrinologists did the same for people with diabetes, and primary care physicians evaluated evidence around preventive care practices. That could also happen in the AI industry, where industry and voluntary associations evaluate the available evidence and try to promulgate it amongst itself."

The US Food and Drug Administration (FDA) also has a role in implementing evidence-based AI development. The FDA said it created its Digital Health Software Precertification (Pre-Cert) Pilot Program to "inform the development of a future regulatory model that will provide more streamlined and efficient regulatory oversight of software-based medical devices developed by manufacturers who have demonstrated a robust culture of quality and organizational excellence."

However, as Roski noted, the FDA only has the authority to assess the safety of diagnostic and treatment products. Products used for research and public health applications are outside the FDA's purview.

Regulatory bodies will need to collaborate with public and private sector entities to develop standards that demand accountability and ensure safety.

"What I think distinguishes the health space from many other areas of applications of AI solutions, be it banking, the finance industry, or the retail industry, is that there is a real research machine that is driven by creating evidence and public publishing on this evidence," Roski pointed out.

With a trove of research and plenty of past mistakes to learn from, AI developers and implementers should, in theory, have enough evidence to shape a better approach to development and deployment.

"I would encourage AI developers to drive their trade associations to get engaged with this topic and demand the formulation of such standards," Roski advised.

"In the meantime, try to read up on what the most egregious problems have been in the past. Don't repeat the same mistakes that others have already made."
