Researchers Outline AI-Related Clinical Competencies for Health Professionals

Researchers at Vanderbilt University Medical Center and IBM Watson Health have defined guidelines for how health professionals might best prepare themselves to evaluate and work with artificial intelligence.

A research report published in Academic Medicine outlines artificial intelligence (AI)-related clinical competencies for healthcare professionals in an effort to address practical and ethical concerns raised by the growing interest in and adoption of these tools in clinical settings.

Efforts to create responsible health AI and define guidelines for use have become more common in response to these concerns, but many are focused on the developers of the algorithms. By focusing on the clinician’s side of this issue, the research team highlights the shared responsibility of ethical health AI use among various healthcare stakeholders.

“We started with a formal literature review of the health professions education and informatics literature, finding that there’ve been numerous calls for a list of AI-related competencies, because of the complexity of the tools and the risk of misuse and unintended consequences,” said Bonnie Miller, MD, professor of medical education and administration at Vanderbilt University Medical Center (VUMC), in the press release. “But we could find no list of competencies, and very scant mention of how clinicians were being trained to use AI-based tools as they’ve begun entering into testing.”

To fill this gap, the researchers selected 15 subject matter experts from across the US and conducted semistructured interviews focused on the use of AI-based tools in healthcare settings.

“In interviews our subject experts expressed a mix of optimism and caution,” noted Regina Russell, PhD, assistant professor of medical education and administration at VUMC, in the press release. “They recognize the great potential of these new technologies to support health, but the need for caution was repeatedly expressed around issues of bias and fairness as AI-based tools are rolled out.”

“It’s well understood that AI-based tools can exacerbate biases present in the data used to train them,” she continued. “With health disparities of all sorts having long been baked into our health system and our society, it’s clear that health professionals will need to develop baseline knowledge, skills and attitudes to work appropriately with these tools.”

Interview transcripts were coded and thematically analyzed, and the resulting themes were developed into draft competency statements that were returned to the experts for feedback. The draft statements were then finalized using a consensus approach among the eight members of the research team.

From this process, six competency domain statements and 25 subcompetencies were formulated.

The six competency domain statements are:

1. Basic knowledge of AI
2. Social and ethical implications of AI
3. AI-enhanced clinical encounters
4. Evidence-based evaluation of AI tools
5. Workflow analysis for AI-based tools
6. Practice-based learning and improvement regarding AI-based tools

This report is the latest effort to create guidelines for responsible and ethical AI use in healthcare.

Last week, the Coalition for Health AI shared progress and published papers defining core principles of responsible AI testing, usability, and safety as part of its work on mitigating bias and promoting equity.

Earlier this month, the White House unveiled its Blueprint for an AI Bill of Rights, which outlines five protections Americans should have in the face of the rising use of these technologies.
