How health systems are building clinician trust in AI
Mayo Clinic, Vanderbilt University Medical Center and Duke Health leaders highlighted strategies to foster trust in AI tools and support successful implementation efforts.
The AI buzz is alive and well, with the technology expected to remain integral to health systems' IT plans in 2025. As health systems strategize about how best to identify and integrate AI tools, they must ensure that their clinical staff are on board. Without support from these critical stakeholders, AI implementation efforts could be dead in the water.
A recent AMA survey revealed that physician enthusiasm about health AI is on the rise overall. Among 1,183 physicians polled, the share of physicians whose enthusiasm exceeded their concerns with AI increased from 30% in 2023 to 35% in 2024. However, most physicians noted that a designated feedback channel (88%) and data privacy assurances (87%) are critical to facilitating AI adoption among clinical teams.
Provider trust in AI is also critical because provider support is closely tied to patient trust. A study published in 2023 showed that though patients were almost evenly split when asked whether they would prefer a human clinician or an AI tool, they were significantly more likely to accept the use of the AI tool if they were told their provider supported the tool's use and felt it was helpful.
AI adoption and utilization begin with clinical teams. Leaders from some of the country's top health systems told Healthtech Analytics about their clinicians' most pressing concerns with AI and how they mitigated them to ensure successful AI implementations.
Health systems grow AI efforts, but clinician concerns linger
Though clinician concerns regarding AI use in healthcare vary, one common concern is AI model accuracy.
When Vanderbilt University Medical Center began implementing AI tools, clinicians had a host of questions for leadership, said Adam Wright, PhD, professor of biomedical informatics and medicine at VUMC and director of the Vanderbilt Clinical Informatics Center.
The health system is using AI, including deep learning and neural network-based systems, to enhance clinical decision support and ease administrative burdens. AI use cases in the provider organization include sepsis management, predicting patient deterioration and capacity management. The system is also exploring generative AI tools to help draft messages to patients or appointment summaries.
But, before AI tools were implemented for the above use cases, clinicians wanted information about the algorithms and models involved.
"How accurate are they? Were they trained on Vanderbilt data?" said Wright. "Do they represent the kinds of patients that we see? People were also really interested in transparency of the models. So, can I understand who developed the model, how it was developed, how up-to-date it is?"
At Duke Health, clinician concerns also centered on AI safety, efficacy and equity, said Eric Gon-Chee Poon, MD, chief health information officer for Duke Medicine and primary care internal medicine provider at the Durham Medical Center.
Duke Health has integrated AI into various clinical areas, including sepsis management. The health system has developed a sepsis algorithm that alerts clinicians when a patient is at risk of developing sepsis.
"Today, all the patients in our emergency room benefit from this algorithm in the background," Poon explained. "You can think of that as a guardian angel that is watching over every single patient."
The algorithm determines if a patient is at high risk of developing sepsis and sends an alert to a rapid response team, which then intervenes, he said.
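Duke Health has not published the internals of its model, but the workflow Poon describes -- a score computed in the background for every emergency room patient, with an alert paged to a rapid response team once a threshold is crossed -- can be illustrated with a minimal sketch. Everything below (the crude criteria-counting scorer, the 0.75 cutoff and the paging stub) is a hypothetical stand-in, not Duke's algorithm.

```python
from dataclasses import dataclass

@dataclass
class Vitals:
    """Simplified snapshot of the signals a sepsis model might consume."""
    heart_rate: int   # beats per minute
    temp_c: float     # body temperature, Celsius
    resp_rate: int    # breaths per minute
    wbc_count: float  # white blood cell count, 10^3 cells/uL

def sepsis_risk(vitals: Vitals) -> float:
    """Placeholder scorer: a production system would call a trained model.
    Here we just count crude SIRS-style criteria and map them to [0, 1]."""
    flags = [
        vitals.heart_rate > 90,
        vitals.temp_c > 38.0 or vitals.temp_c < 36.0,
        vitals.resp_rate > 20,
        vitals.wbc_count > 12.0 or vitals.wbc_count < 4.0,
    ]
    return sum(flags) / len(flags)

ALERT_THRESHOLD = 0.75  # hypothetical cutoff; real thresholds are tuned and validated locally

def notify_rapid_response(patient_id: str, score: float) -> None:
    """Stub for paging the rapid response team (in practice, an EHR or paging integration)."""
    print(f"ALERT: patient {patient_id} sepsis risk {score:.2f} -- rapid response paged")

def screen_patient(patient_id: str, vitals: Vitals) -> None:
    """Background check: score the patient, alert only when the threshold is crossed."""
    score = sepsis_risk(vitals)
    if score >= ALERT_THRESHOLD:
        notify_rapid_response(patient_id, score)

# Example: vitals that trip three of the four crude criteria trigger an alert.
screen_patient("ED-1042", Vitals(heart_rate=118, temp_c=38.6, resp_rate=24, wbc_count=9.5))
```

The point of the sketch is the shape of the workflow the clinicians are trusting: the model runs quietly in the background, and staff are interrupted only when the score clears a locally validated threshold.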
In addition, the health system is using AI to enhance operations, such as operating room scheduling, clinical documentation and revenue cycle management. With AI spreading across the health system, Duke Health leaders had to ensure clinicians understood the technology, including its potential limitations.
In addition to AI accuracy, clinicians raised concerns about workflow integration.
Wright underscored that clinician buy-in rests on setting up workflows where the clinician gets the right suggestion at the right time.
"If we show an accurate suggestion at the wrong time in the workflow, it's totally worthless," he said. "And so figuring out who needs to see the result of an AI tool, when do they need to see it, what actions might they take on it, how do we make those actions as easy as possible? That is, in my opinion, even more important than getting the model right. It's getting the workflow right."
Workflow integration requires a nuanced approach. Mark Larson, MD, a practicing gastroenterologist and medical director for the Mayo Clinic Platform -- a Mayo Clinic division focused on using technology and data to enhance healthcare -- noted that physicians working in different specialties and with varying levels of expertise will have varied experiences with AI.
"Someone in primary care might have a very different experience with an AI tool than someone with subspecialty expertise because the limitations of AI are what data set it has trained off of," he said. "And there may be experts or specialists that have a database of knowledge that an AI tool just can't compete with. On the other hand, there may be entry-level providers, residents, students who have limited experience with patient decision-making, where the AI tool might be extremely helpful."
Mayo Clinic is "carefully and cautiously" dipping its toes into AI, Larson added. The health system is examining how AI tools could ease repetitive administrative tasks in addition to enhancing clinical decision-making.
Alleviating clinician concerns is critical for health systems that aim to grow AI use within their facilities. However, a one-size-fits-all approach is likely not the answer. Health system leaders must develop diverse approaches to gain and maintain clinician trust in AI.
Addressing clinician concerns and maintaining trust in AI
Building clinician confidence in AI tools and approaches begins with helping clinicians understand the tool and how it was developed. But the technology must first undergo a comprehensive assessment to give clinicians the assurance they need regarding accuracy.
At Mayo Clinic, clinician working groups are tasked with this assessment.
"Mayo Clinic gets a working group or task force to study [the AI tool] very carefully to validate it, to make sure it does what it's supposed to do and then to give it the approval, 'yes or no,' based on that," said Larson.
This can help leaders answer clinicians' questions about the technology as soon as it is introduced, which can go a long way toward building trust in the tool.
Almost every Mayo Clinic division and department has an AI working group that explores how AI can help that part of the health system. This enables the health system to customize AI adoption on a department-by-department basis. Larson noted that some groups, like neurosurgery, may have limited applications where AI would be helpful, while others, like primary care, may present more opportunities for AI.
Duke Health takes a similar approach, piloting AI tools with progressively larger groups of clinicians who provide feedback.
"That's an opportunity when you put the technology into the hands of a group of clinicians," Poon said. "What do they think about it and how does it fit in the workflow? And is there anything that's concerning, any of the imperfections that are potentially dangerous, if they are not caught?"
He further emphasized that tools may look great in the marketplace, but health systems need to embrace the idea that not all tools will work as advertised. The AI adoption process requires trial and error, and leaders must be transparent with their clinicians about AI's potential limitations.
For example, ambient AI technology is increasingly applied to ease clinical documentation burdens.
"The technology is not perfect," Poon said. "[The notes] that come back absolutely need to be reviewed, but, in general, for most clinicians saving so much time -- folks understand in spite of the limitations, it is improving the quality of care and the personal lives [of clinicians]."
While transparency in the AI adoption process can spur greater trust among clinicians, leaders must also ensure that AI adoption is thoughtful. Mistrust can arise when clinicians feel leadership is chasing the next shiny object in the health tech world.
"I think one of the mistakes that we sometimes make is having a new method we're really interested in or a new model that we found or that a vendor presented to us or that somebody kind of brought to us," Wright said. "And it becomes, in some ways, a solution in search of a problem. We've got this solution now; what's the problem we're going to apply it to? We've had much more success when we do it in the other direction. So, we try to look around and say, what are the quality issues or safety issues or workflow issues in the hospital? And then how could we deploy AI to improve those issues?"
He added that clinicians can play a critical role in helping leaders identify those issues, which is why it is important to include them in early discussions on AI adoption.
Bringing clinicians into the AI adoption process early can also help health system leaders develop adequate training resources and ensure seamless integration into clinical workflows. Poon echoed Wright, pointing out that if the AI tool is unable to provide the right information at the right time to the right person in an easily accessible way, clinicians can grow frustrated.
"If you don't obey the tried-and-true axioms of how you use technology to influence decision-making, the technology is not going to work," he said. "So, for example, if you expect busy clinicians to go out of the way to click a button to pop up a report and say, 'Okay, what is the risk of the patient not doing well here?' They're not going to go out of the way and do that. The input needs to be at the right time at the right place, and it's going to actually influence decision making."
Further, clinicians are more likely to accept AI tool recommendations and capabilities if they know they have the final say regarding patient care. Given AI's limitations, clinical oversight is critical to patient safety.
At Mayo Clinic, clinicians are ultimately responsible for what happens during a patient encounter regardless of AI use.
"For example, if you use an ambient listening tool, it's the duty and responsibility of the provider to review that note that is created and make sure that it is indeed accurate, and it summarizes the conversation appropriately, and that there aren't any missed words in there that might be misleading," he said.
Tool transparency, clinician involvement and clinician oversight are some of the tenets of building trust in AI; maintaining that trust requires an ongoing feedback loop.
According to Wright, VUMC conducts surveys and feedback sessions to understand gaps in clinician understanding of AI tools, instances where AI accuracy is eroding and usability challenges. The health system then makes efforts to improve the tools or adjust where in the clinical workflow they are integrated.
"If the user says, ‘I didn't understand why I got this suggestion,’ we would try to explain to them why they got that suggestion, but we would also try to go back and either make the system more accurate or at least make it more transparent and clear," he said.
Similarly, Duke Health and Mayo Clinic rely on ongoing feedback to maintain clinician trust in AI.
Poon noted that simply deploying the technology and ensuring initial adoption is not enough, especially with AI, where the stakes and uncertainties are higher. In addition to continually monitoring AI tool accuracy, the health system offers mechanisms for clinicians to raise concerns promptly.
At Mayo Clinic, the departmental working groups are responsible for ongoing assessments of tools and incorporating feedback to ensure the tools are not falling short in real-world clinical settings.
"We have certain set points to look back on as a group and say, ‘Is this tool, this solution, doing what it was promised to do? How can we make it better? What modifications do we have to the best practices, policy, et cetera.?’" Larson said.
The work of AI adoption does not end with placing the technology in clinicians' hands and telling them to use it. As Larson put it, at the end of the day, AI technology is a tool in a toolbox, much like a hammer. And like a hammer, it cannot be wielded without understanding and trust.
"You can't just put a hammer in your hand and expect it to do a job," he said. "You have to learn how to use that hammer. We have to learn how to use AI tools to help us, and I think we're excited about the opportunities and what they can do, but we're also cautiously moving forward, making sure that we study them carefully, we validate them, which is really critical to ensure that they actually do what they say they're going to do."
Anuja Vaidya has covered the healthcare industry since 2012. She currently covers the virtual healthcare landscape, including telehealth, remote patient monitoring and digital therapeutics.