Will Clinicians Become Dependent on Artificial Intelligence Tools?
Providers from Sentara and UC San Diego Health weigh in on whether clinicians will come to rely too heavily on AI and what to do about it.
Artificial intelligence (AI) technologies have garnered much attention in the healthcare industry in recent years, but the hype raises significant questions and concerns.
How should these technologies be developed? For what use cases? With whose data? How can these tools be designed to avoid bias and promote health equity? How should such models be regulated? Should they be deployed in the clinical setting or only for administrative tasks?
Answers to these questions have been proposed and debated for as long as health AI has been discussed. However, another question is particularly salient as health systems begin deploying AI in earnest: will clinicians become dependent on these technologies, and how should that be handled?
In recent interviews with HealthITAnalytics, Joseph Evans, MD, FACP, vice president and chief health information officer at Sentara Healthcare, and Christopher Longhurst, MD, MS, chief medical officer and chief digital officer at UC San Diego Health, discussed these questions and how health systems interested in pursuing AI deployment can navigate concerns about their clinicians relying too heavily on the tools.
HOW LIKELY ARE CLINICIANS TO BECOME DEPENDENT ON AI?
Evans and Longhurst both emphasized that clinicians are not likely to become over-reliant on AI as long as they can understand the reasoning behind AI recommendations and view the tools as assistants rather than replacements for their expertise.
“This is actually an issue that [healthcare has] faced for many decades. I was just looking at a 2012 paper that was a systematic review of all the studies done on automation bias in healthcare in the 1980s, '90s, and 2000s. And so, this is really not a new topic,” Longhurst explained.
He indicated that during the introduction of electronic health records (EHRs) and clinical decision support systems, there were concerns about the tools creating automation bias in clinicians, leading to over-reliance.
However, Evans noted that most clinicians want insights into how an AI tool works and are unwilling to adopt a tool they don’t understand.
“In general, clinicians are pretty good about [and] really love being able to see what's in the black box,” he said. “[They want to know] ‘Why does it think my patient has a likelihood of being admitted to the hospital this year or going to the ED?’ and being able to see all those variables that went into [the AI’s prediction].”
Evans stated that, if anything, clinicians have a “hunger” for increased transparency and insights into the “why” of AI-generated recommendations. In short, clinicians don’t want black-box solutions; they want a co-pilot that helps them do their jobs better.
“As long as clinicians know why a tool arrived at its conclusion, most will embrace and welcome it,” he indicated.
Current data reflect this: many reports indicate that providers’ chief concerns about AI center on how the tools integrate into clinical workflows and whether clinicians receive appropriate training in a tool’s use.
Longhurst echoed these sentiments, noting that today, automation bias exists in many industries, including healthcare. AI and generative AI, in particular, provide many new opportunities to bolster clinical decision support systems. But, because advanced technology is not new in healthcare, experts already have some ideas on preventing and addressing concerns about over-reliance.
PREVENTING AND ADDRESSING OVER-RELIANCE
“The more that we can create that transparency, especially around the newer [technologies] in the hype cycle — large language models, ambient listening, and summarization tools — the onus is always on the clinician,” Evans explained.
Just as clinicians are responsible for signing notes when leveraging voice recognition technology, he noted, they should be actively involved in the AI-driven decision-making process. He also advocates for establishing guardrails and other governance structures when AI is used to influence care decisions.
Evans further indicated that Sentara’s approach typically involves deploying an AI tool without significant fanfare and assessing its performance over time. This allows the organization to test the tools in the clinical environment without care teams relying too heavily on them.
If a tool performs well, it is continuously monitored under a standard framework to detect model “drift,” a phenomenon in which a model’s accuracy degrades over time as the data it sees in production shift away from the data it was trained and validated on.
This helps ensure that the tool’s performance remains aligned and consistent with expectations over time, which is crucial for models that could impact patient care and safety. If clinicians understand how the AI generates its outputs, they can also flag potential issues with model performance and prevent adverse outcomes.
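To make the idea of drift monitoring concrete, the sketch below shows one common way it can be implemented: comparing the distribution of a model input seen in production against the distribution seen at validation time. It is illustrative only and does not represent Sentara’s actual framework; the feature, the synthetic data, and the alert threshold are all hypothetical.

```python
# Minimal sketch of input-data drift detection for a deployed clinical model.
# Illustrative only -- not Sentara's monitoring framework. Feature choice,
# data, and threshold are hypothetical.
import numpy as np
from scipy.stats import ks_2samp

DRIFT_P_VALUE = 0.01  # hypothetical alert threshold


def check_feature_drift(baseline: np.ndarray, recent: np.ndarray) -> bool:
    """Compare a recent window of a model input against its validation-time
    baseline with a two-sample Kolmogorov-Smirnov test; return True if the
    distributions differ enough to warrant review."""
    statistic, p_value = ks_2samp(baseline, recent)
    return p_value < DRIFT_P_VALUE


# Example: patient age at prediction time, baseline vs. last 30 days (synthetic).
rng = np.random.default_rng(0)
baseline_ages = rng.normal(62, 12, size=5_000)  # population seen at validation
recent_ages = rng.normal(68, 12, size=1_000)    # population has shifted older

if check_feature_drift(baseline_ages, recent_ages):
    print("Input drift detected: flag the model for governance review.")
```

In practice, a monitoring framework would run checks like this on a schedule across many inputs and on the model’s outputs and accuracy as well, but the principle is the same: compare what the model sees now with what it was validated on, and alert humans when the two diverge.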
UC San Diego Health utilizes a similar approach, with AI tools for potential use in the health system undergoing assessment by a clinical decision support committee and an AI ethics committee. The organization also relies on a set of AI principles to help ensure that the tools are used safely and effectively.
One of these core principles is that there is always a human in the loop when deploying an AI tool. Longhurst illustrated this by describing the method UC San Diego Health used when incorporating generative AI into its EHR system to help clinicians respond to patient messages.
Essentially, clinicians must review and edit an AI-drafted response before it’s sent to a patient, which avoids some of the potential for automation bias.
“Our doctors have been using templates to respond to patient messages for a long time,” Longhurst indicated. “Now, we're constructing a draft patient message that a physician is editing… There is no ‘send now’ button.”
Instead, a clinician can ‘edit draft message’ or ‘start [a] blank reply.’
“That ensures there's always going to be a human in the loop because there's no opportunity for a physician to just send that message without editing,” he explained. “And so, with rolling this out, we reminded our clinicians, just like any note template, they are ultimately accountable for what's written there.”
Longhurst further noted that clinicians in the health system undergo ongoing training on how to use these templates and other AI tools responsibly. While these may make tasks like writing notes or responding to patients easier, “at the end of the day, that note that they sign is a medical-legal document, and everything in there had better be accurate,” he said.
UC San Diego Health also prioritizes being “maximally transparent” with patients about using AI and automation. Messages sent using the EHR-integrated generative AI include a signature line at the bottom indicating that the response was automatically generated, along with the name of the clinician who edited it.
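The workflow Longhurst describes can be pictured as a simple application-level gate. The sketch below is a hypothetical illustration, not UC San Diego Health’s or any EHR vendor’s actual code, and all names are invented: it refuses to send an AI draft verbatim and appends a transparency footer like the signature line described above.

```python
# Minimal sketch of a human-in-the-loop gate for AI-drafted patient messages.
# Illustrative only -- not UC San Diego Health's or any EHR vendor's actual API.
# Class, field, and function names are hypothetical.
from dataclasses import dataclass


@dataclass
class MessageReply:
    ai_draft: str        # text proposed by the generative model ("" if none)
    clinician_text: str  # text the clinician actually submits
    clinician_name: str


def submit_reply(reply: MessageReply) -> str:
    """Allow sending only when a human has authored or modified the content.
    There is deliberately no path that forwards the AI draft untouched."""
    if reply.ai_draft and reply.clinician_text.strip() == reply.ai_draft.strip():
        raise ValueError("Draft must be edited (or a blank reply started) before sending.")
    if not reply.clinician_text.strip():
        raise ValueError("Reply is empty; the clinician must author the message.")
    # Transparency footer, mirroring the signature line described above.
    footer = f"\n\nThis response was drafted automatically and reviewed and edited by {reply.clinician_name}."
    return reply.clinician_text + footer


# Example: the clinician shortens and personalizes the draft before sending.
draft = "Your lab results are back and look normal. No follow-up is needed."
sent = submit_reply(MessageReply(
    ai_draft=draft,
    clinician_text="Good news: your labs are normal. See you at your annual visit.",
    clinician_name="Dr. Example",
))
```

The design choice the code reflects is the one Longhurst highlights: removing the “send now” path entirely, so accountability for the final message always rests with the clinician.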
Longhurst noted that this approach has garnered significant positive feedback from patients.
“They appreciate the transparency, but they also appreciate that our clinicians are busy people. Sometimes our patients feel bad about messaging them, [but] we don't want them to feel that way… If I'm a patient, I'm also glad that my doctor reviewed it, and I didn't just get the automated reply.”
Evans and Longhurst indicated that by committing to using AI transparently to augment clinicians’ decision-making, health systems can explore the potential of these tools without rushing into AI adoption.
ADVICE FOR HEALTH SYSTEMS PURSUING AI
Longhurst underscored that healthcare is inherently risk-averse, making hesitation to deploy these tools understandable.
Smaller healthcare organizations are often not in a position to be early adopters of AI technology, but they can still benefit from others’ experiences using the tools.
“Because places like UC San Diego Health do have resources allocated to this and are functioning as early adopters, one of the things we're doing is working very hard with our teams to mitigate these unintended consequences and testing these systems to ensure they're delivering outcomes we want,” Longhurst explained. “When that type of evidence comes to light, hospitals and practices can feel a greater degree of confidence that they're adopting something that's been tested rigorously.”
He likened the process to testing a newly developed medicine, which requires clinical trials to ensure that it is safe and effective and has minimal side effects.
In the context of healthcare AI, Longhurst advocates for using a similar method: testing tools, gathering data and evidence, publishing the findings, and applying an evidence-based approach to AI adoption.
Longhurst noted that by applying this evidence-based informatics approach to healthcare technology, early adopters can play a key role in ensuring that AI technologies are ethical, effective, and responsibly deployed.
“[Healthcare organizations] don't all have to adopt this immediately,” he said. “We don't all have to run out and contract with vendors. Let's figure out what actually makes a difference, and let's figure out the most efficient ways to [utilize AI].”
For health systems that do want to be early adopters of these technologies, Evans recommended creating an AI governance structure that prioritizes transparency and guides how stakeholders approach concerns around the tools.
“One of the key tenets here is there always has to be clinician oversight,” he noted. “Then, as [stakeholders] evaluate these different solutions, account for patient privacy, data governance, and transparency.”
Doing so requires testing that a tool is fair and unbiased, but also necessitates ongoing monitoring throughout the model’s lifecycle to avoid performance dips and other issues. Further, Evans indicated that a crucial part of any AI governance structure is assembling a multidisciplinary team of clinicians, legal advisors, and experts in privacy, risk management, diversity, bias, and other areas.
He noted that if healthcare organizations establish that structure and stick to it, they will have better chances of deploying AI safely.
Evans and Longhurst both indicated that they see significant potential for AI in the healthcare sector, particularly for supporting the workforce and handling administrative tasks.
“One of the places where AI has the most potential is reducing the administrative overhead,” Longhurst explained. “All the people we employ at revenue cycle and coding and other things could be made more efficient by these tools that help them do their job in a way that requires less resources.”
“All of our clinician workforce is stressed, and being able to [take advantage] of those low-risk entry points to leverage [AI] is a great opportunity,” Evans stated.