
How the conversation around healthcare AI is evolving

Healthcare AI dominated HIMSS25, with leaders sharing both successes and challenges, highlighting the technology's rapid ascent and ongoing evolution.

Another year, another HIMSS where AI was top-of-mind for healthcare providers, payers and technology developers. However, the conversation around health AI has evolved, generating new and important discussions at the widely attended healthcare conference.

With health AI use cases proliferating, the technology is offering exciting new opportunities to ease common provider burdens and enhance patient care. From clinical documentation to risk stratification, AI promises wide-ranging healthcare benefits. However, that promise is tempered by the potential risks of AI use, including biases, errors and data privacy breaches.

Providers and patients have legitimate concerns about AI use; however, AI innovation is marching forward, leaving healthcare stakeholders to wrestle with its pros and cons. With the federal government changing course on how it plans to approach AI regulation, the industry is addressing these challenges and preparing to create guardrails largely on its own.

In this episode of Healthcare Strategies: Industry Perspectives, recorded at HIMSS25, Rebecca Pifer, senior reporter at Healthcare Dive, discusses how stakeholders' approach to healthcare AI is changing, the new AI tools grabbing healthcare leaders' attention and what the lack of federal oversight means for the health AI arena amid rapid innovation.

Anuja Vaidya has covered the healthcare industry since 2012. She currently covers the virtual healthcare landscape, including telehealth, remote patient monitoring and digital therapeutics.

Transcript - How the conversation around healthcare AI is evolving

Rebecca Pifer: It's been really interesting seeing the conversations at HIMSS around artificial intelligence mutate over the past few years, because it has really just moved so incredibly quickly.

Kelsey Waddill: Hello! You're listening to Healthcare Strategies: Industry Perspectives, coming to you from HIMSS 2025 in Las Vegas. I'm Kelsey Waddill, a podcast producer at Informa TechTarget. This is our last episode of this season of Industry Perspectives, the podcast series where we go to healthcare conferences to bring you the best insights. No plane tickets required.

To tie it all together, we will be recapping the main themes of HIMSS '25 with insights from Rebecca Pifer, senior reporter at Healthcare Dive. Rebecca will share what she was hearing from interviews and panels at HIMSS, and we even touch on a potential theme for next year's conference. So, HIMSS, are you listening? Here's Rebecca.

Rebecca, welcome to Healthcare Strategies. Thank you for coming onto the show today.

Pifer: Hi, Kelsey. Thank you so much for having me.

Waddill: My pleasure. So, before we get into the conversation here and all of your experience at HIMSS '25, could you just go ahead and introduce yourself to our audience? Tell them a little bit about yourself and what you do?

Pifer: Yeah, so my name is Rebecca Pifer. I am the senior reporter at Healthcare Dive, covering a variety of topics across the healthcare industry. I focus primarily on health insurers, but I do a lot of work on other topics like digital health and regulatory news as well.

Waddill: Excellent. So, as you were at HIMSS '25, I'm sure you noticed one of the big themes this year was artificial intelligence. It was everywhere. I just wanted to start out big picture: What were some of the key takeaways from the conversations that you had with AI experts at the conference this year?

Pifer: Yeah, AI definitely was everywhere. It's been really interesting, I think to your point, seeing the conversations at HIMSS around artificial intelligence mutate over the past few years because it has really just moved so incredibly quickly. At HIMSS two years ago in 2023, ChatGPT had just come out. So, the conference was very, very focused on generative AI and how the healthcare industry can use the models underpinning generative AI to reduce administrative burden and help with the operations and delivery of patient care. And that really was the focus of the conference.

And then last year, companies pivoted, I think, more to focus on governance. So, okay, this technology exists. We're starting to use it. There are startups and tech giants offering it to us, so who should we partner with and what processes should we stand up to ensure it's doing what it's meant to do, and that we're not introducing any mistakes into clinical care?

And then I think this year brought both together a lot. So there was a lot of hype around the technology as there always is at these HIMSS conferences, mostly around agentic AI, AI agents, and then also multimodal AI and the latest advances here. But then also there were a lot of concerns about governance and what that will continue to look like, especially given what's going on in Washington and given that we're not likely to have a federal roadmap anytime soon for organizations that want to adopt this.

Waddill: Right. Yeah. Thank you for laying out that trajectory. It's interesting how the themes of the past two years have fused into one massive conversation about the challenges we're facing with AI and how to work around them. And I wanted to dig a little bit into some of the issues you just mentioned. As you alluded to, the current administration is leaning less into regulating AI and more into deregulation. And as you mentioned, that's been a major conversation point among healthcare organizations: How do we oversee AI without federal oversight involved, or with less of it? So, I just wanted to hear what you were gathering from the health systems you were talking to: How are they approaching AI oversight in a deregulated era?

Pifer: Yeah, so I think that's really the key question at the moment. This is a huge, huge focus for health companies that are using AI. And I think too, it's worth noting here that predictive AI or more traditional algorithms, these have been in use by healthcare companies for decades. So, it's not that AI itself is new. It's that the current form of AI, especially generative AI, which we're talking about, is significantly more difficult to oversee than more traditional models are.

So, a lot of stakeholders are proponents of risk-based oversight. So, basically, stricter or more rigorous oversight of AI, like generative AI that has a degree of subjectivity, but especially more rigorous oversight of AI that's being used in riskier situations. Essentially, the closer AI gets to affecting patient care, the more of a handle we should have on it. It's one thing to have an algorithm help you assign patients to beds in a hospital, and it's another thing to have a generative AI answer medical questions for you and then put those answers into the EHR. So, what we're talking about with governance depends, I think, a lot on how hospitals are using the tech specifically.

But there's a lot that companies are doing in terms of governance. So maybe starting just with the hospitals: the academic medical centers, the well-funded nonprofits, the companies with the resources to be buying and implementing this technology right now are standing up a lot of governance processes. They say they're being really careful. So, big things that we're seeing are centralized governance bodies. Organizations will pull together stakeholders from a variety of places across the hospital, like lawyers, doctors, nurses, IT staff and members of the C-suite, bringing in all these experts to figure out: How are we going to use AI? How is it going to affect our specific patient population? How are we going to handle oversight? How do we ensure it's not perpetuating biases? Basically, just having these ongoing conversations to figure out how the AI is meant to perform, and then how they're going to track its performance, from the AI being developed all the way to the AI being monitored as it's being used in the system, to make sure that they're catching any errors.

And then there are things that they can think about, like grounding. Does your AI tool cite its sources, basically? And prompt engineering. Are you training your medical staff to know how to ask the AI the right questions so you'll get a more reliable output? So training is very, very important here, and we're seeing a lot of organizations focus on that. And then the big check that a lot of stakeholders are proponents of is something called human-in-the-loop. So, the idea is to have a person checking the AI's output frequently and grading any errors so that the AI can learn from its mistakes and improve over time. And then hopefully you can pull that person out a little bit, and you don't need someone checking the AI 24/7.

And then the tech companies creating these tools say they're doing a lot of backend monitoring and validation work. Same with the EHR companies that are putting AI into clinician workflows, like your Epics, Cerners and Meditechs. They all say they have pretty stringent checks within their workflows as well. On top of that, there are also industry consortia that have been created that hospitals can turn to if they need advice. Some states have laws around governance, and there are patchwork rules around AI from CMS, the FDA and other federal agencies. But yeah, we just don't have an overarching federal roadmap here. So, there's a lot going on in this space, as you can tell from my rant there.

Waddill: No. Yeah, there's a lot. It sounds like organizations are pulling together a ton of different strategies to build scaffolding around their AI projects, programs and initiatives. So that's good to hear. That's also probably a lot for organizations to sift through in terms of what works best for each of them, but great to hear about what's percolating there.

I know that you also mentioned agentic AI, and that's been a major buzzword in the last year or so. So, just trying to cut through some of the hype, especially the kind that can build up at these giant conferences: Did you encounter any best practices around agentic AI implementation that really stuck out to you? And if so, what were they?

Pifer: I think it's a good question. I think AI agents are still relatively new. I'm not entirely sure we're at the best practices stage yet for this technology. Companies like Google, Microsoft and Salesforce, which are either building and selling agents directly to the healthcare industry or helping the healthcare industry build its own agents, just started offering these in the middle of last year to early this year. So, I think we're still very much in the hype phase. Best practices are probably being worked on at the moment, but I think a lot of those would probably be the same as what you're talking about. With agentic AI, if you have these semi-autonomous AI tools, I think a lot of organizations are going to focus really heavily on making sure there's pretty stringent human oversight until the AI has proven itself frequently enough that maybe they can step off a little bit.

Waddill: Yeah, that makes sense. Was there anything in these conversations that surprised you about the direction AI utilization and implementation is going?

Pifer: Yeah, a lot has. That's one thing that's so fun about these big conferences: I always learn a lot, and you always hear a variety of perspectives that maybe you didn't expect to. One thing that has really struck me as I've been reporting on this is that there's such a variety of comfort levels with these tools. You have tech companies selling them. You see this a lot with ambient listening and clinical documentation tools. They're like, "Clinicians love this. They can't get enough." And then you'll talk to a health system and they're like, "We struggled to adopt this because our doctors were leery of it and culture change was such an issue."

And I think a lot of that is reflected in the ongoing conversations about governance. Even some executives at tech firms that I spoke to were relatively concerned about the lack of oversight from the federal government, which is almost surprising coming from the private sector. You don't normally have people in the private sector saying, "We want more oversight from Washington." But I think some people are concerned that the lack of a comprehensive federal roadmap could stifle innovation. Because if states step in to fill that void, we could see even more of a patchwork of legislation: "Okay, you're allowed to do this in California, but you can't do this in Nevada." If you're a company that offers an EHR for medical practices in both states, are you going to want to implement AI tools if you can only use them in half your market? Not necessarily. So, I think people's comfort with the current situation just depends on how averse they are to risk overall. That's just an interesting tension that I really don't see going away anytime soon.

And then I think the other thing that really struck me from the conference this year was that we've moved on a little bit as an industry from the whole 'will AI replace clinicians?' conversation, which is fascinating because that's all anyone was talking about back in 2023. And with agentic AI, which you and I touched on, we literally have a technology here that can potentially replace a member of a medical practice's administrative staff. Not a clinician, but we have a technology that can do a lot of the jobs that administrative staff in the provider setting are currently doing.

Then we also have, on the other hand, large language models that are being trained exclusively on medical data and are performing better than human doctors on licensing exams. So, I'm really not sure why this conversation has gone quiet. Maybe it's just not one the industry wants to have at the moment. But yeah, I thought that was really interesting about HIMSS this year.

Waddill: Yeah, it is curious that the administrative staff side of things is not making more noise, I guess, about potential replacement, but we'll see where that goes. And for the time being, it seems like, for the most part, we still need humans. As you said, the common phrase is human-in-the-loop. So, no matter what, we're not at the stage of full replacement yet.

Pifer: Definitely not. It's interesting too, because a lot of hospital executives, to head off concerns about people losing their jobs, talk about retraining. The idea is that instead of saying, "I can do the same job that artificial intelligence can do," the workforce of the future will need to say, "I can oversee 10 AIs that can do 10 people's jobs." Obviously, there's a whole host of issues with that, like who's going to pay for it? Those programs don't exist in a lot of areas. How is this going to affect people in less-funded hospitals? There's a lot up in the air.

Waddill: Yeah, maybe those are questions that will be addressed at HIMSS '26. We'll see. Well, Rebecca, thank you so much for coming onto the podcast. Unfortunately, that's all the time we've got for this episode, but hopefully we can have you back sometime to dig more into some of this stuff. But thank you for coming on.

Pifer: Yeah, happy to. Thank you, Kelsey.

Waddill: Listeners, thank you for joining us on Healthcare Strategies: Industry Perspectives. When you get a chance, subscribe to our channels on Spotify and Apple and leave us a review to let us know what you think of this series.

This is an Informa TechTarget production.
