
Navigating healthcare AI innovation and data privacy laws
As healthcare AI adoption grows, stakeholders must balance patient data privacy with innovation, navigating evolving regulations along the way.
While healthcare AI adoption has the potential to enhance patient care, the tension between innovation and patient data privacy continues to grow.
Advanced models like generative AI rely on large datasets, often containing protected health information (PHI), to improve performance and reliability. However, under current privacy laws, the legal boundaries for using that data in AI development remain murky.
In this episode of Healthcare Strategies: Industry Perspectives, Adam Greene, partner at law firm Davis Wright Tremaine LLP, breaks down the legal complexities surrounding AI development and patient data. Recorded at HIMSS 2025, the episode outlines how healthcare stakeholders can navigate existing regulations, the role of AI governance and what a more pro-AI federal stance could mean for healthcare organizations going forward.
Transcript:
Adam Greene: We are probably going to see a lot more of a pro-AI approach with the new administration, and I think it'll fall even more so then to health organizations to make sure they're monitoring the risks that go along with that.
Kelsey Waddill: Hello, you're listening to Healthcare Strategies: Industry Perspectives, coming to you from HIMSS 2025 in Las Vegas. I'm Kelsey Waddill, a podcast producer at Informa TechTarget. Strong AI models for healthcare require a lot of patient data to develop. So how can healthcare organizations balance the need to protect patient privacy with the desire for more advanced technology to support healthcare stakeholders? I spoke with Adam Greene, partner at the law firm Davis Wright Tremaine LLP, to learn more about the legal landscape around AI development and protected health information. Here's what Adam had to say.
So, Adam, thank you so much for joining us on the Healthcare Strategies podcast. It's such a delight to have you on the show today.
Greene: Thanks for having me.
Waddill: And how's your HIMSS been so far?
Greene: Oh, great. Yeah. I'm part of the HIMSS Privacy and Security Committee, and it was great to actually get to meet people from that committee in person rather than on our normal virtual calls. And I just gave my presentation, so it's great to have that out of the way.
Waddill: Yeah, it's so beautiful to be back in person with people. And before we get into the actual conversation today, I'd love to just let our audience get a little taste of who you are, what you do.
Greene: Sure. I'm an attorney with Davis Wright Tremaine. I'm a partner in the Washington, D.C., office, and my practice focuses on health information privacy, security and breach response. I've been with DWT for about 13 years. I work with healthcare systems, health plans, technology companies and others. Before that, I was a regulator with the U.S. Department of Health and Human Services working on HIPAA, first in its Office of the General Counsel and then in the Office for Civil Rights.
Waddill: Awesome. Amazing. Well, you're definitely the right person for this question then. So, I want to start us out a little bit broad and then we're going to dig in a little further. But I'm curious if you could just walk us through a little bit about the patient data side and how that factors into AI development. And then we're going to talk a little bit about the policy side as well, but just generally speaking, how is patient data allowed to be used for AI development right now?
Greene: Sure. So obviously, with ChatGPT, generative AI has really caught everyone's headlines and attention over the past few years, but it's actually not that new, and it's been used in healthcare for quite some time now. There have been long-running studies with respect to AI potentially being able to detect tumors in radiology images to assist radiologists, for example, as well as the potential use of AI to improve clinical decision support for physicians — that sort of thing. There's been experimentation with AI automating the process of taking clinical notes so that doctors spend less of their time typing into electronic health records. All of this, though, requires appropriate input.
AI cannot detect tumors unless it has a large amount of protected health information where it can see tumors and learn what is and is not one, and it cannot automate processes like filling out EHR clinical notes without access to the data. AI is very data-hungry with respect to protected health information. And part of the challenge here is that the privacy rule under HIPAA was finalized back in December 2000 — so, at this point, almost 25 years ago — and hasn't really been fundamentally updated, certainly not with respect to AI.
There are a lot of tools and permissions under HIPAA, things like healthcare operations and research. It's just that how they potentially apply to AI is something we have not received any guidance on. So, for example, is developing AI... We think of it as research and development from a commercial standpoint. Does that qualify as research under HIPAA? There are arguments, “yes”; there are arguments, “no.” We don't really have any guidance. I think it's pretty clear that you can use AI to help the healthcare operations of a covered entity, to allow a hospital system to reduce costs or improve the quality of its care. But at what point are you no longer primarily doing it for the healthcare operations of the hospital, but rather doing it more for other purposes that may not be permissible under HIPAA, like just developing your own commercial products? That's where things can get really fuzzy, and the line can be unclear.
Waddill: Yeah, that makes a lot of sense. Obviously, we've been waiting for something clear for a while, but as we are currently operating under HIPAA and the PHI regulations as they stand, could you distill for us a little bit how that impacts the process of AI development? I'm sure that's a very broad question, and you've touched on a couple of things there, but how has that either hindered, or maybe even helped or protected, the process as we move into the future?
Greene: Yeah, well, knowing that there's a framework, I think, instills in everyone a greater level of confidence with respect to the protection of their data. So, I think patients feel good about the fact that there is HIPAA and other laws protecting their data. I think the downside, especially where there's legal ambiguity, is that regulation does tend to impede innovation to some degree. With AI development, there's structured data, and being able to de-identify structured data is not that complicated, and that de-identified data can then be used for AI development. Where I think we see a lot more challenges is unstructured data, like clinical notes, where it's just free text of a physician writing down information about their encounter. That can be very hard to de-identify. There are no perfect automated solutions to do so. It can take a lot of manpower, and without the data being de-identified, it becomes a lot less clear to what extent that information can be used to improve AI.
Waddill: Mm-hmm.
Greene: There are arguments that can be made that, for example, going back to the example of the diagnostic AI that may look at images and identify tumors, that's a classic treatment purpose. And there's an argument that using the PHI to improve that service is part of delivering the service and part of treatment. And so, there's an argument that could be made, “That's perfectly fine under HIPAA,” but others might say, “No, there's a really big distinction between what's necessary to provide the service versus improving the service,” and that “HIPAA does not necessarily allow for improving the service.”
Until we get clarity, ideally clarity that is favorable to innovation, I think there are always going to be some concerns and some organizations that are going to be very risk averse, and therefore less use of PHI to innovate and to develop and improve AI.
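To make Greene's structured-versus-unstructured distinction concrete, here is a minimal, purely illustrative Python sketch of Safe Harbor-style de-identification for a structured record. This is an editorial example rather than anything from the interview: the field names and record layout are hypothetical, and a real pipeline would need to cover all 18 Safe Harbor identifier categories and be validated before any data is treated as de-identified.

```python
# A minimal sketch of HIPAA Safe Harbor-style de-identification for a
# structured record. Field names are hypothetical; a real pipeline must
# handle all 18 Safe Harbor identifier categories.
from datetime import date

# Direct identifiers that are removed outright.
DIRECT_IDENTIFIERS = {"name", "ssn", "mrn", "phone", "email", "street_address"}

def deidentify(record: dict) -> dict:
    """Return a copy of record with Safe Harbor-style transformations applied."""
    clean = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}

    # Dates more specific than the year must go: keep only the year.
    if isinstance(clean.get("admission_date"), date):
        clean["admission_year"] = clean.pop("admission_date").year

    # ZIP codes are truncated to the first three digits (Safe Harbor also
    # requires suppressing them entirely for sparsely populated areas,
    # which is omitted here).
    if "zip" in clean:
        clean["zip3"] = str(clean.pop("zip"))[:3]

    # Ages over 89 are collapsed into a single "90 or older" category.
    if isinstance(clean.get("age"), int) and clean["age"] > 89:
        clean["age"] = "90+"

    return clean

# Example: clinical content survives; identifying detail is stripped or coarsened.
record = {
    "name": "Jane Doe",
    "mrn": "12345678",
    "zip": "20001",
    "age": 93,
    "admission_date": date(2024, 3, 14),
    "diagnosis_code": "C34.90",
}
print(deidentify(record))
# {'age': '90+', 'diagnosis_code': 'C34.90', 'admission_year': 2024, 'zip3': '200'}
```

Free-text clinical notes offer no such clean field boundaries, which is why, as Greene notes, unstructured data is so much harder to de-identify reliably.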
Waddill: And how are healthcare stakeholders navigating this space under the current structure? I'm thinking particularly of providers, since, as you mentioned, EHR data is very key to a lot of this, although payers house a lot of patient data as well. How are people currently dealing with this? And maybe there isn't a uniform approach, but...
Greene: It's a risk decision is what it comes down to. I think a lot of healthcare providers are very excited about everything that AI brings to the table and [what it] can potentially do to help patients. And when faced with regulatory ambiguity — as to, “Is this permissible under HIPAA?” — there's an argument there that it is, but there's some risk that a regulator could disagree. Some organizations will take that risk; others won't. I think many organizations are willing to accept some legal risk to move ahead because they have a strong belief that AI is the future of healthcare, and they want to be part of that.
Waddill: These decisions are housed within certain parts of the healthcare organization. But when it comes to this conundrum of trying to protect patient data while really wanting to see AI develop and mature, what would you say to those who are perhaps outside the scope of decision-making but are really intrigued by all of this, about how to be part of the solution?
Greene: Yeah. Well, I'm seeing more and more clients looking at AI governance and at really standing up a strong AI governance program. If you're a CEO, you're not necessarily going to be involved in the micro-decisions as to what specifically gets done, but what you can be involved in is making sure that there's a good AI governance program that has the right stakeholders. And so, representation from legal to say what is legally permissible under privacy laws and under the new AI laws that are coming out; representation from compliance to help operationalize that; [and] a PR team to ask whether, even if you're legally doing it right, you're comfortable with this ending up on the front page of your local newspaper. To the extent that there still are local newspapers.
Obviously, you want the technology people, the CIOs, the CMIOs, at the table. It's about making sure this isn't being done in a vacuum. The worst thing is when the business side just sees shiny new toys and pushes things unilaterally, and everyone else is along for the ride. Instead, you want to make sure there's a governance process where you have all the stakeholders, they all have appropriate voices, and business decisions are weighed against risk and that sort of thing.
Waddill: Excellent. Well, I always finish out these kinds of conversations by asking: Is there anything that I might not have touched on that you think is important to add to this conversation about balancing AI development and protecting patient data? Or anything that we did touch on that you think people should know a little bit more about, that we should dive into with our last couple of minutes here?
Greene: Well, I think it'll be interesting to see what happens under the new administration. Right now, under HIPAA, there are various regulatory issues that are being addressed and that are going to be priorities. There are recent amendments on reproductive healthcare privacy that are under attack. There's a notice of proposed rulemaking with respect to the security rule that is currently open for comment, and so I expect that HHS is going to be focusing on some of those fires first. But I think AI is probably at the top of the list afterwards as an area that they are very engaged in. Under the last administration, it was very much, “pro-AI, but we have to do so with a whole lot of concern about the risks.” And I think the new administration is much more, “AI needs to be unleashed. We need to be leaders in this space.” And so, in healthcare, I think we're probably going to see a lot more of a pro-AI approach with the new administration, and I think it'll fall even more so to health organizations to make sure they're monitoring the risks that go along with that.
Waddill: Yeah, that makes a lot of sense. And actually, on that note, I have one more question.
Greene: Sure.
Waddill: What advice would you give to healthcare organizations at this moment, when there's this expectation that there's going to be a lot of free rein with AI? That there's going to be less, rather than more, regulation? How should healthcare leaders approach that from an internal governance standpoint? You've just touched on that a little bit, about their responsibility in that case to lay the groundwork, but is there anything specific you would point out to them to prioritize as we're entering that kind of regulatory space?
Greene: Just, I think, always reminding yourself that the legal questions are key, but they're the beginning, not the end. Figuring out what you can legally do with AI should not be the last step, the thing that's decided when the business has already made this a priority. It should be one of the first steps, and then just a first step, followed by asking, “Does this align with our organization's mission? Do we feel like patients would be happy about this, would be comfortable with this? Does this pass the ‘ick’ test?” Once you've gotten past the legal questions, don't forget about those other questions. You want to be proud if your story is on the front page of the paper, never ashamed.
Waddill: Yeah, absolutely. Well, that is unfortunately all the time that we have for this conversation, Adam, but thank you so much for coming on Healthcare Strategies and for sharing your insights with our audience. Hopefully, we'll get to have you back at some point. Have a great rest of your time at HIMSS.
Greene: Well, thank you. You too. Thanks for having me.
Waddill: Thank you. Listeners, thank you for joining us on Healthcare Strategies: Industry Perspectives. When you get a chance, subscribe to our channels on Spotify and Apple and leave us a review to let us know what you think of this series. More industry perspectives are on the way, so stay tuned.
This is an Informa TechTarget production.
Hannah Nelson has been covering news related to health information technology and health data interoperability since 2020.