Top Health IT Analytics Predictions, Priorities for This Year
Health IT analytics and artificial intelligence experts say that healthcare organizations should focus on AI governance, transparency, and collaboration in 2024.
As health systems work to address challenges caused or worsened by the COVID-19 pandemic, such as health inequity and chronic disease management hurdles, many are turning to data analytics and artificial intelligence (AI) tools.
However, recent questions around safe and ethical healthcare AI — including how health systems can implement it, the role of collaborative efforts, and how guidelines like President Biden's Executive Order on Safe, Secure, and Trustworthy AI will influence policy and regulation — have presented healthcare organizations with additional factors to consider before moving forward with their analytics priorities.
Leaders from Forrester, PwC, and Duke AI Health discussed their AI and analytics predictions for 2024 with HealthITAnalytics, including what healthcare stakeholders should prioritize in the new year.
GENERATIVE AI
Generative AI dominated much of the health IT conversation in 2023, and the industry’s interest in the technology is set to continue into 2024.
“Healthcare has been sitting on so much data that hasn't been all that useful to us, but only because it's unstructured,” explained Shannon Germain Farraher, MBA, BSN, senior analyst, Healthcare Digital Strategy & Innovation at Forrester. “There's been such a manual effort to create structure around it, and now we have gen AI that can ingest these structured and unstructured documents and text, [which] allows for wide-ranging and very far-reaching applications.”
A few of these use cases, including searching electronic health records (EHRs) for patient information, generating discharge summaries, searching for provider information on websites, optimizing clinical trials, and improving the prior authorization process, are already on the rise, she indicated.
Additional use cases focusing on efficiency gains, time savings, and improved clinical decision-making will also likely emerge in the new year.
However, the introduction of generative AI models in healthcare has also raised awareness about the risks associated with AI more broadly, such as trust, bias, and the technology’s impact on the clinical space.
“AI is not something new in healthcare; we've been doing this for many years. What is new [is] the introduction and the pervasiveness of the opportunity that gen AI and large language models present,” noted Thom Bales, principal, US Health Services Sector leader at PwC.
The widespread accessibility of generative AI tools has led to a kind of democratization, allowing individuals to experiment with AI and gain insight into its potential.
“Folks have equated it to the very first time a cell phone came out, or the very first time that the internet came out, and we weren't really quite sure how to use [either],” Bales stated. “It's a very apropos kind of comparison.”
Both cell phones and the internet evolved rapidly following their respective introductions, and the same is likely to occur with generative AI. But the transformative potential of these tools, particularly in the healthcare industry, will require stakeholders to invest heavily in governance frameworks to reduce risk.
“[Generative AI] raised the awareness around risk and governance in a way that prior machine learning and artificial intelligence really had not,” Bales said.
While he acknowledged that there is significant potential for generative AI in the clinical space, he underscored that prior to implementation, healthcare organizations must step back and consider which use cases the technology is best suited for and how to govern it within those contexts. Further, clinicians and care teams must be trained to effectively and safely utilize generative AI tools.
AI GOVERNANCE AND TRANSPARENCY
In recent years, healthcare stakeholders have invested billions of dollars into AI research, development, and deployment, demonstrating the technology's perceived potential to transform the industry, Germain Farraher stated. However, Forrester insights from 2023 indicate that security and privacy are top priorities for healthcare leaders.
“What that leads into is that health systems and other organizations need a solid AI governance framework,” Germain Farraher said. “There's going to be legalities down the line because there's a lot of regulation coming out around AI right now. There's no signs of cyberattacks slowing down, especially when it comes to the vulnerable healthcare system. So, there's going to be this demand for AI platforms and infrastructure.”
She noted that healthcare organizations must engage in true AI governance to effectively monitor the security, bias, efficacy, and quality management of AI-assisted workflows rather than simply taking ownership of a platform or the infrastructure around it. Additionally, healthcare stakeholders will need to invest in filling talent gaps, acquiring new technologies, and securing third-party support where needed to successfully leverage AI tools.
Further, Germain Farraher indicated that the concept of “bring-your-own-AI” (BYOAI) gives healthcare stakeholders another reason to create an AI governance plan.
BYOAI refers to employees using their own AI tools at work because they view them as more efficient than those provided by their employers, a trend driven by the growing accessibility of AI tools, especially generative AI.
While Germain Farraher stated that there are likely to be fewer “BYOAI” efforts in healthcare due to the sensitive nature of health data, thinking about employees and their potential use of AI on the job could help healthcare organizations avoid pitfalls.
Even so, she underscored that these technologies are assistive and that the human component of the healthcare experience must remain a key focus of ongoing AI efforts.
“Technology is not the ultimate liberator, and this is basically the clinician's view of technology. They're not sitting around waiting for the promises of hyped tech. They've lived through EHR integration over the past decade, and that was enough to make them a little jaded, but also realize that tech is not just going to overhaul burnout. They know better. Their daily grind is so different than the conversation that chief technology officers have on a daily basis,” she said.
Some health systems have begun to develop their own AI governance frameworks and are leading the charge for health AI oversight.
Duke Health is one of these organizations, and its Algorithm-Based Clinical Decision Support (ABCDS) Oversight program was designed to oversee, evaluate, and monitor all algorithms deployed in the health system. In 2022, the health system became one of the founding members of the Coalition for Health AI (CHAI), a group of academic, industry, and government partners working to ensure that health AI is responsible, ethical, and trustworthy.
A key aspect of these efforts is promoting transparency and governance, which starts with ensuring that models are trained on high-quality data, according to Nicoleta J. Economou, PhD, director of Governance and Evaluation of Health AI Systems at Duke AI Health and ABCDS Oversight director at Duke Health.
“The models are as good as the data that we feed into them, so the quality of the data is something that I think is a huge challenge,” she stated. “There are tools that you can use in order to get that right, but it's definitely a requirement in order to ensure that you are developing a model properly.”
She further noted that health AI currently exists in a regulatory gray area, making frameworks for transparency, accountability, bias evaluation, and clinical impact assessment crucial to the successful use of health AI.
Economou expressed optimism that the healthcare industry is moving in the right direction in terms of AI transparency and governance but noted that further government mandates are needed to ensure that these technologies are leveraged responsibly.
To that end, she noted that ongoing collaboration among industry stakeholders and the federal government will play a vital role.
INFRASTRUCTURE AND COLLABORATION
Disparities in tech maturity at healthcare organizations are a major hurdle to increased collaboration and AI innovation, as health systems with the resources to invest in these areas will continue to advance and reap the benefits, while smaller organizations may not benefit at the same pace.
“Like many things, generative AI and AI overall require resources to master them,” Bales explained. “In a world of finite resources, that therefore positions [healthcare organizations] that have those resources, which tend to be our larger organizations today, to disproportionately benefit from this.”
This phenomenon creates opportunities for larger organizations to build AI ecosystems and collaborate with others in similar positions. Thus, organizations already positioned to take advantage of health AI will do so, potentially compounding their advantage over time.
As healthcare organizations progress along the digital maturity scale, more traditional health systems may not have the resources to do the same, which could increase the technological divide in the healthcare industry.
Some of the innovations and successes driven by the larger organizations may “trickle down” to smaller, more traditional organizations as time goes on, Germain Farraher indicated, but not enough to prevent some adverse impacts.
“Employees are going to feel that, operational efficiency’s going to take a hit, and then, of course, there’s going to be the consumer experience that is also going to suffer,” she noted.
Focusing on the human aspect of healthcare – the patients, members, and healthcare workforce – can help organizations bridge the digital maturity gap. Leadership must be mindful of this as they embark on digital transformation efforts.
“For some of the smaller health systems, I think we're going to need to also find ways to resource them appropriately so that they can also streamline and eliminate clinician burnout,” Economou said.
But it’s the market differentiators with strong, proven AI infrastructure that are likely to drive much of the innovation in the health AI space.
“That gap among health systems is something that we definitely have to keep an eye on in 2024 because these large academic-backed health systems are going to keep progressing,” Germain Farraher said. “I think this year, and maybe the next few years, we're going to start to see market winners and differentiators with proven accuracy and proven security.”
Economou emphasized that building a strong AI infrastructure requires a cultural shift within an organization to think more broadly about AI best practices.
“When we think about AI, it's not only AI, it's also, ‘How are we presenting these AI models in the interface?’ Then, the interface and the model become essentially an AI system, an AI technology. So, then best practices of the software development life cycle would apply,” she explained.
She further indicated that expertise from data scientists, clinicians, and other technical and end-user stakeholders must be brought together within the health AI infrastructure to ensure the tool is effective, equitable, and safe.
“We also want to make sure that we have multiple stakeholders involved… [that we] are hearing their concerns, and are making sure that they're educated about AI. Because our exercise here is not only to raise awareness but also enhance the credibility of some of these AI technologies,” Economou said.
Germain Farraher echoed this, emphasizing the need for strong collaborations to drive health AI innovation.
“[Healthcare organizations] want to get the right people around the investment, and you want to make sure to clear the smoke because there's a lot of hype around AI right now,” she stated. “Before you approach a vendor, you really want to know who your healthcare organization is, what your needs are, and then approach vendors, as opposed to looking at a vendor saying, ‘Hey, what can you do for me?’ You want to go in with the right mindset of, ‘Can you meet my needs?’”