Healthcare industry sees increased investment in generative AI, LLMs
Roughly one-fifth of technical leaders in the healthcare and life sciences industries report over 100% growth in budgets allocated for generative AI projects.
John Snow Labs recently published findings from its first Generative AI in Healthcare Survey, which revealed that healthcare and life sciences organizations are increasingly investing in generative artificial intelligence (AI) projects.
The survey, conducted by Gradient Flow, sought to explore trends, tools, behaviors and priorities around the use of generative AI in these sectors, polling 304 participants across job roles and organizations of various sizes.
Of these, 196 individuals reported being employed by organizations actively engaged in evaluating, utilizing or deploying generative AI. Roughly 47 percent of respondents work for large organizations – defined as those with over 5,000 employees – and 37 percent of respondents hold leadership positions in their organizations.
The survey found that adoption of generative AI varied significantly across roles and organization size, with larger organizations more likely to be evaluating use cases and leadership reporting higher adoption rates than those in other roles.
Approximately 35 percent of respondents reported not actively considering generative AI as a business solution, while 21 percent are actively evaluating use cases. Just under 20 percent are experimenting with and developing these tools, but have yet to put them into production.
In terms of deployment, 14 percent of respondents indicated that their organizations are in the early stages of AI adoption, characterized as a first solution running in production, while 11 percent state that they are in the mid-stage, with multiple solutions now running in production.
The report also showed that a majority of organizations have increased their generative AI budgets significantly in the last year as awareness around the capabilities and potential of these tools grows.
Of all respondents, just under a quarter indicated that their budgets remained roughly the same, while 34 percent reported a 10-50 percent budget increase, and an additional 22 percent saw a 50-100 percent hike.
Here, those in leadership positions saw the most significant jumps, with 13 percent reporting a 100-300 percent rise and 8 percent witnessing budget increases of over 300 percent.
This additional funding is seemingly being used to support the adoption of small, task-specific language models optimized for specific use cases, rather than general-purpose large language models (LLMs).
Over a third of respondents indicated that their organizations are using healthcare- and task-specific language models, whereas only 24 percent and 21 percent reported using open-source LLMs and open-source task-specific models, respectively.
Proprietary LLMs accessed through cloud services and custom models saw lower adoption, at 18 percent and 11 percent, respectively, while single-tenant or on-premise proprietary LLMs had the least adoption at 7 percent.
The use cases for these technologies center on streamlining clinical workflows and improving patient communication. Twenty-one percent of participants noted that answering patient questions is the most common use case for these tools within their organizations, followed closely by medical chatbots at 20 percent and information extraction/data abstraction at 19 percent.
When evaluating generative AI tools, accuracy was a top priority for respondents, followed closely by security and privacy risk. Cost was considered the least important factor for all respondents.
Respondents emphasized that lack of accuracy, alongside the potential for major legal and reputational risks, are some of the biggest roadblocks as they pursue generative AI within their organizations.
To address these hurdles, many are investing in model enhancement strategies. The most common step to test and improve these tools noted in the report is human-in-the-loop workflows, which 55 percent of participants reported utilizing at their organizations.
Despite these efforts, the survey also showed that a significant number of generative AI projects have yet to be robustly tested against key requirements such as bias and explainability.
Fairness was the most commonly tested requirement for LLM solutions, cited by 32 percent of respondents, followed by explainability and private data leakage at 27 percent each. Roughly a quarter of participants reported testing for bias, hallucinations/disinformation and freshness. Sycophancy and prompt injection were the least tested requirements, at 8 and 11 percent, respectively.
The recent hype around generative AI across industries has sparked increased interest in the use of these technologies, and healthcare is no exception. But questions around these tools still remain and perceptions around their deployment are ever-evolving.
This week, Wolters Kluwer Health published findings showing that physicians are increasingly optimistic about the use of generative AI in healthcare, with a majority reporting that they are receptive to adopting these models if key concerns around their deployment are addressed.
The survey revealed that 68 percent of respondents were more likely than they were a year earlier to view the use of generative AI in healthcare as beneficial. In particular, respondents saw strong potential for the technology to help overcome challenges related to workforce shortages and clinician burnout.
However, transparency and patient readiness were major concerns for many surveyed.