WHO Releases Governance Guidelines for Generative Artificial Intelligence
New guidance from the World Health Organization outlines how stakeholders can ensure the appropriate use of large multi-modal AI models in healthcare.
The World Health Organization (WHO) has released new guidance on the governance of large multi-modal models (LMMs) – a type of generative artificial intelligence (AI) – in healthcare.
The publication contains over 40 recommendations for governments, technology companies, and providers to ensure that healthcare LMMs are used appropriately.
“Generative AI technologies have the potential to improve health care but only if those who develop, regulate, and use these technologies identify and fully account for the associated risks,” stated Jeremy Farrar, WHO Chief Scientist, in a news release. “We need transparent information and policies to manage the design, development, and use of LMMs to achieve better health outcomes and overcome persisting health inequities.”
WHO’s guidelines describe five potential applications for LMMs in healthcare: diagnosis and clinical care, including responding to patient messages; patient-guided use, such as using LMMs to explore symptoms and treatments; administrative and clerical tasks, like summarizing patient visits in the electronic health record (EHR); medical education, such as simulating patient encounters for students; and scientific research, including drug discovery and development.
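To make the administrative use case concrete, the sketch below shows what an LMM-backed visit summarization call might look like. It uses the OpenAI Python client purely as a stand-in for any LMM service; the model name, prompt, and note text are illustrative assumptions, and a real deployment would need the safeguards the guidance describes, such as de-identification of inputs and clinician review of outputs.

```python
# Minimal sketch: asking an LMM to draft a visit summary for the EHR.
# Assumes the OPENAI_API_KEY environment variable is set; the model name,
# prompt, and note below are illustrative placeholders, not WHO recommendations.
from openai import OpenAI

client = OpenAI()

visit_note = (
    "58yo M presents with 3 days of productive cough and low-grade fever. "
    "Lungs: scattered rhonchi. Dx: acute bronchitis. Plan: supportive care, "
    "return if symptoms worsen."
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "system",
            "content": "Summarize this clinical visit note in two sentences "
                       "for the patient record. Do not add information.",
        },
        {"role": "user", "content": visit_note},
    ],
)

# The draft summary would be reviewed by a clinician before entering the EHR.
print(response.choices[0].message.content)
```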
However, the guidance also highlights that stakeholders must consider LMMs’ potential alongside their risks. WHO emphasized that LMMs and other types of generative AI have been shown to produce inaccurate, false, incomplete, and biased outputs if they are trained on poor-quality data.
These risks can lead to significant patient harm and adverse outcomes if not mitigated effectively. The recommendations also underscore other potential risks that healthcare LMMs pose, including concerns around accessibility and affordability, cybersecurity vulnerabilities, and automation bias.
To combat these risks, the WHO guidance calls on healthcare providers, governments, technology companies, patients, and other stakeholders to engage with one another to guide the development and deployment of healthcare LMMs.
“Governments from all countries must cooperatively lead efforts to effectively regulate the development and use of AI technologies, such as LMMs,” said Alain Labrique, WHO Director for Digital Health and Innovation in the Science Division.
The guidelines posit that governments bear the primary responsibility for governing these technologies, outlining multiple actions government stakeholders should take to responsibly integrate LMMs into healthcare.
The first involves providing not-for-profit or public infrastructure that developers across the public, private, and not-for-profit sectors can use to build AI models, with access conditioned on users’ adherence to ethical standards.
Another recommendation describes the use of policy and regulation to ensure that healthcare LMMs and associated applications meet human rights standards and ethical obligations related to patient privacy, autonomy, and dignity.
The guidance also encourages governments to assign a regulatory agency to evaluate and approve LMMs intended for use in healthcare.
The publication further recommends independent third-party auditing and assessment of healthcare LMMs when they are deployed on a large scale to ensure that governance obligations are met. The results of these assessments should include information about the impacts the LMM has on patients – disaggregated by user type – and be made publicly available.
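As a rough illustration of what “disaggregated by user type” could look like in such an audit, the sketch below computes an accuracy metric per user group from a hypothetical interaction log. The schema, group labels, and metric are assumptions made for illustration, not part of WHO’s guidance.

```python
import pandas as pd

# Hypothetical audit log: one row per LMM interaction, recording the user type
# and whether an independent reviewer judged the model's output accurate.
audit = pd.DataFrame({
    "user_type": ["clinician", "patient", "patient", "administrator", "clinician"],
    "output_accurate": [True, False, True, True, False],
})

# Disaggregate the accuracy metric by user type so that impacts on each group
# are visible rather than hidden inside a single overall number. Reporting the
# per-group count alongside the metric flags groups too small to interpret.
report = audit.groupby("user_type")["output_accurate"].agg(["mean", "count"])
report.columns = ["accuracy", "n_interactions"]
print(report)
```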
The guidance also outlines recommendations for LMM developers.
Some of these guidelines involve engaging all direct and indirect stakeholders – healthcare professionals, patients, medical researchers, and other potential users – in the early stages of AI development to improve transparency and provide opportunities for feedback.
Others are concerned with ensuring that LMMs perform their tasks with the reliability and accuracy needed to positively impact healthcare outcomes. To that end, WHO indicates that technology developers should be able to both predict and understand any potential secondary outcomes of model deployment.
AI governance is an ongoing conversation among healthcare stakeholders, as the significant potential benefits of these tools come with equally significant risks. To address this, a host of healthcare organizations are working to develop robust governance strategies.
In a recent interview with HealthITAnalytics, Michael McGinnis, the Leonard D. Schaeffer Executive Officer of the National Academy of Medicine (NAM), discussed the organization’s Artificial Intelligence Code of Conduct (AICC) and its place in the current health AI governance puzzle.