AMA Establishes New Principles for AI Development, Deployment and Use

The American Medical Association has issued seven principles to support the development of equitable and responsible healthcare AI.

The American Medical Association (AMA) issued new principles to guide the development, deployment and use of augmented intelligence (AI) in healthcare, according to a press release shared with HealthITAnalytics.

The principles are designed to build upon existing policy on AI and bolster efforts to establish a governance structure for these technologies as they continue to advance.

The guidance also serves as an important component of the AMA’s advocacy strategy around health AI, which prioritizes the implementation of national governance policies to ensure that these tools are transparent, responsible, equitable, and ethical.

“The AMA recognizes the immense potential of health care AI in enhancing diagnostic accuracy, treatment outcomes, and patient care,” said AMA President Jesse M. Ehrenfeld, MD, MPH, in the press release. “However, this transformative power comes with ethical considerations and potential risks that demand a proactive and principled approach to the oversight and governance of health care AI. The new AMA principles will guide the organization’s engagement with the administration, Congress and industry stakeholders in discussions on the future of governance policies to regulate the development, deployment and use of health care AI.”

The seven principles, which were approved earlier this month by the AMA Board of Trustees, focus on mitigating risks to patients and clinicians while maximizing the benefits that AI could provide in healthcare.

The first principle is concerned with oversight, noting that the AMA recommends “a whole of government” approach to spur the implementation of health AI governance policies. However, this principle also indicates that non-government stakeholders can play an important role in oversight and governance.

The second principle focuses on transparency, which the AMA states is essential to establish the trust necessary to leverage health AI successfully. To this end, the principle dictates that transparency around information related to the design, development and deployment of these tools—such as possible sources of inequity—be mandated by law.

The third principle outlines how disclosure and documentation should be handled in the context of health AI, suggesting that there should be appropriate communication and documentation whenever these tools may directly impact patient care, care access, clinical decision making, and other aspects of care.

The fourth principle details the AMA’s approach to generative AI. This principle directs healthcare organizations to develop and adopt generative AI policies that anticipate and help manage the risks associated with the technology prior to its use.

The fifth principle focuses on privacy and security, building on existing Privacy Principles set forth by the AMA. The AMA emphasizes that patient privacy and data security must be a top priority in the development and deployment of health AI, stating that the developers of these tools must build with privacy in mind and that safeguards to protect against cybersecurity threats must be implemented.

The sixth principle details how bias must be mitigated in health AI to ensure equitable outcomes for patients. The AMA recommends that bias in these algorithms be proactively identified and mitigated to promote health equity.

The final principle discusses liability, indicating that the AMA plans to advocate “to ensure that physician liability for the use of AI-enabled technologies is limited and adheres to current legal approaches to medical liability.”

The principles also seek to address the responsible use of AI by payers for activities like benefit design, claim determinations, and coverage limit determinations. The press release states that the AMA is in favor of stronger regulatory oversight when payers use AI for these purposes.

Specifically, the AMA asserts that these technologies should not systematically withhold care from certain groups or reduce access to necessary care. Further, the press release notes that when payers utilize AI tools, efforts should be made to ensure that these systems do not eliminate human review of individual circumstances or override clinical judgment.

As healthcare AI advances and evolves while regulation and policy efforts lag behind, healthcare organizations, government entities, and researchers are working to establish best practices for AI development and use.

In a recent interview, Vijaytha Muralidharan, MBChB, MRCP, a clinical AI researcher in the Department of Dermatology at Stanford University, sat down with HealthITAnalytics to detail how her team developed the ACCEPT-AI framework, a set of recommendations to guide pediatric data use in health AI research.

During the conversation, Muralidharan discussed the challenges of pediatric data use, how she and her team developed ACCEPT-AI for use by researchers and regulators, and how the work could guide the development of similar principles for other vulnerable populations.
