Coalition Shares Progress, Plans to Issue Responsible Health AI Guidelines
The Coalition for Health AI published papers defining core principles of responsible artificial intelligence testability, usability, and safety as part of its work on mitigating bias and promoting equity.
The Coalition for Health AI (CHAI) has published papers related to the second round of workshops in its Bias, Equity, and Fairness series for public feedback and announced plans to share recommendations for responsible artificial intelligence (AI) use in healthcare.
CHAI launched earlier this year as an initiative for health systems, organizations, AI and data science experts, and other healthcare stakeholders to advance AI in healthcare while addressing health equity and bias, according to the press release. The coalition serves to identify where standards, best practices, and guidance need to be developed for AI-related research, technology, and policy.
“Application of AI brings a tremendous benefit for patient care, but so is its potential to exacerbate inequity in healthcare,” said John Halamka, MD, president of Mayo Clinic Platform and cofounder of the coalition, in the press release. “The guidelines for ethical use of an AI solution cannot be an afterthought. Our coalition experts share commitment to ensure patient-centered and stakeholder-informed guidelines can achieve equitable outcomes for all populations.”
CHAI’s membership currently includes Change Healthcare, Duke AI Health, Google, Johns Hopkins University, Mayo Clinic, Microsoft, MITRE, Stanford Medicine, University of California (UC) Berkeley, and UC San Francisco.
Earlier this year, the coalition hosted a two-day workshop consisting of presentations, group discussions, and breakout sessions on the topics of Health Equity by Design; Bias and Fairness Processes and Metrics; and Impacting Marginalized Groups: Mitigation Strategies for Data, Model, and Application Bias. CHAI summarized these discussions in a topic paper published for public input.
The second round of workshops in CHAI’s Bias, Equity, and Fairness series focused on Testability, Usability, and Safety; Transparency; and Reliability and Monitoring. The associated topic paper is now available, and CHAI is accepting public feedback on it until October 14. The coalition will convene in mid-October to review the feedback and finalize its framework for responsible AI in healthcare. CHAI aims to share its recommendations by the end of the year.
These announcements come a week after the White House unveiled its Blueprint for an AI Bill of Rights, which identifies five principles for the design, use, and deployment of automated and AI-based tools to protect Americans from harm as the use of these technologies continues to grow in multiple industries, including healthcare.
“It is inspiring to see the commitment of the White House and U.S. Department of Health and Human Services towards instilling ethical standards in AI,” Halamka noted. “As a coalition we share many of the same goals, including the removal of bias in health-focused algorithms, and look forward to offering our support and expertise as the policy process advances.”