Healthcare leaders launch Trustworthy & Responsible AI Network
The newly formed Trustworthy & Responsible AI Network aims to advance the quality and safety of artificial intelligence in healthcare.
Healthcare leaders came together recently to launch the Trustworthy & Responsible AI Network (TRAIN), a consortium created to explore and set standards for the safe application of artificial intelligence (AI) in healthcare.
As AI advances, proponents emphasize that the technology’s capabilities could transform the healthcare industry by improving efficiency, reducing costs and enhancing care delivery. But before AI can support improved health outcomes, development and evaluation standards must be created.
TRAIN is designed to help operationalize these standards to ensure that healthcare AI is both responsible and effective.
Members of the consortium will drive these efforts by sharing best practices for AI in healthcare, such as guidance on algorithm monitoring and management; enabling the registration of clinical AI via an online portal; providing tools to measure the impact of health AI deployment; and guiding the creation of a federated national outcomes registry to house insights into the safety and efficacy of these models.
Data and algorithms used by member organizations will not be shared among members or with any third parties, according to a Microsoft news release.
The network brings together healthcare and technology leaders from across the United States: AdventHealth, Advocate Health, Boston Children’s Hospital, Cleveland Clinic, Duke Health, Johns Hopkins Medicine, Mass General Brigham, MedStar Health, Mercy, Mount Sinai Health System, Northwestern Medicine, Providence, Sharp HealthCare, University of Texas Southwestern Medical Center, University of Wisconsin School of Medicine and Public Health and Vanderbilt University Medical Center, alongside Microsoft, OCHIN and TruBridge.
“When it comes to AI’s tremendous capabilities, there is no doubt the technology has the potential to transform healthcare. However, the processes for implementing the technology responsibly are just as vital,” said David Rhew, MD, global chief medical officer and vice president of healthcare at Microsoft, in the release. “By working together, TRAIN members aim to establish best practices for operationalizing responsible AI, helping improve patient outcomes and safety while fostering trust in healthcare AI.”
Members further indicated that collaboration allows TRAIN to effectively meet the challenges associated with AI in healthcare.
“Even the best healthcare today still suffers from many challenges that AI-driven solutions can substantially improve. However, just as we wouldn’t think of treating patients with a new drug or device without ensuring and monitoring their efficacy and safety, we must test and monitor AI-derived models and algorithms before and after they are deployed across diverse healthcare settings and populations, to help minimize and prevent unintended harms,” stated Peter J. Embí, MD, MS, FACP, FACMI, FAMIA, FIAHSI, professor and chair of the Department of Biomedical Informatics (DBMI) and senior vice president for research and innovation at Vanderbilt University Medical Center. “It is imperative that we work together and share tools and capabilities that enable systematic AI evaluation, surveillance and algorithmovigilance for the safe, effective and equitable use of AI in healthcare. TRAIN is a major step toward that goal.”
These efforts reflect ongoing work from national stakeholders to guide AI deployment in healthcare.
In a January interview with HealthITAnalytics, Michael McGinnis, MD, MA, MPP, the Leonard D. Schaeffer executive officer of the National Academy of Medicine (NAM), discussed how the NAM Artificial Intelligence Code of Conduct (AICC) could help establish a national architecture to support responsible health AI use.