Senator Calls on Google to Ensure Ethical Deployment of Health AI

In a letter to Google CEO Sundar Pichai, US Sen. Mark R. Warner raises concerns about transparency, patient privacy, and ethical guardrails related to Med-PaLM 2.

This week, United States Senator Mark R. Warner (D-VA), Chairman of the Senate Select Committee on Intelligence, sent a letter to Google CEO Sundar Pichai citing concerns about the company’s deployment of Med-PaLM 2, a large language model (LLM) tuned for use in healthcare.

In April, Google shared that Med-PaLM 2 would be available on a limited basis for select healthcare customers to test, during which time users would provide feedback on the model and its potential healthcare use cases.

The announcement came amid significant hype surrounding generative artificial intelligence (AI). Healthcare has not been immune to this hype, but many clinicians and researchers have come forward to call for a measured, cautious approach to applying the technology in clinical settings.

Warner’s letter highlights some of the most common concerns about the use of generative AI in healthcare, calling for increased transparency, patient privacy protections, and ethical guardrails.

“While [AI] undoubtedly holds tremendous potential to improve patient care and health outcomes, I worry that premature deployment of unproven technology could lead to the erosion of trust in our medical professionals and institutions, the exacerbation of existing racial disparities in health outcomes, and an increased risk of diagnostic and care-delivery errors,” Warner wrote.

In June, Google Cloud and Mayo Clinic launched a collaboration aimed at transforming healthcare with generative AI. In July, Google Cloud partnered with healthcare technology company CareCloud to help bolster operational efficiency and digital transformation for small- and medium-sized healthcare providers using the technology.

In his letter, Warner critiques these efforts, alleging that Google and other large technology companies are racing to develop and deploy healthcare AI models in an attempt to capture market share following the launch of OpenAI’s ChatGPT, taking significant risks in the process.

“Numerous media outlets have reported that companies like Google and Microsoft have been willing to take bigger risks and release more nascent technology in an effort to gain a first mover advantage,” the letter continues. “In 2019, I raised concerns that Google was skirting health privacy laws through secretive partnerships with leading hospital systems, under which it trained diagnostic models on sensitive health data without patients’ knowledge or consent. This race to establish market share is readily apparent and especially concerning in the health care industry, given the life-and-death consequences of mistakes in the clinical setting, declines of trust in health care institutions in recent years, and the sensitivity of health information.”

Warner further expressed concerns about inaccuracies in the Med-PaLM 2 model, where the tool is being tested, what data sources are being used in its training and testing, what steps Google has taken to ensure the model doesn’t perpetuate bias, and to what extent patients are aware of and can refuse the use of AI in their care.

Based on these concerns, the letter requests answers to 12 questions from Pichai and Google. These queries touch on what data sources Med-PaLM 2 learns from, patient consent and agency, transparency around how the tool is being rolled out, privacy protections, what guardrails are in place to prevent over-reliance on the model’s outputs, and which hospitals are using Med-PaLM 2.

“It is clear more work is needed to improve this technology as well as to ensure the health care community develops appropriate standards governing the deployment and use of AI,” the letter stated.
