Google Health shares generative AI, health equity research updates

Google Health detailed recent advances in its generative AI research, alongside efforts to improve health equity and disease detection.

Today at Google’s annual health event, The Check Up, the tech company shared updates regarding its artificial intelligence (AI) research, alongside its efforts to improve health equity and health information accessibility.

At last year’s event, Google Health introduced its large language model (LLM) Med-PaLM 2, a tool fine-tuned for the medical domain. In December, the company announced the launch of MedLM, a suite of foundation models that utilize generative AI to help healthcare organizations meet their needs and goals.

Today, Google shared plans to incorporate new modalities into existing models, research healthcare-tuned versions of the Gemini model, build a personalized health LLM and test the assistive capabilities of the Articulate Medical Intelligence Explorer (AMIE) model.

At HIMSS24 last week, the company announced that two new capabilities would be added to MedLM: MedLM for Chest X-ray and a task-specific application programming interface (API) called Condition Summary. At The Check Up, leadership shared that these initiatives are part of larger efforts to make healthcare generative AI multimodal and more effective.

This work is closely tied to ongoing research into Gemini – formerly Bard – the company's multimodal LLM. Google indicated that researchers are currently investigating how a healthcare-tuned version of the model will perform on complex tasks like X-ray and brain scan report generation.

Google Research is also partnering with Fitbit to develop a Personal Health Large Language Model built on Gemini. The new tool is designed to deliver personalized health and wellness coaching to users based on data collected from each individual’s Fitbit and Pixel devices.

Finally, Google will be pursuing research to better understand the potential of AMIE, the company’s research LLM optimized for clinical and diagnostic reasoning. To date, the model has ranked highly in terms of diagnostic accuracy, empathy and helpful explanation in simulated text-based consultations with patient actors. Moving forward, AMIE will be tested within a healthcare organization to gauge its utility for clinicians and patients.

Google Health also detailed its health equity efforts at The Check Up.

This week, the company published a research paper outlining a framework to help stakeholders determine whether a medical LLM is perpetuating biases. To that end, the paper also provides seven adversarial testing datasets known as EquityMedQA.

Google researchers have also built a framework for AI development to prevent the creation and reinforcement of bias in models. The Health Equity Assessment of Machine Learning performance (HEAL) works by evaluating the likelihood that an AI tool will perform equitably, which could prevent inequitable models from being deployed.

The company also released the Skin Condition Image Network (SCIN) in partnership with Stanford Medicine, an open-access dataset containing real-world dermatology images designed to provide a more diverse set of images reflecting various skin tones, ages, settings and condition severities.

Google further shared that it would advance its partnership with Apollo Radiology International in India, which aims to use AI to expand screening and improve early disease detection for conditions like breast cancer, lung cancer and tuberculosis (TB).

In addition to these announcements, The Check Up also highlighted Google’s work to help people access health information.

Leadership shared that a new capability for Google Lens unveiled last year – which allows users to take pictures of their skin and search for visually similar matches on the web – is now available in over 150 countries worldwide. The company is also working to make more high-quality healthcare images, like diagrams, available in visual search results to help make health information more accessible.

The company indicated that YouTube will continue its efforts to provide authoritative health-related content on the platform by using the AI dubbing tool Aloud to translate videos related to first-aid and chronic conditions from institutions like Mass General Brigham and Stanford Medicine.

Google’s announcements come as AI regulation in healthcare and life sciences remains a gray area in the United States.

Experts from the Pistoia Alliance, a non-profit that engages major pharmaceutical companies in pre-competitive collaboration, recently discussed survey findings indicating that only nine percent of life sciences professionals understand AI regulations, underscoring ongoing concerns around the deployment of these advanced technologies.
