AI Voice Assistants Insufficient for Layperson CPR Instructions
Researchers have demonstrated that layperson CPR directions provided by artificial intelligence voice assistants were often inconsistent or irrelevant.
Researchers from Mass General Brigham, New York’s Albert Einstein College of Medicine, and Boston Children’s Hospital demonstrated that artificial intelligence (AI) voice assistants frequently provided low-quality layperson cardiopulmonary resuscitation (CPR) instructions in a recent study published in JAMA Network Open.
Layperson CPR is associated with a two- to four-fold increase in survival, and it is performed in many out-of-hospital cardiac arrest cases, the research team indicated. Emergency dispatchers can provide bystanders with CPR directions, but this is not always feasible because of limited availability of emergency services, language barriers, poor audio quality, call disconnection, fear of law enforcement, and perceived costs.
AI voice assistants are becoming more common in the United States, and some are being used for healthcare needs. Therefore, the researchers hypothesized that these tools could act as a source of accessible CPR instructions.
To test this, the research team evaluated the out-of-the-box abilities of Amazon’s Alexa, Apple’s Siri, Google Assistant (on a Google Nest Mini), and Microsoft’s Cortana, alongside the large language model (LLM) ChatGPT, to provide appropriate CPR directions.
For each tool, the researchers posed eight verbal or written queries related to how to perform CPR. The quality of each response was then rated by two board-certified emergency medicine physicians.
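The paper does not publish its query scripts, but the ChatGPT arm of this kind of evaluation is straightforward to automate. Below is a minimal sketch using the OpenAI Python client; the example queries and the rating workflow noted in the comments are illustrative assumptions, not the study's actual instruments.

```python
# Illustrative sketch: send layperson CPR queries to an LLM and collect
# the responses for later physician rating. The query wording here is
# hypothetical, not the eight queries used in the study.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

QUERIES = [
    "How do I perform CPR?",
    "What do I do if someone isn't breathing?",
    "Help, someone collapsed and has no pulse.",
]

def collect_responses(queries):
    """Send each query once and return (query, response_text) pairs."""
    results = []
    for q in queries:
        reply = client.chat.completions.create(
            model="gpt-3.5-turbo",
            messages=[{"role": "user", "content": q}],
        )
        results.append((q, reply.choices[0].message.content))
    return results

if __name__ == "__main__":
    for query, answer in collect_responses(QUERIES):
        # In the study, responses were rated by two board-certified
        # emergency medicine physicians; here we just print them.
        print(f"Q: {query}\nA: {answer}\n")
```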
Out of the 32 responses from the four voice assistants, 59 percent were related to CPR, and 28 percent suggested calling emergency services. Approximately 34 percent provided verbal or textual CPR directions, but only 12 percent delivered the instructions verbally.
Conversely, ChatGPT provided relevant information for 100 percent of queries and textual CPR directions for 75 percent of queries.
Of the 17 responses that gave CPR instructions, 71 percent detailed hand positioning, 47 percent indicated compression depth, and 35 percent specified the compression rate.
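For readers keeping track of the denominators, the rounded percentages above are consistent with whole-number response counts. The back-calculation below is a quick sanity check; the counts are inferred from the reported percentages, not quoted from the paper.

```python
# Inferring approximate response counts from the rounded percentages
# reported above. All counts here are back-calculated assumptions.
va_total = 32        # 4 voice assistants x 8 queries each
instruct_total = 17  # responses that gave any CPR instruction

checks = {
    "CPR-related (59% of 32)":       round(0.59 * va_total),        # ~19
    "Suggested calling EMS (28%)":   round(0.28 * va_total),        # ~9
    "Hand positioning (71% of 17)":  round(0.71 * instruct_total),  # ~12
    "Compression depth (47% of 17)": round(0.47 * instruct_total),  # ~8
    "Compression rate (35% of 17)":  round(0.35 * instruct_total),  # ~6
}
for label, n in checks.items():
    print(f"{label}: ~{n} responses")
```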
Nearly half of all responses were unrelated to CPR and “grossly inappropriate,” the researchers stated. The findings also highlighted that laypeople using a voice assistant for CPR instruction may fail to find appropriate information or experience care delays as a result.
The research team indicated that bystanders should prioritize contacting emergency services over relying on a voice assistant where possible.
The researchers also suggested that voice assistants have the potential to help provide CPR directions in the absence of emergency services, but stated that the tools need significant improvement.
“[Voice assistants] need to better support CPR by: (1) building CPR instructions into core functionality, (2) designating common phrases to activate CPR instructions, and (3) establishing a single set of evidence-based content items across devices, including prioritizing calling emergency services for suspected cardiac arrest,” the authors explained. “The technology industry could partner with the medical community and professional societies to standardize [voice assistant] support for CPR instruction.”
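To make the authors’ three recommendations concrete, here is a hypothetical sketch of what a built-in, standardized CPR intent might look like inside a voice assistant. The trigger phrases, step wording, and class structure are illustrative assumptions, not an existing vendor API or a professional-society specification; the instruction content reflects widely taught hands-only CPR guidance (about 2 inches deep, 100 to 120 compressions per minute, call emergency services first).

```python
# Hypothetical sketch of a standardized CPR intent, along the lines the
# authors recommend. Names and wording are illustrative, not a real API.
from dataclasses import dataclass, field

@dataclass
class CPRIntent:
    # (2) Designated common phrases that activate the instructions.
    trigger_phrases: list[str] = field(default_factory=lambda: [
        "how do i do cpr",
        "help, someone is not breathing",
        "someone collapsed and has no pulse",
    ])
    # (3) A single evidence-based content set, (1) shipped as core
    # functionality and led by a prompt to call emergency services.
    steps: list[str] = field(default_factory=lambda: [
        "First, call emergency services (911 in the US) or have someone nearby call.",
        "Place the heel of one hand on the center of the chest and your other hand on top.",
        "Push hard and fast: about 2 inches deep, at 100 to 120 compressions per minute.",
        "Keep going until help arrives or the person starts to respond.",
    ])

    def matches(self, utterance: str) -> bool:
        """Return True if the utterance contains a designated trigger phrase."""
        text = utterance.lower()
        return any(phrase in text for phrase in self.trigger_phrases)

    def respond(self) -> str:
        """Return the full instruction script, EMS call first."""
        return " ".join(self.steps)

intent = CPRIntent()
if intent.matches("Hey, help, someone is not breathing!"):
    print(intent.respond())
```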
While AI voice assistants may not be ready to help provide CPR guidance, they do show promise in other healthcare use cases.
In 2021, research from Klick Applied Sciences evaluated the performance of multiple voice assistants when asked to provide medication information for the 50 most commonly dispensed brand-name and generic medications in the US.
The study was a follow-up to 2019 research assessing each tool.
The results showed that Google Assistant provided the most accurate search results for drug names.
However, between the two study periods, Google Assistant’s accuracy remained mostly stable, while Siri and Alexa improved significantly.