
Most patients do not trust health systems to use AI responsibly
Patient trust in health systems to use AI responsibly and protect patients from AI-related harm is low, indicating the need to improve health system trustworthiness.
New research shows that patients have low trust in health systems' ability to use AI responsibly and protect patients from AI-related harms.
The study, published in JAMA Network Open, comes amid uncertainty in AI regulation, following President Donald Trump's rescission of an executive order mandating safe and trustworthy AI development and AI-focused staff cuts at the FDA. Healthcare stakeholders are increasingly taking steps to self-regulate health AI development, including through consortiums like the Trustworthy & Responsible AI Network (TRAIN).
The study authors from the University of Minnesota School of Public Health noted that understanding patient perspectives is critical for designing trustworthy AI systems. They used data from a survey conducted among the National Opinion Research Center's probability-based AmeriSpeak Panel from June to July 2023. They examined responses to survey questions asking whether patients trusted their health system to use AI responsibly and ensure that an AI tool would not harm them.
Of the 2,039 respondents included in the study, 51.2% were female, 63.1% were white, 17.4% Hispanic, 12.1% Black and 4.9% Asian.
Researchers found that the mean score for general patient trust in health systems was 5.38 on a scale of zero to 12, with 12 indicating the highest trust. Most respondents reported low trust in their health system's ability to use AI responsibly (65.8%) and ensure that an AI tool would not harm them (57.7%).
Further, researchers found that respondents with higher general trust in their health system were more likely to believe that the system would use AI responsibly and protect them from AI-related harm. Predictably, patients who had experienced discrimination while seeking care were less likely to trust health systems to use AI responsibly and protect patients from harm.
Female respondents were also less likely to trust their system to use AI responsibly than male respondents; however, there was no difference between the sexes in trusting that systems would protect them from AI-related harms.
Notably, neither health literacy nor AI knowledge was associated with trust in health systems' use of AI.
"Future work should examine this trust longitudinally and include additional validated measures of factors such as patient comfort, familiarity and experience with AI that could be associated with the outcome," researchers concluded. "Low trust in healthcare systems to use AI indicates a need for improved communication and investments in organizational trustworthiness."
The research adds to our understanding of patient trust in health AI technology amid rapid advancement.
In a 2024 Athenahealth/Dynata poll, 44% of respondents said their trust in health AI depends on its use. For instance, 40% said they'd like to see AI provide some diagnostic support to providers, while only 17% said they'd like to see AI augment or replace patient-provider interactions.
Additionally, most respondents want safeguards for AI use in healthcare, with 57% saying the government should issue laws and regulations advising providers on how to use AI. Half (50%) said similar regulations should apply to health technology companies.
Anuja Vaidya has covered the healthcare industry since 2012. She currently covers the virtual healthcare landscape, including telehealth, remote patient monitoring and digital therapeutics.