Patient Privacy Concerns Emerge Over Facial Recognition Tech
Surveyed patients expressed concerns over data governance, usage, and patient privacy relating to facial recognition technology being used in medical research.
A person’s face is their most publicly identifiable feature, which explains why people may be wary of facial recognition technology’s healthcare applications in terms of patient privacy, data governance, and usage.
More than 70 percent of the over 4,000 patients surveyed indicated that they were “very” or “somewhat” concerned about data privacy when asked about a hypothetical precision health study that would use facial recognition technology to garner insights, according to a study published in PLOS One.
Facial recognition technologies can be used to diagnose certain medical conditions, assist patients with touch-free appointment check-ins, and deter healthcare fraud. DNA, EHR data, and biometric data, including facial imaging, are valuable assets in medical research, but patient privacy concerns are justified.
“Understandably, many in and out of the precision health research community wonder whether the measures taken to ensure responsible stewardship of facial imaging and imaging-derived data are appropriate and adequate,” the study explained.
Over 55 percent of survey respondents were equally concerned about the privacy of medical records, DNA, and facial images collected for precision health research.
While 39 percent of respondents said they would still be willing to participate in medical research involving that data, 30 percent of respondents said they were unwilling to participate due to privacy concerns, and an additional 30 percent were unsure.
A quarter of respondents said they would opt out of the DNA component of the study, and 22 percent said they would opt out of the DNA and facial imaging components. Some were even willing to pay to opt out of data collection.
When asked about three commonly used data governance practices, no option elicited a majority response. Respondents were asked to consider an “open science” scenario in which research resources were made available to as many researchers as possible, even if that meant additional risks to patient privacy.
The “gated science” governance method would make the resources available to only qualified researchers. A third option, identified as “closed science,” would make the data available to very few researchers, enabling the most patient privacy but limiting the data’s potential benefits to the medical community.
The “gated science” method was the most popular method, and the “open science” method was the least popular. Still, no option was overwhelmingly preferred or made patients feel particularly secure about their data privacy.
“While facial imaging has been an important part of specialized care (particularly pediatric genetics) for decades, biomedicine is changing, becoming much more anticipatory, occurring in digital spaces, involving larger and larger datasets containing data assets repurposed from diverse sources, and introducing new risks,” the study explained.
“While the U.S. public seems to have fairly high levels of trust in healthcare providers and researchers with use of facial imaging, we found that people are not necessarily fully supportive of expanded uses in healthcare settings and might even be uneasy about them. How healthcare professionals choose to proceed could have substantial impacts on levels of public trust in the profession generally.”
The findings indicated that patient perspectives on facial imaging vary widely, especially as the technology advances. Facial recognition technology shows promise for improving diagnoses and convenience within healthcare, but clinical researchers will have to address patient privacy concerns to gain valuable insights while ethically protecting patient data.