Training, Privacy Key to Artificial Intelligence Use Post-COVID
Artificial intelligence became a key public health tool during the COVID-19 pandemic, but its continued use will require increased training and privacy protections.
For artificial intelligence to serve as a public health tool after COVID-19, policymakers will need to increase the training and validation of these solutions and enhance data governance and privacy protections, according to a report from the American Association for the Advancement of Science (AAAS).
When the COVID-19 pandemic began, leaders sought to leverage artificial intelligence to triage patients, allocate resources, and improve surveillance efforts.
However, AAAS noted that the accelerated development of AI tools during the pandemic has made oversight challenging, if not impossible.
“At the onset of COVID-19, there was a clear demand for using AI to fight the pandemic. However, no one was looking at the entire picture of how AI was in fact deployed and what ethical or human rights questions were arising from their implementation,” said Jessica Wyndham, director of the AAAS Scientific Responsibility, Human Rights and Law Program and a co-author of the report.
“We wanted to see the implications of these selected applications, paying particular attention to underserved populations. We wanted to see what worked, what didn’t and what we could learn from that for any future health crises.”
The report highlights some of the technical and ethical concerns that could accompany the use of these AI applications after the pandemic has subsided. Contact tracing applications, for example, carry substantial implications for future use.
“Of particular significance from an ethics and human rights perspective are certain details of the implementation of the contact tracing applications, in particular, whether the application uses a centralized database, its broadcasting method, and the nature of participation (mandatory or voluntary),” the report authors stated.
To improve the use of AI-powered contact tracing applications, researchers and developers must ensure the scientific and technical validation of these tools.
“The technical idea behind contact tracing applications is sound and the applications ‘work’ in a way that they can indeed connect in time and space one person to another due to the proximity of their phone (in the example of a Bluetooth-based application),” the report stated.
“The relevant questions though should not be, ‘Can these applications identify a close connection between two telephones?’ but rather, ‘Are the results from these applications medically useful?’ In the case of contact tracing applications, there is a significant lack of research that serves to adequately answer that question.”
Before implementation, stakeholders need to measure and make public the actual false positive and false negative rates for identifying people in contact with infected individuals, as well as those who need to quarantine as a result of that contact.
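As a rough illustration of the reporting the authors call for, the Python sketch below computes those two rates from a hypothetical validation study; the counts and variable names are assumptions invented for illustration, not figures from the report.

```python
# Hypothetical counts from validating a contact tracing app against
# a ground-truth epidemiological investigation (illustrative only).
true_positives = 412   # app flagged a contact, exposure confirmed
false_positives = 138  # app flagged a contact, no exposure found
true_negatives = 9210  # app stayed silent, no exposure
false_negatives = 64   # app stayed silent, exposure confirmed

# The rates the report argues should be measured and made public
# before deployment.
false_positive_rate = false_positives / (false_positives + true_negatives)
false_negative_rate = false_negatives / (false_negatives + true_positives)

print(f"False positive rate: {false_positive_rate:.3f}")
print(f"False negative rate: {false_negative_rate:.3f}")
```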
Leaders will also need to quantify and make public how the Bluetooth signal strength is calibrated against the distance between two telephones.
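That calibration is commonly modeled with a log-distance path-loss formula mapping received signal strength (RSSI) to an estimated distance. The sketch below illustrates that standard model rather than the method of any particular app; the reference RSSI and path-loss exponent are assumed values that in practice vary by device and environment, which is precisely why the report wants the calibration made public.

```python
def estimate_distance_m(rssi_dbm: float,
                        rssi_at_1m: float = -59.0,
                        path_loss_exponent: float = 2.0) -> float:
    """Estimate phone-to-phone distance from Bluetooth signal strength
    using the log-distance path-loss model. The reference RSSI at one
    meter and the path-loss exponent are illustrative assumptions and
    must be calibrated per device and environment in practice.
    """
    return 10 ** ((rssi_at_1m - rssi_dbm) / (10 * path_loss_exponent))

# A weaker signal implies a larger estimated distance.
for rssi in (-55.0, -65.0, -75.0):
    print(f"RSSI {rssi} dBm -> ~{estimate_distance_m(rssi):.1f} m")
```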
In addition to contact tracing applications, the use of AI-driven medical triage solutions poses several concerns around bias and training. In particular, gathering the data necessary to adequately train these AI models is a major obstacle for healthcare stakeholders.
“Although the need for data sharing is understandable, the related privacy and ethical concerns call for a careful balancing act. In addition, the gathering and sharing of health and even biological data without patients’ consent has been historically abused. The issue of data sharing is important as it exposes deep mistrust in government, particularly in African American communities,” the report stated.
“This is important because an algorithm’s calculations will reflect the composition of the training set on which it bases those calculations and, depending on the context, the calculations can lead to erroneous conclusions when applied to datasets with different characteristics than the ones used for training.”
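The problem the report describes is often called dataset shift. The toy Python sketch below, using synthetic numbers invented for illustration, shows how a risk threshold calibrated on one population can produce a very different false alarm rate on a population whose baseline measurements are distributed differently.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "training" population: a biomarker whose high values are
# treated as high risk. All numbers here are illustrative.
train_healthy = rng.normal(loc=5.0, scale=1.0, size=10_000)
threshold = np.percentile(train_healthy, 97.5)  # flag the top 2.5%

# A second population whose healthy baseline is shifted upward, e.g.
# due to demographic or measurement differences absent from training.
shifted_healthy = rng.normal(loc=6.5, scale=1.0, size=10_000)

train_fpr = np.mean(train_healthy > threshold)
shifted_fpr = np.mean(shifted_healthy > threshold)

print(f"False alarms on the training population: {train_fpr:.1%}")
print(f"False alarms on the shifted population:  {shifted_fpr:.1%}")
```

By construction, the first rate is about 2.5 percent; the shifted population's rate comes out an order of magnitude higher, mirroring the report's warning about applying an algorithm to datasets with different characteristics than those used for training.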
To validate medical triage applications, researchers and developers will need to assess the strength of the evidence behind the algorithm used to conduct triage. This will help ensure these tools actually prioritize the patients most affected by the disease.
“The creation and implementation of independent software auditors should address the technical validity of the algorithms used for medical triage and provide an ethical assessment of the tools proposed, to ensure that this technical ‘certification’ is rooted in existing ethical frameworks,” the report said.
The use of AI beyond the pandemic will require careful consideration of both the technical validity and the ethics of these solutions.
“The human impacts of the AI-based technologies used in the context of the pandemic are potentially immense, be they at the individual scale in the context of medical triage, or the societal scale in the context of contact tracing,” the report concluded.
“The specific experiences of marginalized populations impacted by these technologies are inadequately documented, a gap that needs to be filled as lessons are drawn from the current crisis and the real potential exists for the continuation or redeployment of these tools in the future.”