White House Unveils Artificial Intelligence Bill of Rights

The White House has unveiled a 'blueprint' for an AI Bill of Rights, which outlines five protections Americans should have in the face of the rising use of these technologies.

The White House unveiled its Blueprint for an AI Bill of Rights this week, identifying five guidelines for the design, use, and deployment of automated and artificial intelligence (AI)-based tools to protect Americans from harm as these technologies become more widespread across industries.

The blueprint outlines five core principles: safe and effective systems, algorithmic discrimination protections, data privacy, notice and explanation, and human alternatives, consideration, and fallback. These are intended to serve as practical guidance for the US government, tech companies, researchers, and other stakeholders, but the blueprint is nonbinding and does not constitute regulatory policy.

The guidelines apply to AI and automated tools across industries, including healthcare, and are part of a larger conversation around the ethical use of AI.

Under the five principles, Americans should be protected from unsafe and ineffective systems; not face discrimination by algorithms, which should be used and designed in an equitable way; and be protected from abusive data practices via built-in safeguards and have agency over how their data is used.

Americans should also know when, how, and why an automated system is being used to contribute to outcomes that impact them and, where appropriate, be able to opt out of these systems in favor of a human alternative who can help remedy their problems.

The blueprint also provides a framework that applies to all automated systems with the potential to meaningfully impact the American public’s rights, opportunities, or access to critical resources or services. Healthcare falls under access to resources or services within this framework, meaning its protections would extend even to less technologically advanced tools that may not use AI at all.

“Considered together, the five principles and associated practices of the Blueprint for an AI Bill of Rights form an overlapping set of backstops against potential harms,” the document concludes. “This purposefully overlapping framework, when taken as a whole, forms a blueprint to help protect the public from harm. The measures taken to realize the vision set forward in this framework should be proportionate with the extent and nature of the harm, or risk of harm, to people’s rights, opportunities, and access.”

Ethical AI has become a major topic of conversation in recent years, particularly in the healthcare space, as AI and automated tools have become more common in medical research and clinical settings. However, the overarching conversation about medical devices and technologies having the potential to cause harm isn’t new.

In September, the FBI warned that unpatched and legacy medical devices, many of which do not use AI, can negatively impact a healthcare facility’s operational functions, patient safety, and data security.

AI tools can create similar issues. In an episode of Healthcare Strategies from earlier this year, Linda Malek, partner at Moses & Singer and chair of the firm’s healthcare, privacy, and cybersecurity practice group, discussed how healthcare AI technologies have various clinical applications but also pose a risk in terms of data privacy and security. These technologies also have the potential for algorithmic bias.

The Health Sector Cybersecurity Coordination Center (HC3) has also chimed in on this topic, outlining the cybersecurity implications of emerging technologies such as AI, 5G, and smart hospitals last month.

Despite these concerns, some stakeholders are working to make these technologies ethical and equitable.

Researchers are driving much of the innovation in this area, with a team from the Rand Corporation proposing a framework to ensure equitable algorithms in August and investigators at MIT’s Jameel Clinic focusing on applying AI and machine learning to improve healthcare in ways that are robust, private, and fair.

Other experts are advocating for an evidence-based AI development and deployment approach.
