AI transparency mandates essential to protect private data

As AI use permeates industry, governments are updating laws to keep pace with the technology. AI transparency will be essential to ensure privacy and avoid risk to civil liberties.

AI is permeating almost every industry in America. AI hardware and software rely on complex algorithms, but the public often doesn't know how those algorithms work or how agencies use them to make critical decisions that affect us all.

As AI technology becomes more mainstream in areas such as law enforcement, automotive applications, smart home control and other municipal services, governments around the world are scrambling to update laws to keep pace with the technology while also protecting citizens' privacy. Many, however, remain unsure how AI will play out in the long run and have adopted a wait-and-see policy rather than update statutes now.

Compliance-mandated AI transparency

The United States has been slow to enact new federal policy, so individual states have begun to draft their own AI protections. Under most public records laws, the general public has access to government documents. However, requests to see invoices or emails about patented technology used to monitor civilians or automate routine tasks are often denied on the grounds that the material is proprietary. These rejections have left some Americans uneasy about what types of AI are being used and how.

In 2017, New York City passed a bill creating a task force to investigate "automated decision systems" that affect everything from placing students in public schools and assessing teacher performance to determining where a firehouse should be built and identifying Medicaid fraud. Unfortunately, the investigation yielded little helpful information. In November 2019, the task force issued a final report suggesting a centralized structure for agencies that use AI technologies so they can establish better management practices.

The AI transparency paradox

The problem with transparency around AI usage and patented technology is that the details may fall under trade secret laws, which in most states exempt them from public records disclosure. Many law enforcement agencies stress the need for secrecy when using AI in the field, arguing that criminals could otherwise find ways to circumvent the technology. Another obstacle is that, if the specifics of a particular piece of hardware or its algorithms are made public, hackers could potentially find ways to take over and control devices. It's a fine line between protecting citizens' rights and holding agencies accountable.

Although some would argue the point, AI transparency has downsides. Some examples include the following.

AI bias. AI algorithms are only as good as the people who build them, and almost everyone has biases that can end up in the final AI product. One example is Tay, Microsoft's Twitter chatbot, which users quickly trained to post racist messages. It's critically important that those who design the algorithms try to do so without bias and that different people then monitor and train those algorithms to keep the AI unbiased; one such audit is sketched after this list.

Risks to fundamental rights and civil liberties. As AI transparency laws pass and the truth comes out about who programmed these algorithms and how, some applications will no doubt be found on the wrong side of fundamental rights and civil liberties.

Endangerment. Much of the AI used in federal, state and local government is proprietary. If transparency laws require disclosing how the software is programmed, that could pose risks to security and public safety by helping cybercriminals infect systems with viruses or malware.
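To make the bias-monitoring step concrete, here is a minimal sketch of one common audit: comparing a model's positive-prediction rates across demographic groups, a check often called demographic parity. The function names, threshold and sample data are illustrative assumptions, not a mandated standard or any agency's actual method.

```python
# Minimal fairness-audit sketch: compare a model's positive-prediction
# rates across groups (demographic parity). All names are illustrative.
from collections import defaultdict

def positive_rate_by_group(predictions, groups):
    """predictions: iterable of 0/1 model outputs;
    groups: iterable of group labels (e.g., a protected attribute)."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += pred
    return {g: positives[g] / totals[g] for g in totals}

# Example: flag a disparity larger than some policy threshold.
rates = positive_rate_by_group(
    predictions=[1, 0, 1, 1, 0, 0, 1, 0],
    groups=["A", "A", "A", "B", "B", "B", "B", "B"],
)
disparity = max(rates.values()) - min(rates.values())
if disparity > 0.2:  # the 0.2 threshold is an assumption, not a legal standard
    print(f"Audit flag: demographic parity gap of {disparity:.2f}")
```

A check like this doesn't prove a system is fair, but running it routinely, by people other than the original developers, is one practical form the monitoring described above could take.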

AI certainly helps us in many ways, from suggesting music or movies we might enjoy to learning how we use a piece of software and automating routine tasks. AI is useful in everyday life and even in crime prevention. Yet most people take it for granted and don't question the larger picture, including the following issues.

Privacy. Personal privacy has become a prominent topic lately due to the rise in identity theft, data breaches, ransomware and hacking incidents. The question of privacy is paramount to the transparency of AI software and hardware. What information is collected, about whom, and how it is stored, used and shared are things the public deserves to know, as the sketch after this list illustrates.

Public records. As public records laws are currently structured, most documents and references to AI algorithms and devices are unavailable. The public is therefore left in the dark about how government agencies, law enforcement and the private sector use AI to assess situations and make policy decisions that can affect human lives. This needs to change so that these custodians are accountable to the taxpayers who fund their services.
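One lightweight way an agency could provide that kind of disclosure is a machine-readable data inventory published alongside each AI deployment. The sketch below uses assumed field names and an invented example record; it is not any jurisdiction's required format.

```python
# Sketch of a publishable data-inventory record for an AI system.
# Field names and the example values are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class DataInventoryRecord:
    system_name: str       # the AI system or device
    data_collected: list   # categories of personal data gathered
    data_subjects: str     # whose data is collected
    storage: str           # where the data lives and how long it is retained
    shared_with: list = field(default_factory=list)  # third-party recipients

record = DataInventoryRecord(
    system_name="Automated license plate reader",
    data_collected=["plate number", "timestamp", "location"],
    data_subjects="drivers passing fixed cameras",
    storage="encrypted database, 90-day retention",
    shared_with=["regional law enforcement network"],
)
print(record)
```

Publishing records like this answers the who/what/how questions above without revealing trade secrets, since it describes the data flows rather than the proprietary algorithms themselves.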

The question of how far to go with transparency laws for AI technologies is not a simple one. New York's law may set the stage for other states to build more accountability into AI systems and open lines of communication between those who use AI and those who are served by it.

About the author

Ben Hartwig is a chief security officer at InfoTracer. He authors guides on marketing and cybersecurity posture.
