
AI might not have rights, but it could pay taxes

Tax, liability and patent laws can't handle AI systems, which have grown steadily smarter. As AI becomes ubiquitous, the legal system may need to change to accommodate it.

Artificial intelligence systems shouldn't have rights, but they might have to pay taxes.

That's according to Ryan Abbott, professor of law and health sciences at the University of Surrey in Guildford, England.

During a virtual panel discussion on AI rights at Washburn University School of Law's symposium on the topic, Abbott said that while AI systems now "do the sorts of things people used to do," they don't have consciousness or morals, and thus don't deserve rights.

That thinking may change if researchers ever develop artificial general intelligence (AGI), also known as strong AI. AGI is a hypothetical machine capable of learning as well as a human.

Experts debate whether people could build a system like that. Still, if AGI were created, it could deserve humanlike rights, noted David Opderbeck, professor of law and co-director of the Gibbons Institute of Law, Science & Technology at Seton Hall University School of Law in Newark, N.J.

That's not to say governments shouldn't subject current AI systems to laws, however.

Taxes and patents

Tax laws, for example, don't currently take automated workers into account. While human employees contribute payroll and income taxes, an automated "employee" doesn't, Abbott noted.

Governments could lose out on quite a bit of income tax as AI becomes more prevalent and displaces more human workers. Granted, that argument holds only if displaced employees don't find other jobs. Abbott predicted that could happen if AI grows smarter at a rate that outpaces people's ability to learn new skills or find job training.

"Automation threatens our tax revenue," Abbott said, noting that the biggest sources of federal tax revenue in the U.S. are income and payroll taxes.

AI regulation, AI rights

If automated workers overwhelmingly displace human workers, governments may need to shift their reliance on income and payroll taxes to other revenue sources, or find a way to tax enterprises based on their use of automated employees.

While AI could in some sense be taxed, it could also benefit from patent and copyright protection. As AI becomes more intelligent and capable of creating original, or at least somewhat original, content, Abbott argued that copyrights and patents could eventually be awarded to AI systems or their creators.

That's off in the future, though. The U.S. Patent and Trademark Office ruled earlier this year that AI systems cannot be credited as an inventor in a patent.

Whether AI systems can own a copyright is also up for debate. In 2018, a three-judge panel of the U.S. Court of Appeals for the Ninth Circuit ruled that Naruto, a macaque that took a selfie, didn't own the copyright to his image, a decision suggesting that AI systems cannot own copyrights either.

Their creators, however, should be able to, Abbott said.


"It is the right solution to allow AI-generated works to be protected," he said.

Meanwhile, governments should consider putting a system in place to hold AI creators responsible if their creations cause harm, said Bryan Choi, assistant professor of law at The Ohio State University Moritz College of Law.

Many AI systems are black boxes whose inner workings even their creators can't inspect or fully understand. That makes it extremely difficult, if not impossible, to explain why a system made a particular decision.

That opacity can be dangerous in the case of autonomous cars, for example. If a vehicle "chooses" to veer off the road, its owners may never learn why, or how to prevent it from happening again.

If an AI system malfunctions, then, the blame shouldn't be on the AI or the user, he said. Instead, governments should create a system to hold developers responsible.

Because AI developers aren't subject to strict regulation at the moment, and AI systems can be difficult to inspect, governments should open the possibility of users suing developers for malpractice when their AI systems cause harm, Choi argued.

The Washburn Law Journal symposium "Artificial Rights?" was held virtually on Nov. 5.

Patricia Judd, professor of law at the Washburn University School of Law, in Topeka, Kansas, moderated the panel.
