AI for finance adoption affected by legal, ethical issues
Some experts say that for the foreseeable future, machine learning algorithms and other AI technologies will require human oversight in financial services and data privacy.
The short answer to the longstanding question of whether the use of artificial intelligence in business could create intractable ethical, legal and regulatory issues just might be: It depends on the humans.
Businesses have started using AI for finance applications that are powered by machine learning and deep learning processes, but their embrace of AI technology raises the question of whether computers will follow the same standards and principles that humans are expected to follow. For some in financial services and technology, the answer does indeed boil down to how people program and oversee AI tools.
"One key aspect of artificial intelligence is that it is just technology, which is an extension of human practices and thinking," said Lex Sokolin, global director of fintech strategy at Autonomous Research in London. "It may automate and make faster certain decision-making -- for example, requiring one person [assisted] by AI to do the work of 50 people previously -- but that decision-making will reflect all the biases and mistakes of human society."
Using AI for finance holds the potential for computers to find valuable information, allowing companies to see patterns that previously went unnoticed by data scientists and to use that insight to make more informed decisions and pursue bolder strategies. Using algorithms and machine learning, AI-powered tools "learn" from "experience" what their human programmers want them to learn.
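To make that idea concrete, here is a minimal sketch of the supervised learning pattern described above, assuming a scikit-learn-style workflow; the feature names, transaction data and labels are all hypothetical. The point it illustrates is the one Sokolin makes: the model's "experience" is nothing more than the examples and labels humans chose for it.

```python
# A minimal sketch of supervised learning: the model "learns" only what
# the human-labeled examples can teach it. All data here are hypothetical.
from sklearn.linear_model import LogisticRegression

# Hypothetical historical transactions: [amount_usd, hour_of_day]
X_train = [[120.0, 14], [9500.0, 3], [45.0, 10], [8700.0, 2]]
y_train = [0, 1, 0, 1]  # labels chosen by humans: 1 = flagged, 0 = cleared

model = LogisticRegression()
model.fit(X_train, y_train)  # the "experience" is just this labeled history

# Any bias in how humans labeled past transactions is reproduced here
print(model.predict([[9000.0, 4]]))  # likely [1], mirroring the labels above
```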
For businesses that need help making sense of trends shaped by the requirements of financial regulations, using AI for finance could be a boon, allowing computers to do the work of scores of employees. But the flip side of relying on AI for regulatory compliance is that the technology could fail to discern legal or ethical issues that a human would recognize. For now, with mainstream AI technology still at a foundational level, that means human programmers will need to stay on top of how a machine is learning to ensure that boundaries aren't crossed.
"The European data privacy rules, GDPR, speak about the 'right to an explanation,'" said Gartner research fellow Frank Buytendijk, who specializes in digital ethics. "But with some types of machine learning, using stacks of neural networks, it is not always easy or even possible to fully retrieve where a decision comes from. We just need to trust it works in a way. This will need some attention figuring it out."
Need for human oversight puts limits on using AI for finance
Referring to autonomous vehicles and systems, Buytendijk said that for the first time in the digital era, the potential ethical repercussions of a new technology are being discussed openly before widespread implementation.
"It is a bit of a paradox that one would want to be careful with exposing AI to the real world until you get it right, but that AI needs to be in the real world for all the data [it needs] to learn. This problem hasn't been solved yet." He added: "As we figure out how machine learning works, maybe as consumers we should be more tolerant as well, and give the market some time and space in order to get it right."
Frederic Laluyaux, CEO of the business intelligence software company Aera, believes that for the foreseeable future, AI technologies will help with black-and-white decisions, but "the shades of gray will be decided by humans."
The need for a human touch could complicate AI's promise to autonomously analyze unstructured data that is interwoven with complex laws and regulations. To ensure that companies stay above board, a human should be kept in the loop, said Sokolin of Autonomous Research. "That means automated technology should have a human copilot that can course-correct what the algorithm is doing," he said.
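A minimal sketch of that copilot arrangement might route the algorithm's low-confidence decisions to a person who can course-correct. The example assumes a scikit-learn-style model; the confidence threshold and review queue are hypothetical placeholders, not any particular product's design.

```python
# A sketch of the "human copilot" pattern Sokolin describes: the model
# decides routine cases, and uncertain ones are deferred to a human.
from sklearn.linear_model import LogisticRegression

REVIEW_THRESHOLD = 0.75  # assumed confidence cutoff; tune per application

def decide(model, features, review_queue):
    """Let the model decide confident cases; defer uncertain ones to a human."""
    confidence = max(model.predict_proba([features])[0])
    if confidence < REVIEW_THRESHOLD:
        review_queue.append((features, confidence))  # human makes the final call
        return None
    return model.predict([features])[0]

# Hypothetical usage with toy data
model = LogisticRegression().fit([[120.0, 14], [9500.0, 3]], [0, 1])
queue = []
print(decide(model, [5000.0, 8], queue))  # an ambiguous case may land in queue
print(queue)
```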
Businesses should rely on vendors for guidance on how to use AI for financial services, but they also need to learn on their own how to audit the software to understand how it works and see the implications of its decisions, Sokolin said.
"AI needs parental oversight," said Gartner's Buytendijk. "We don't have to be afraid that the robots will wipe us all out, or big dystopian visions like that. But we need to make sure that applications that use AI techniques have some kind of rollback or override as part of the reinforcement learning process. And we need to make sure we arrange accountability and responsibility -- and ownership of the data and the algorithms. In short, just get our digital governance right."