Bot security through AI openness

To ensure consumers interact with secure bots and AI tools, vendors like IBM and BotChain propose more AI transparency and have designed ways to achieve that.

These days, it's second nature to turn to AI systems like chatbots, virtual assistants and conversational agents. It's almost a reflex to summon Alexa or Siri to answer a question or to chat on the phone with an automated voice trying to help fix the Wi-Fi.

Yet, how do we know these systems are actually out to help us? How do we evaluate the trustworthiness of a bot? Those concerned about bot security are frequently raising these kinds of questions. The answer, according to some organizations, may lie in more AI openness.

As AI systems become more advanced, the exact identity and trustworthiness of a system, even a simple chatbot, have become increasingly difficult to discern. Questions that appear relatively straightforward on the surface, such as what a system's motives are, what data it collects and how that data is used, or even who built it, can be far murkier than an average consumer might expect.

A bot's nutritional info

Those are some of the reasons why IBM is working to create fact sheets for AI services to provide consumers with basic information about various AI systems.

"This is extremely important. This is extremely prevalent," said Costas Bekas, manager of the Foundations of Cognitive Computing group at IBM Research -- Zurich, referring to bot integrity.


The goal of the AI fact sheets, Bekas explained, is to provide consumers with something that is similar to a nutrition label on foods -- a label that covers the basic information about a product and lays out important safety information for users.

A fact sheet might cover who trained the system, the data set used in training, the testing methodology, which bias checks were performed and the kinds of governance used to track the data the AI consumes. The fact sheets keep consumers better informed and can perhaps lead to more effective AI and bot security, especially in a world in which bots can pose a significant security threat.
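As an illustration of the idea, the sketch below models a fact sheet as a small structured record that can be rendered as a consumer-facing label. The AIFactSheet class, its field names and the example values are hypothetical, chosen only to mirror the categories Bekas describes; they are not IBM's actual schema.

from dataclasses import dataclass, field

@dataclass
class AIFactSheet:
    """Nutrition-label-style summary of an AI service (illustrative fields only)."""
    service_name: str
    trained_by: str               # who built and trained the system
    training_data: str            # data set(s) used in training
    testing_methodology: str      # how the system was evaluated
    bias_checks: list = field(default_factory=list)  # bias checks performed
    data_governance: str = ""     # how user data is tracked and governed

    def render(self) -> str:
        """Produce a short, human-readable label for consumers."""
        lines = [
            f"AI Fact Sheet: {self.service_name}",
            f"  Trained by:    {self.trained_by}",
            f"  Training data: {self.training_data}",
            f"  Testing:       {self.testing_methodology}",
            f"  Bias checks:   {', '.join(self.bias_checks) or 'none listed'}",
            f"  Governance:    {self.data_governance or 'not specified'}",
        ]
        return "\n".join(lines)

# Example usage with made-up values
sheet = AIFactSheet(
    service_name="SupportBot",
    trained_by="Acme AI Lab",
    training_data="Anonymized customer-support transcripts, 2016-2018",
    testing_methodology="Held-out test set plus human review of sampled replies",
    bias_checks=["gender", "age", "region"],
    data_governance="Conversation logs retained 30 days, audited quarterly",
)
print(sheet.render())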

Global transparency

IBM isn't the first organization to push for greater AI explainability. Part of the mission of the European Union's General Data Protection Regulation (GDPR), which went into effect earlier this year, was to increase transparency and enable consumers to make better-informed decisions. Over the years, various AI scientists and analysts have called for more AI accountability and better AI and bot security.


Experts, too, have pointed out the potential benefits of marrying AI systems with blockchain technology to boost visibility into AI processes. It's a union that at least one organization is working on now.

BotChain, a Boston-based startup spun out of bot-maker Talla Inc., hopes to tighten bot security by creating a system to identify and register bots.

There are plenty of bots that tell you they are working on your behalf, said Rob May, CEO of BotChain and Talla. "Well, how do you know they're working on your behalf?"

Bots and blocks

Essentially, BotChain is aiming to create a decentralized ecosystem for chatbots and bot developers on the blockchain, one that consumers can tap into and use as a sort of marketplace. The key is that bots in the ecosystem will be registered, certified and continuously verified, all in the name of bot security and AI transparency.

For customers to interact with a bot, the bot needs to be authenticated, May said, in a way that's conceptually similar to single sign-on certificates.
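To make the registration-and-verification idea concrete, here is a minimal sketch of how a shared registry could certify bots and let a consumer-facing app check a bot's certificate before trusting it. It uses a plain in-memory dictionary and a SHA-256 fingerprint as stand-ins for BotChain's blockchain and certificates; the function names and fields are illustrative assumptions, not BotChain's actual API.

import hashlib
import json

# In-memory stand-in for a shared registry (a blockchain in BotChain's design).
REGISTRY = {}

def _fingerprint(record: dict) -> str:
    """Stable hash of a bot's registration record."""
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

def register_bot(bot_id: str, developer: str, public_key: str) -> str:
    """Register a bot and return its certificate (the record's fingerprint)."""
    record = {"bot_id": bot_id, "developer": developer, "public_key": public_key}
    cert = _fingerprint(record)
    REGISTRY[bot_id] = {"record": record, "certificate": cert}
    return cert

def verify_bot(bot_id: str, presented_cert: str) -> bool:
    """Check that a bot presenting a certificate matches its registered record."""
    entry = REGISTRY.get(bot_id)
    if entry is None:
        return False  # unknown bot: never registered
    # Recompute the fingerprint so a tampered record or forged certificate fails.
    return presented_cert == _fingerprint(entry["record"]) == entry["certificate"]

# Example: an app verifies a bot before letting it handle a conversation.
cert = register_bot("support-bot-1", "Acme AI Lab", "MFkwEwYHKoZI...")
print(verify_bot("support-bot-1", cert))           # True: certificate checks out
print(verify_bot("support-bot-1", "forged-cert"))  # False: rejected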


The case for linking AI to blockchain was underscored by a Google Duplex voice assistant demo at a Google event earlier this year, during which the company's human-sounding AI booked an appointment over the phone in real time.

May said that after the demo, he received a number of texts and emails from people concerned about fraudulent bot activity because of the potential for bots to impersonate humans.

According to May, bot scams once seemed like a problem for the future. Today, however, "I think we're already at the point where they are a reality."

For Bekas at IBM, questions of bot security and AI transparency seem to transcend business competition.

"We are always striving for the ethical, just use of artificial intelligence," he said. "This is something we will not let get out of control."
