AI toys: The new insider threat
Once upon a time, back in a simpler day, the biggest threat kids’ toys posed was the risk of slipping on a plastic fire engine. Boy, have things changed!
The teddy bear threat
Today’s hottest gifts are pretty much all smart, connected devices, like Cozmo, Zoomer Kitty or Smart Toy Bear. And while they may seem cute, entertaining and sometimes furry, they are actually about as innocent as a hacker sitting in a lab in North Korea.
Let’s start by understanding the baseline attributes of today’s coolest connected toys.
- They are connected to the internet, just like any other IoT device. That’s an inherent vulnerability (a quick audit sketch follows this list).
- They are controlled remotely through a protocol like Wi-Fi or Bluetooth.
- They are eager listeners, capable of hearing conversations, recording them and then transmitting them to the cloud where they are stored. Each step represents a vulnerability.
- They can take photos and shoot videos, putting even more sensitive content at risk.
- They are location-aware, including GPS functionality.
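
To see the first point for yourself, here is a minimal audit sketch that sweeps a home network and reports which devices answer on a few common service ports. It uses only the Python standard library; the 192.168.1.0/24 subnet and the port list are assumptions you should adjust for your own network, and it tells you only what is listening, not whether it is secure.

```python
# Minimal home-network audit sketch (standard library only): sweep the
# assumed home subnet and report devices exposing common service ports.
import socket
from concurrent.futures import ThreadPoolExecutor
from ipaddress import ip_network

SUBNET = ip_network("192.168.1.0/24")       # assumption: typical home subnet
PORTS = [23, 80, 443, 554, 8080, 8883]      # telnet, web, RTSP video, MQTT over TLS

def open_ports(host: str) -> list[int]:
    """Return the ports on `host` that accept a TCP connection."""
    found = []
    for port in PORTS:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(0.5)
            if s.connect_ex((host, port)) == 0:
                found.append(port)
    return found

if __name__ == "__main__":
    hosts = [str(h) for h in SUBNET.hosts()]
    with ThreadPoolExecutor(max_workers=64) as pool:
        for host, ports in zip(hosts, pool.map(open_ports, hosts)):
            if ports:
                print(f"{host} exposes ports {ports}")
```

If a toy shows up with an open telnet or web port, that is exactly the kind of interface an attacker probes first.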
You don’t have to be a cybersecurity expert to recognize that these characteristics present a dream scenario for attackers. In fact, we have already seen the potential for cyber-carnage. In 2015, it was revealed that a hacker broke into the servers of the Hong Kong-based toymaker VTech and stole personal information of nearly five million parents and more than 200,000 children.
The stolen data included home addresses, birth dates, email addresses and passwords. The attacker even got into photographs and chat logs of parents conversing with their children. Just think what an invasion of privacy that represents! Scary.
Is it hard to do this? Consider that just last year, a precocious 11-year-old boy stunned an audience of security experts by hacking into nearby Bluetooth devices to manipulate a teddy bear, showing how interconnected smart toys can be weaponized.
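
To get a sense of how little effort the first step takes, here is a minimal sketch that simply lists every Bluetooth Low Energy device advertising nearby. It uses the open-source bleak library (my choice for illustration; the demo described above may have used different tooling), and discovery alone doesn’t break into anything, but it shows how visible these toys are to anyone within radio range.

```python
# Minimal BLE discovery sketch using the third-party "bleak" library
# (pip install bleak). Listing nearby advertisers is the easy first
# step; it does not pair with or control anything by itself.
import asyncio
from bleak import BleakScanner

async def main() -> None:
    devices = await BleakScanner.discover(timeout=5.0)
    for d in devices:
        print(f"{d.address}  {d.name or '<unnamed>'}")

asyncio.run(main())
```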
What can bad actors do once they hack into a smart toy? They can take over the doll, robot or game and turn it on kids and parents for the sheer, evil fun of it. That includes taking over the toy’s voice. Imagine how frightening that would be.
Now, let’s go one step further and imagine what would happen if a smart listening device were breached by someone seeking to sell the recordings to a party in the midst of an ugly divorce, or by someone who could extort money because the contents of a hacked conversation are embarrassing or illegal. The potential for harm is almost limitless.
And all this is before we even start talking about AI!
AI extends the risk envelope
A smart toy that incorporates AI is not just cool for the kids; it can also be the coolest thing for attackers. In fact, the sexier the toy, the more attackers will want to get their hands on it.
Here are some of the additional capabilities AI brings to the table:
- AI can understand the intent behind the text in a message. Today’s natural language processing algorithms can automatically classify text according to its meaning, and this categorization and sentiment analysis can extract useful information and knowledge hidden in the text (a minimal sketch follows this list). More fodder for malicious harm.
- AI uses advanced image recognition. It can search for and identify specific image patterns, recognizing faces, expressions and objects. This means an attacker could recognize a child and then find that child in tagged Facebook photos, for example.
- AI has voice recognition skills. This means it can separate voices and the subjects they are discussing. More room for smart attackers to make trouble and create ransomware scenarios.
- AI is capable of making adaptive correlations. That means it can correlate voices, discussion subjects and photos (faces and identities) with other inputs such as location. By doing so, it can predict where people are going to be and what they plan to do there.
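
To make the first point concrete, here is a minimal sketch of how little code separates a transcribed snippet of conversation from a machine-readable judgment about it. It uses the open-source Hugging Face transformers pipeline with its default sentiment model; the overheard sentences are invented, and this illustrates a general capability, not any specific toy’s implementation.

```python
# Minimal sketch: off-the-shelf NLP turning transcribed speech into
# structured signals (third-party "transformers" library;
# pip install transformers). The sentences below are invented examples.
from transformers import pipeline

# The default English sentiment model is downloaded on first use.
classifier = pipeline("sentiment-analysis")

overheard = [
    "We close the acquisition on Friday, keep it quiet until then.",
    "I love the new teddy bear, it tells the best stories!",
]

for sentence in overheard:
    result = classifier(sentence)[0]
    print(f"{result['label']:>8}  {result['score']:.2f}  {sentence}")
```

Swap the sentiment model for an intent or topic classifier and the same few lines start flagging which conversations are worth an attacker’s attention.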
Today’s AI algorithms support all of the above capabilities and more. This brings me to the main point: smart toys can be used for espionage — and this can be done from the inside.
Let me be clear: AI-enabled toys pose risks for anyone in the household. Anyone who holds some sort of confidential information, from business plans to national secrets, is extremely vulnerable. And when a confidential document is physically visible anywhere near a smart toy, its owner is exposed. That’s a second risk.
Deal with it: with AI-enabled toys, there is zero privacy in your own home. The famous phrase “the privacy of your own home” becomes an oxymoron.
What does the future hold for IoT?
So, how do we even begin to prepare ourselves for a world in which toys can turn on us? Essentially, we need to think of smart toys as IoT devices which require a minimum level of security around their interfaces.
I believe that what any IoT device, toys included, stores, transmits and records should be controlled by the user (in the case of toys, that means the parent). The challenge, however, is that homes are full of IoT devices, each with different security controls and each demanding a different level of security expertise to manage. In the future, I predict we will see “security broker” devices emerge that route all home IoT traffic through a few managed security centers that can scrub data and decide what to block and what to forward.
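
To illustrate the broker idea, here is a minimal, purely conceptual sketch of the decision such a device would make for every outbound message: look at what kind of data a toy is trying to send and where, then block or forward according to a household policy. The message shape, destinations and rules are my own invention, not an existing product or standard.

```python
# Conceptual sketch of a home "security broker" policy: every outbound
# IoT message is checked against household rules before it leaves the
# network. The message format and rules are invented for illustration.
from dataclasses import dataclass

@dataclass
class OutboundMessage:
    device: str        # which toy is sending
    payload_type: str  # "telemetry", "audio", "image", "location", ...
    destination: str   # hostname the toy wants to reach

# Household policy: payload types allowed to leave, per destination.
ALLOWED = {
    "toyvendor.example.com": {"telemetry"},   # status and firmware checks only
}

def broker_decision(msg: OutboundMessage) -> str:
    allowed_types = ALLOWED.get(msg.destination, set())
    return "forward" if msg.payload_type in allowed_types else "block"

if __name__ == "__main__":
    messages = [
        OutboundMessage("smart-bear", "telemetry", "toyvendor.example.com"),
        OutboundMessage("smart-bear", "audio", "toyvendor.example.com"),
        OutboundMessage("smart-bear", "location", "analytics.example.net"),
    ]
    for m in messages:
        print(f"{broker_decision(m):>7}  {m.device} -> {m.destination} ({m.payload_type})")
```

The hard part, of course, is not the policy check but sitting in the traffic path of every device and understanding what each payload actually contains.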
Until then, my advice is as follows:
- Understand the capabilities of your AI toys and AI devices in general, and use this article to assess the risks accordingly.
- Don’t trust toys that store information in the cloud; avoid buying these because you can never know how well the toy vendor secures that data. Manufacturers are always looking to increase margins and cut corners, and skimping on cloud security is an easy way to do that.
- Make sure you limit these toys’ Wi-Fi access and enable it only when needed (for example, most of the AI capabilities I listed will not work without an internet connection). Your mantra should be: When the kid isn’t playing, Wi-Fi shouldn’t be playing either (a sketch of automating this follows the list).
- Try to use Bluetooth control instead of Wi-Fi where possible; its shorter range poses less of a risk.
- Ensure your home Wi-Fi is encrypted.
- And to stay on the safe side, don’t let these toys find their way to places they don’t belong.
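
If your router lets you run scripts (many Linux-based routers and firewall distributions do), the “Wi-Fi only when the kid is playing” rule can even be automated. The sketch below assumes a Linux gateway with iptables, sufficient privileges, and a known MAC address for the toy; the MAC address and the play window are placeholders, and your own setup may need a different mechanism entirely.

```python
# Sketch for a Linux home gateway: drop forwarded traffic from a toy's
# MAC address outside an allowed play window. Assumes iptables is
# available and the script runs with root privileges (e.g. from cron
# every few minutes). MAC address and hours are placeholders.
import subprocess
from datetime import datetime

TOY_MAC = "AA:BB:CC:DD:EE:FF"      # placeholder: the toy's Wi-Fi MAC
PLAY_HOURS = range(16, 19)         # placeholder: 4 pm to 7 pm play window

RULE = ["FORWARD", "-m", "mac", "--mac-source", TOY_MAC, "-j", "DROP"]

def rule_present() -> bool:
    # `iptables -C` exits with 0 when the rule already exists.
    return subprocess.run(["iptables", "-C", *RULE],
                          capture_output=True).returncode == 0

def set_blocked(blocked: bool) -> None:
    if blocked and not rule_present():
        subprocess.run(["iptables", "-A", *RULE], check=True)
    elif not blocked and rule_present():
        subprocess.run(["iptables", "-D", *RULE], check=True)

if __name__ == "__main__":
    set_blocked(datetime.now().hour not in PLAY_HOURS)
```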
In short, what happens in Vegas may stay in Vegas. But with smart toys, the same cannot be said for your family room.