How do AI algorithms automate IoT threat detection?

IoT threat detection is about to get easier, thanks to the automation capabilities of AI algorithms. But, as IEEE member Kayne McGladrey explains, that doesn't mean humans are out of the picture.

One major advantage of AI algorithms is their ability to rapidly find patterns across large data sets and detect anomalies. In its simplest form, this involves building a baseline model of what's normal in an environment, then flagging and investigating deviations from that baseline, which could potentially be IoT threats. This approach works in environments of every size, from an individual residence to the largest enterprise.
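To make the idea concrete, here is a minimal sketch of that baseline-then-flag pattern. The metric (outbound bytes per hour for one device), the sample values and the three-sigma threshold are illustrative assumptions, not taken from any specific product; a real system would learn far richer baselines.

```python
# Minimal sketch: learn a statistical baseline, then flag deviations from it.
from statistics import mean, stdev

# Hypothetical history of outbound bytes per hour for a single IoT device.
baseline_samples = [1200, 1150, 1300, 1275, 1180, 1220, 1250, 1190]

def is_anomalous(observation: float, samples: list[float], sigmas: float = 3.0) -> bool:
    """Flag an observation that falls outside the learned baseline."""
    mu = mean(samples)
    sd = stdev(samples)
    return abs(observation - mu) > sigmas * sd

# 48 KB in an hour from a lightbulb is far outside its learned baseline,
# so it would be flagged for investigation as a potential threat.
print(is_anomalous(48_000, baseline_samples))  # True
print(is_anomalous(1_240, baseline_samples))   # False
```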

For example, I have an increasing number of IoT devices in my home. My lightbulbs should only be able to talk to the lightbulb controller, and the controller should then talk to the app I use to control the lights. Similarly, my speakers should only talk to the central controller unit for the speakers, which can then talk to my apps. The doorbell should only talk to its associated app. This is a baseline of normal.
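That home baseline can also be modeled as an allowlist of the device-to-device conversations that have been observed as normal. The sketch below uses the devices named above, but the data structure and names are assumptions made for illustration only.

```python
# Minimal sketch of the home baseline as an allowlist of observed
# (source, destination) communication pairs.
normal_flows = {
    ("lightbulb", "light_controller"),
    ("light_controller", "lighting_app"),
    ("speaker", "speaker_controller"),
    ("speaker_controller", "music_app"),
    ("doorbell", "doorbell_app"),
}

def flag_unusual(flow: tuple[str, str]) -> bool:
    """Return True when a flow falls outside the learned baseline."""
    return flow not in normal_flows

# Any flow outside the baseline is surfaced for review.
print(flag_unusual(("doorbell", "light_controller")))   # True: never seen before
print(flag_unusual(("lightbulb", "light_controller")))  # False: normal behavior
```

A flow like the doorbell talking to the light controller falls outside that set, which is exactly the kind of behavior that gets flagged for review.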

Imagine my surprise, then, when I set up an app to flash the lights when the doorbell rang, and later when I tried an app to set the appropriate lighting based on the music being played. The AI unit I installed at my home correctly flagged these behaviors for review, as they were outside of the normally observed behavior in my IoT environment.

This personal example of IoT threat detection scales to both user and machine cohorts. Based on observable data, an AI algorithm can learn which operators should be able to configure IoT devices, when they typically make configuration changes, where those configuration requests originate and so on. Similarly, an AI algorithm can determine the typical communication paths from IoT devices to controllers, as well as between IoT devices. Potential threats, then, are the unusual behaviors that fall outside what the AI system has observed before.
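As an illustration, a cohort profile for configuration activity might track who makes changes, at what hours and from which networks. The operator names, hours and networks below are assumptions; a real system would learn these profiles from observed data.

```python
# Minimal sketch of cohort profiling for IoT configuration activity.
from dataclasses import dataclass

@dataclass
class ConfigEvent:
    operator: str      # who made the configuration change
    hour: int          # 0-23, local time of the change
    source_net: str    # network the request originated from

# Learned profile: who configures devices, when, and from where.
profile = {
    "ops-team": {"hours": range(8, 18), "networks": {"corp-vpn", "noc-lan"}},
}

def is_unusual(event: ConfigEvent) -> bool:
    """Flag configuration activity outside the learned cohort profile."""
    known = profile.get(event.operator)
    if known is None:
        return True                        # unknown operator
    if event.hour not in known["hours"]:
        return True                        # unusual time of day
    return event.source_net not in known["networks"]  # unusual origin

print(is_unusual(ConfigEvent("ops-team", 14, "corp-vpn")))   # False: normal
print(is_unusual(ConfigEvent("ops-team", 3, "guest-wifi")))  # True: flagged
```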

Note that this presupposes a certain degree of human interaction with the AI to make judgment calls about whether an unusual behavior is appropriate. My home AI doesn't have the authority to tell me that my lights shouldn't talk to my speakers. Instead, it needs my approval, given a default deny policy. That's a good thing: I'm a compensating control against black swan events or against an IoT threat actor training my AI on bad data.
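A default deny policy with a human in the loop could look something like the following sketch, in which unknown behaviors are blocked and queued for review, and only an explicit human decision extends the baseline. The flow names are assumptions carried over from the earlier example.

```python
# Minimal sketch of default deny with human approval of new behaviors.
approved_flows = {("doorbell", "doorbell_app")}
pending_review: list[tuple[str, str]] = []

def evaluate(flow: tuple[str, str]) -> str:
    """Default deny: unknown flows are blocked and queued for a human."""
    if flow in approved_flows:
        return "allow"
    pending_review.append(flow)
    return "deny"

def human_decision(flow: tuple[str, str], approve: bool) -> None:
    """Only a human judgment call adds a new behavior to the baseline."""
    pending_review.remove(flow)
    if approve:
        approved_flows.add(flow)

print(evaluate(("doorbell", "light_controller")))  # deny: queued for review
human_decision(("doorbell", "light_controller"), approve=True)
print(evaluate(("doorbell", "light_controller")))  # allow: baseline extended
```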

The same sort of authority must be present in any enterprise setting to ensure proper IoT threat detection. Be sure to appoint the right person or people who can make judgment calls about potentially unusual behaviors and put new rules in place so the AI algorithm can learn from future events. The security of your IoT system will depend on it.

