
Clouds can't save our oceans, we need an edge

Edge computing is rapidly being recognized as a necessity for implementing several industrial IoT use cases. Take offshore oil platforms, for example. It is enormously valuable to be able to predict when critical components are likely to fail. When failures are predicted, repairs can be made before human safety and the environment are put in jeopardy.

To do failure prediction, companies first feed historical data from components and sensors into a machine learning algorithm, labeled to indicate whether the component was healthy or failing at the time. The algorithm uses this data to create a “predictive model”: an application that takes in real-time data and judges whether the component being monitored is healthy or likely to fail.
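To make that concrete, here is a minimal sketch of the model-building step. It assumes scikit-learn, pandas and joblib are available; the file name, column names and component are hypothetical, not drawn from a real deployment.

import pandas as pd
import joblib
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Hypothetical history: each row is one snapshot of a component's sensor
# readings, labeled after the fact as healthy (0) or failing (1).
history = pd.read_csv("pump_sensor_history.csv")
X = history.drop(columns=["failed_within_24h"])   # the sensor readings
y = history["failed_within_24h"]                  # the health label

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)

# Train a classifier on the labeled history; this is the "predictive model."
model = RandomForestClassifier(n_estimators=100)
model.fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))

# Save the trained model so it can be shipped out to edge systems.
joblib.dump(model, "pump_failure_model.joblib")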

While the machine learning techniques to build and deploy predictive models have been around for years, in practice network connectivity is typically what stands in the way. Building and operating predictive models for industrial equipment is data intensive, requiring collection of data points from a multitude of components and sensors (an oil platform may have thousands) at least once per second. Offshore oil platforms at best have cellular or satellite connections, which means bandwidth is scarce and downtime is frequent, making it impossible to send all sensor data at full fidelity to a central site or cloud.
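A rough back-of-envelope calculation shows why. Every figure below is an assumption chosen for illustration, not a measurement from a real platform:

# Illustrative only: how much sustained uplink would full-fidelity
# streaming require? All figures are assumptions.
sensors = 5_000           # assumed number of components and sensors
sample_rate_hz = 1        # at least one reading per second
bytes_per_sample = 200    # assumed payload: value, timestamp, tags, overhead

bits_per_second = sensors * sample_rate_hz * bytes_per_sample * 8
print(f"sustained uplink needed: {bits_per_second / 1_000_000:.1f} Mbit/s")
# Roughly 8 Mbit/s, continuously; more than a shared, frequently dropping
# satellite or cellular link can be counted on to deliver.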

Because of this, a prominent design pattern has emerged called “learn globally, act locally.” Here, that means building predictive models at a core location (such as the cloud), where compute power and data are plentiful, and deploying those models to data systems that reside at the edge (such as the oil platform). Once deployed, these edge systems collect data from all local controllers and sensors at full fidelity, evaluate that data against the local predictive models to detect possible impending failures, and take action without waiting for any data to be sent to or received from the core.
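As a sketch of the “act locally” half, the loop below scores live readings against a locally stored model and raises an alarm with no round trip to the core. The helper functions and the alert threshold are hypothetical stand-ins, not any real platform's systems.

import time
import joblib

# Model that was built centrally and deployed to this edge node.
model = joblib.load("pump_failure_model.joblib")

def read_sensors():
    # Hypothetical stand-in for pulling the latest full-fidelity readings
    # from local controllers; placeholder returns a zeroed feature row.
    return [0.0] * model.n_features_in_

def raise_alarm(probability):
    # Hypothetical stand-in for local action: notify operators, schedule
    # maintenance or shut equipment down safely.
    print(f"predicted failure risk: {probability:.0%}")

while True:
    features = read_sensors()
    failure_probability = model.predict_proba([features])[0][1]
    if failure_probability > 0.8:         # assumed alert threshold
        raise_alarm(failure_probability)  # act locally; no core round trip
    time.sleep(1)                         # roughly once per second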

In this design, it’s still important to have a network connection between the edge and the core, but the connection doesn’t need to be active all the time, because it sits outside the critical path of detecting and acting on possible issues. Instead, it serves as a convenience channel: filtered, summarized and critical data flows from the edge to the core so the predictive models can be tuned over time, and new models travel in the reverse direction.
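One way to picture that convenience channel is a best-effort sync loop that runs entirely outside the detection path. Everything below (the link check, the upload and the model fetch) is a hypothetical sketch, not a specific product's API.

import time

summary_buffer = []   # filtered/summarized readings and any alarms raised locally

def link_is_up():
    # Hypothetical: probe the satellite or cellular backhaul.
    return False

def upload_summaries(batch):
    # Hypothetical: ship summaries to the core so models can be retuned.
    pass

def fetch_updated_model():
    # Hypothetical: pull a refreshed predictive model pushed from the core.
    pass

while True:
    if link_is_up():
        if summary_buffer:
            upload_summaries(summary_buffer)   # edge-to-core: data for retraining
            summary_buffer.clear()
        fetch_updated_model()                  # core-to-edge: new models
    time.sleep(60)   # best effort; local detection never waits on this loop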

It isn’t hard to imagine other ways artificial intelligence and edge computing can help save lives and the environment, from autonomous cars to smarter air traffic control systems. The question is no longer how, but when.
