IoT edge computing: Let's get edgy
This is the third of a five-part blog series. Read part two here.
Continuing my series on digital transformation, I now want to turn my attention to the network edge. You know, when you look at volume projections for connected devices versus how most people are architecting things today (typically sending everything straight to the cloud), there's a pretty major disconnect.
Plethora of online devices
A few years back, we hit a point where there were more devices online than people. In fact, it's estimated that in 2019, we'll cross over to more "things" online than traditional end-user devices. By "things," I mean devices generally not associated with an end user and "headless" in operation. Estimates vary widely on the total number of things over time, but in any event, it's starts-with-a-B big. Clearly, these things represent new actors on the internet and a huge catalyst for digital transformation.
Tick tock
Looking back at the history of computing, the pendulum swings between centralized and distributed models every 10 to 15 years. Given the sheer volume of networked devices coming online, distributed architectures are inevitable going forward, because it's simply not feasible to send all of that data directly to the cloud. In fact, I think distributed is here to stay for quite some time, if not from here on out.
When in doubt, talk cat videos
Confession time — my wife and I have three cats. Sad but true — we bought phones with higher-capacity storage purely to have space for all our cat pictures and videos and to send them back and forth. But, who doesn't like a good cat video? A colleague of mine, Greg, came up with this cat video analogy to describe how things are different with IoT workloads.
When you upload a cat video to a video-sharing service, their servers stream that video down to other people. This content may need to be cached on multiple servers to support demand, and if it goes viral with millions of people wanting to access it, it may then be moved to servers at the provider’s cloud edge — as close to end users as possible to minimize latency and bandwidth consumption on their network. In a nutshell, this is the concept of multi-access edge computing.
However, with IoT the flow reverses: you now have millions of devices all wanting to stream data up to the same servers. This turns the whole paradigm upside down, and new architectures are needed to buffer that data at the edge so only meaningful traffic goes across networks. We need better ways to support all those connected cat collars out there.
Gateways aren’t just for turning A into B
So, enter the concept of the edge gateway. Gateways aren't just about enabling connectivity, protocol normalization (A to B) and data aggregation — they also serve the important functions of buffering and filtering data, as well as applying security measures. In fact, Gartner estimates that 90% of traffic will go through edge gateways.
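To make the buffering and filtering roles concrete, here's a minimal sketch in Python. Everything in it is hypothetical (the field names, the temperature threshold, the batch size, the send_upstream stand-in); it just shows a gateway holding raw readings locally and forwarding only alarms and compact summaries.

```python
import random
import statistics
import time

# Hypothetical thresholds and batch size -- illustrative only.
TEMP_ALARM_C = 85.0      # forward an alarm immediately above this
BATCH_SIZE = 60          # otherwise summarize every 60 readings

buffer = []

def read_sensor():
    """Stand-in for a real sensor driver (Modbus, BLE, an MQTT subscription, etc.)."""
    return {"ts": time.time(), "temp_c": random.gauss(70, 5)}

def send_upstream(payload):
    """Stand-in for publishing to a cloud endpoint over HTTPS or MQTT."""
    print("UPSTREAM:", payload)

def on_reading(reading):
    # Filter: anomalies go upstream right away.
    if reading["temp_c"] >= TEMP_ALARM_C:
        send_upstream({"type": "alarm", **reading})
        return
    # Buffer: normal readings are held locally...
    buffer.append(reading["temp_c"])
    # ...and only a compact summary ever crosses the network.
    if len(buffer) >= BATCH_SIZE:
        send_upstream({
            "type": "summary",
            "count": len(buffer),
            "min": round(min(buffer), 2),
            "max": round(max(buffer), 2),
            "mean": round(statistics.mean(buffer), 2),
        })
        buffer.clear()

if __name__ == "__main__":
    for _ in range(180):   # simulate a few minutes of readings
        on_reading(read_sensor())
```

In this toy run, 180 raw readings collapse into three summaries (plus any alarms), which is the "only meaningful traffic crosses the network" idea in practice.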
There are key technical reasons for increased edge computing:
- Latency — I don't care how fast and reliable your network is; you just don't deploy something like a car airbag from the cloud (see the toy sketch after this list).
- Bandwidth — There's an inherent cost associated with moving data, which is especially painful over cellular, and even worse via satellite.
- Security — Many legacy systems were never designed to be connected to broader networks, let alone the internet. Edge nodes like gateways can perform functions such as root of trust, identity, encryption, segmentation and threat analytics for these systems, as well as for constrained devices that don't have the horsepower to protect themselves. It's important to be as close to the data source as possible so any issues are remedied before they proliferate and wreak havoc on broader networks.
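On the latency point specifically, here's a toy Python comparison (the 120 ms round-trip time, the pressure threshold and the "vent" rule are all made up) of deciding locally at the edge versus waiting on a simulated cloud round trip before acting.

```python
import time

CLOUD_RTT_S = 0.120   # assumed 120 ms WAN round trip; purely illustrative

def decide(pressure_kpa):
    """Trivial safety rule with a made-up threshold: vent above 900 kPa."""
    return "VENT" if pressure_kpa > 900 else "OK"

def act_locally(reading):
    start = time.perf_counter()
    action = decide(reading)
    return action, time.perf_counter() - start

def act_via_cloud(reading):
    start = time.perf_counter()
    time.sleep(CLOUD_RTT_S)   # stand-in for shipping the reading over the WAN and back
    action = decide(reading)
    return action, time.perf_counter() - start

if __name__ == "__main__":
    reading = 950   # kPa, over the made-up threshold
    for label, act in (("local", act_locally), ("via cloud", act_via_cloud)):
        action, elapsed = act(reading)
        print(f"{label:>9}: {action} in {elapsed * 1000:.1f} ms")
```

The exact numbers don't matter; the point is that any decision with a deadline shorter than a WAN round trip, plus its jitter and outages, has to be made at the edge.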
The kicker — the total lifecycle cost of data
However, beyond those technical reasons there's also the consideration of the total lifecycle cost of data. People starting with "Pi and Sky" (a Raspberry Pi talking straight to a public cloud) often realize that chatty IoT devices hitting public cloud APIs can get super expensive. And on top of that, you then must pay egress charges to get your own data back.
A few years back, Wikibon published a study of a simulated wind farm 200 miles away from the cloud, promoting a balanced edge-plus-cloud approach. The results showed a reduction in total operating cost of over 60%, assuming a 95% reduction in traffic passed over the network thanks to edge processing.
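To see why that kind of number is plausible, here's a hedged back-of-envelope in Python. Every rate and volume below is purely illustrative (assumed message prices, egress rates and fleet size, not actual cloud pricing and not Wikibon's inputs); the point is only the shape of the arithmetic when an edge tier cuts upstream traffic by 95%.

```python
# Purely illustrative numbers -- not actual cloud pricing and not the Wikibon figures.
DEVICES = 1_000
MSGS_PER_DEVICE_PER_DAY = 8_640          # one message every 10 seconds
MSG_PRICE = 0.50 / 1_000_000             # assumed $ per message ingested
MSG_SIZE_GB = 1_024 / 1e9                # ~1 KB per message
EGRESS_PRICE_PER_GB = 0.09               # assumed $ per GB to pull your own data back out
EDGE_TRAFFIC_REDUCTION = 0.95            # the traffic cut assumed in the edge scenario

def monthly_transport_cost(traffic_fraction):
    msgs = DEVICES * MSGS_PER_DEVICE_PER_DAY * 30 * traffic_fraction
    ingest = msgs * MSG_PRICE
    egress = msgs * MSG_SIZE_GB * EGRESS_PRICE_PER_GB   # assume everything gets pulled back down
    return ingest + egress

cloud_only = monthly_transport_cost(1.0)
with_edge = monthly_transport_cost(1.0 - EDGE_TRAFFIC_REDUCTION)
print(f"cloud-only: ${cloud_only:,.2f}/month")
print(f"with edge : ${with_edge:,.2f}/month")
print(f"savings   : {100 * (1 - with_edge / cloud_only):.0f}% on data movement alone")
```

Note that this only counts data movement; it ignores the cost of buying and running the edge tier itself, which is why a full accounting like Wikibon's lands at 60%-plus rather than 95%.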
Many edges
Of course, it’s not just about edge gateways. There are many different edges:
- To a telco, the "edge" is the bottom of their cell towers. You might have noticed that MEC originally stood for mobile edge computing but was recently changed to multi-access edge computing, and for good reason, because it's not just about mobile devices. 5G has a key tie-in here, and of course I like MEC because it helps serve up my cat videos faster!
- To an ISP or content delivery network provider, the edge is their data centers on key internet hubs — they might call this the “cloud edge.”
- Of course, there are also traditional on-premises data centers, plus we're seeing an ever-increasing rise of micro-modular data centers that bring more server-class compute closer to the producers and consumers of data. Then come more localized servers, including hyper-converged infrastructure and edge gateways sitting immediately upstream of sensors and control systems. All different types of edges.
- And to an OT professional, the edge means the controllers and field devices (e.g., sensors and actuators) that gather data from the physical world and run their processes.
In effect, the location of edge computing depends on context; the net of it is that you move compute as close as is both necessary and feasible to the users and devices that need it.
The fog versus the cloud
And to ensure that my buzzword count is solid in this post, there's the term fog, which is really all the edges combined with the networks in between — effectively everything but the cloud, or "the things-to-cloud continuum."
The bottom line is that regardless of how we label things, we need scalable systems for these distributed computing resources to work together along with public, private and hybrid clouds. There are also no hard and fast rules as to who owns which environment between OT, IT or otherwise — we just need to make sure technologies meet needs in all cases.
Watch out for my next blog, which talks about ways to get past the AOL stage to advanced-class IoT. In the meantime, I'd love to hear your comments and questions.
Keep in touch. Follow me on Twitter @defshepherd and @DellEMCOEM, and join our LinkedIn OEM & IoT Solutions Showcase page here.