What is edge computing and why does it matter?

19 October, 2021

miraworks

The explosive growth of internet-connected devices (the IoT), along with new applications that require real-time computing power, continues to drive the adoption of edge-computing systems.

While the early goal of edge computing was to reduce the bandwidth costs of moving ever-growing volumes of IoT-generated data over long distances, it is the rise of real-time applications that need processing at the edge that is now driving the technology forward.

For those who may be new to the topic, edge computing is a part of a distributed computing topology in which information processing is located close to the edge—where things and people produce or consume that information.


There are as many different edge use cases as there are users – everyone’s arrangement will be different – but several industries have had an early lead in edge computing. Manufacturers and heavy industry use edge hardware as an enabler for delay-intolerant applications, keeping the processing power for things like automated coordination of heavy machinery on a factory floor close to where it’s needed. The edge also provides a way for those companies to integrate IoT applications like predictive maintenance close to the machines.
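
To make the factory-floor example a little more concrete, here is a minimal Python sketch of the kind of check a predictive-maintenance application might run on an edge node, so that only anomalies, rather than raw sensor streams, leave the floor. The window size, threshold and alert helper are illustrative assumptions, not any particular vendor's API.

```python
# Minimal, illustrative sketch of an edge-side predictive-maintenance check.
# The window size, threshold and alert helper are assumptions for the example,
# not any particular vendor's API.

from statistics import mean, stdev

def is_anomalous(history, latest, z_threshold=3.0):
    """Flag a vibration reading that deviates strongly from recent history."""
    if len(history) < 10:
        return False  # not enough history to judge yet
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return False
    return abs(latest - mu) / sigma > z_threshold

def send_alert_upstream(sample):
    # Stand-in for a call to the plant's backend; only anomalies leave the floor.
    print(f"ALERT: abnormal vibration reading {sample:.2f}")

def process_sample(history, sample):
    """Runs locally on the edge node, close to the machine producing the data."""
    if is_anomalous(history, sample):
        send_alert_upstream(sample)
    history.append(sample)
    return history[-1000:]  # keep only a bounded window of recent readings
```
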
Other use cases present different challenges entirely. Retailers can use edge nodes as an in-store clearinghouse for a host of different functionalities, tying point-of-sale data together with targeted promotions, tracking foot traffic and more for a unified store management application.

The physical architecture of the edge can be complicated, but the basic idea is that client devices connect to a nearby edge module for more responsive processing and smoother operations.
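
As a toy illustration of that idea, and not any specific platform's API, the sketch below has a client probe a few candidate endpoints and talk to whichever one answers fastest; the hostnames are invented for the example.

```python
# Toy illustration of the basic idea: a client probes candidate endpoints and
# uses whichever responds fastest. The hostnames below are made up.

import socket
import time

CANDIDATES = ["edge-a.local", "edge-b.local", "cloud.example.com"]

def probe(host, port=443, timeout=1.0):
    """Return the time needed to open a TCP connection, or None on failure."""
    start = time.monotonic()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return time.monotonic() - start
    except OSError:
        return None

def pick_nearest(hosts):
    """Choose the reachable host with the lowest connection latency."""
    timings = {h: probe(h) for h in hosts}
    reachable = {h: t for h, t in timings.items() if t is not None}
    return min(reachable, key=reachable.get) if reachable else None

if __name__ == "__main__":
    print("Nearest node:", pick_nearest(CANDIDATES))
```

Real deployments typically rely on DNS, anycast routing or an orchestration layer rather than client-side probing, but the principle of routing work to the nearest responsive node is the same.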

Benefits and challenges

The biggest benefit of edge computing is the ability to process and store data faster, making for more efficient real-time applications that are critical to companies. Before edge computing, a smartphone scanning a person’s face for facial recognition would need to run the facial recognition algorithm through a cloud-based service, which would significantly slow processing time. With an edge computing model, the algorithm can run locally on an edge server or gateway, or even on the smartphone itself given the increasing power of smartphones. In addition, cost savings can be a driver for deploying edge computing.
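
A minimal sketch of that edge-first pattern, assuming hypothetical endpoint URLs, could look like the following: the device prefers a nearby edge gateway and falls back to the cloud service only when the edge is unreachable.

```python
# Hedged sketch of the pattern described above: prefer a local edge endpoint for
# inference and fall back to a cloud service only if the edge is unreachable.
# Both URLs are hypothetical placeholders.

import urllib.error
import urllib.request

EDGE_ENDPOINT = "http://edge-gateway.local:8080/recognize"   # assumed local gateway
CLOUD_ENDPOINT = "https://api.example.com/recognize"         # assumed cloud service

def recognize(image_bytes, timeout=2.0):
    """Send the image to the closest available recognition service."""
    for endpoint in (EDGE_ENDPOINT, CLOUD_ENDPOINT):
        request = urllib.request.Request(
            endpoint,
            data=image_bytes,
            headers={"Content-Type": "application/octet-stream"},
        )
        try:
            with urllib.request.urlopen(request, timeout=timeout) as response:
                return response.read()  # e.g. a JSON match result
        except (urllib.error.URLError, OSError):
            continue  # this tier is unreachable; try the next one
    raise RuntimeError("no recognition service reachable")
```
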

However, from a security standpoint, data at the edge can be troublesome, especially when it’s being handled by different devices that might not be as secure as centralized or cloud-based systems. Furthermore, differing device requirements for processing power, electricity and network connectivity can have an impact on the reliability of an edge device.


It should also be mentioned that 5G wireless carriers have begun rolling out licensed edge services as an even less hands-on option than managed hardware. The idea here is to have edge nodes live virtually, using 5G’s network slicing feature to carve out some spectrum for instant, no-installation-required connectivity.

It’s clear that while the initial goal for edge computing was to reduce bandwidth costs for IoT devices over long distances, the growth of real-time applications that require local processing and storage capabilities will continue to drive the technology forward over the coming years.