Edge Computing — Why Scaling Matters?

Alireza Ghods
Published in NATIX · Feb 4, 2020

NVIDIA Jetson Edge Modules (photo: NVIDIA)

In my previous post, I covered the Cloud limitations that led to the emergence of Edge computing solutions.

Although performing the majority of computation tasks at the edge of the network can bring many benefits, it also comes with its own limitations. For example, Edge devices are typically equipped with limited computing capacity, which results in higher computing latency.

To address the computational limitations of Edge devices, current systems use various scaling approaches.

Vertical scaling, a.k.a. Cloud Edge architecture

Vertical scaling is the most commonly used approach (also seen in commercial solutions): it connects Edge devices to Cloud infrastructure, which is why it is also known as Cloud Edge architecture. In vertical scaling architectures, small resource pools are located at strategically selected edge locations (e.g. Cloudlets and Fogs). Here, the challenge for Edge computing is to determine the ideal trade-off between computing latency and transmission latency. This necessitates an optimal task offloading scheme that decides whether a data processing task should be performed locally (on the Edge device), offloaded to the Fog/Cloudlet servers, or further offloaded to the remote Cloud servers.
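To make this trade-off more tangible, here is a minimal sketch in Python of a naive offloading policy that simply picks the execution site with the lowest estimated total latency. This is only my own illustration of the idea, not an existing offloading framework; all site names, capacities, bandwidths, and latencies below are assumed numbers.

```python
# Illustrative sketch of a naive offloading decision: pick the site with the
# lowest estimated total latency. All values and names are assumptions.

from dataclasses import dataclass


@dataclass
class Site:
    name: str
    compute_gflops: float  # assumed effective compute capacity
    uplink_mbps: float     # assumed bandwidth from the Edge device to this site
    rtt_ms: float          # assumed round-trip network latency


def total_latency_ms(site: Site, task_gflop: float, input_mb: float) -> float:
    """Estimated latency = transmission time + network round trip + computation time."""
    transmit_ms = (input_mb * 8 / site.uplink_mbps) * 1000 if site.uplink_mbps else 0.0
    compute_ms = (task_gflop / site.compute_gflops) * 1000
    return transmit_ms + site.rtt_ms + compute_ms


def choose_site(sites: list[Site], task_gflop: float, input_mb: float) -> Site:
    return min(sites, key=lambda s: total_latency_ms(s, task_gflop, input_mb))


# Hypothetical numbers: a local Edge device, a nearby Cloudlet/Fog node, and a remote Cloud.
sites = [
    Site("edge-device", compute_gflops=50,   uplink_mbps=0,   rtt_ms=0),
    Site("cloudlet",    compute_gflops=500,  uplink_mbps=100, rtt_ms=10),
    Site("cloud",       compute_gflops=5000, uplink_mbps=50,  rtt_ms=60),
]

best = choose_site(sites, task_gflop=200, input_mb=5)
print(best.name, round(total_latency_ms(best, 200, 5), 1), "ms")
```

With these assumed numbers the Cloudlet wins: the remote Cloud computes fastest but loses more time shipping the input over a slower link, while the Edge device avoids transmission entirely but takes far longer to compute.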

Although beneficial, vertical scaling introduces a high dependency on the Cloud and, in fact, defeats the purpose of Edge computing. To give you a better overview, let us have a closer look at how this architecture looks for computer vision applications. Equipping a camera with a GPU- or FPGA-powered Edge device enables it to run deep learning algorithms for image processing in real time. However, to circumvent the computational limitation of Edge devices, the few emerging solutions that exist (e.g. NVIDIA or Azure IoT Edge) collect the video data from the cameras, send it to the Cloud, train the various event-detection deep learning models there, and ship the models to the relevant Edge devices for execution. Apart from the fact that sending this data through internet pipelines introduces unnecessary bandwidth costs, it wastes energy, adds unwanted latency, and endangers the security and privacy of one's sensitive data.
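To give a feel for the bandwidth argument, here is a quick back-of-envelope calculation. All figures in it are my own assumptions for illustration, not measurements; it simply compares shipping raw video to the Cloud with sending only inference results off the device.

```python
# Back-of-envelope sketch (assumed numbers, not measurements): monthly upstream
# traffic when raw camera video is shipped to the Cloud versus when only
# inference results (e.g. detected events) leave the Edge device.

CAMERAS = 100
VIDEO_MBPS = 4             # assumed bitrate of one compressed 1080p stream
EVENT_BYTES_PER_SEC = 200  # assumed size of detection metadata per camera
SECONDS_PER_MONTH = 30 * 24 * 3600

raw_tb = CAMERAS * VIDEO_MBPS / 8 * SECONDS_PER_MONTH / 1e6  # MB/s -> TB/month
events_gb = CAMERAS * EVENT_BYTES_PER_SEC * SECONDS_PER_MONTH / 1e9

print(f"Raw video to the Cloud: ~{raw_tb:.0f} TB/month")
print(f"Edge inference results only: ~{events_gb:.1f} GB/month")
```

Even with these modest assumptions, the raw-video approach moves on the order of a hundred terabytes per month for a hundred cameras, while keeping inference at the Edge shrinks the upstream traffic to a few tens of gigabytes of metadata.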

Horizontal scaling across the Edge network

A rather new alternative to vertical scaling is horizontal scaling, which refers to utilizing resources across devices at the Edge/Fog level by pooling them on demand to create computing systems of different capabilities at very low cost. To the best of our knowledge, there is currently no mature solution using horizontal scaling approaches. Although new, such an architecture can provide new benefits such as

a) Lower transmission latency, as more of the computation is performed in close proximity to the data, at the Edge level, at all times.

b) Extreme ad hoc infrastructure, as it enables computing infrastructure to be created on demand with minimal pre-existing infrastructure.

c) Auto-scalability, as the amount of pooled resources is proportional to the number of participants.

d) Lower cost due to better utilization of otherwise unused computing resources and removal of Edge-Cloud communication costs.

To this end, Opportunistic Edge Computing (OEC) is one example of such a scheme, introduced in mid-2018. The computing resources used by this framework are owned in a distributed manner and leased to the network through short-term contracts.
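To illustrate the general idea, here is a minimal sketch of what opportunistic pooling with short-term leases could look like. It is not the actual OEC implementation, and every class and field name in it is hypothetical.

```python
# Minimal sketch of opportunistic resource pooling with short-term leases.
# My own illustration of the general idea, not the OEC framework itself;
# all names are hypothetical.

import time
from dataclasses import dataclass, field


@dataclass
class Lease:
    device_id: str
    cpu_cores: int
    expires_at: float  # leases are short-term: resources leave the pool when they expire


@dataclass
class EdgePool:
    leases: dict = field(default_factory=dict)

    def join(self, device_id: str, cpu_cores: int, duration_s: float) -> None:
        """A participant offers its spare capacity to the pool for a limited time."""
        self.leases[device_id] = Lease(device_id, cpu_cores, time.time() + duration_s)

    def leave(self, device_id: str) -> None:
        self.leases.pop(device_id, None)

    def capacity(self) -> int:
        """Drop expired leases, then report the currently pooled capacity."""
        now = time.time()
        self.leases = {d: l for d, l in self.leases.items() if l.expires_at > now}
        return sum(l.cpu_cores for l in self.leases.values())


pool = EdgePool()
pool.join("camera-gateway-1", cpu_cores=4, duration_s=600)
pool.join("parked-vehicle-7", cpu_cores=8, duration_s=120)
print("Available cores:", pool.capacity())  # grows and shrinks with participation
```

The pooled capacity grows and shrinks automatically with the number of participants, which is exactly the auto-scalability property listed above.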

Horizontal scaling approaches such as OEC can, in fact, have many benefits for IoT; however, there are still ongoing challenges that need to be addressed before such a framework is ready for large-scale adoption.

In my next article, I will try to address OEC in greater depth and look into the challenges that exist for the wide adoption of such a framework.

DISCLAIMER: This post reflects only the author's personal opinion, not that of any organization. This is not official advice. The author is not responsible for any decisions that readers choose to make.
