What is Edge Intelligence? (Pt.2) — A deep dive

Zanita Rahimi · Published in DataReply · Nov 29, 2022

Article co-author: Antonio Di Turi

Intro

In the previous article, we talked about Edge Intelligence, presented the different architectures that led to the current state of the art of Edge Computing, and explained use cases from real-life scenarios.

In this follow-up article, we aim to deliver a fuller picture of Edge Intelligence via a deeper technical dive into the following topics:

  • Advantages of Edge Computing
  • Challenges and Limitations
  • Cost Aspects

Advantages of Edge Computing

The advantages of Edge Computing are achieved mainly by the proximity of the sensors to the computing element of the architecture. Being able to compute closer to the Edge is an advantage from various points of view such as real-time decision-making, lower latency, reduction in costs and energy demand, and scalability.

In the following sections, we are going to address each of these points separately.

Advantages of Edge Computing (Icons from Flaticon)

Accelerated decision-making

The primary goal of Edge Computing is to reduce the time required to transfer large amounts of data. Analyzing data at the Edge makes it possible to generate faster feedback, thus accelerating the decision-making process.

Low latency

At the Edge, all processing and storage happen close to where data is generated. This reduces the delay in the communication process and eliminates the need to transfer data from endpoints to the cloud and back again. The resulting time savings can often be measured in milliseconds.

High Energy Efficiency

Global warming has been a concern for a couple of decades now, and, to no one's surprise, the Oxford Word of the Year 2019 was "climate emergency"; green and sustainable solutions are becoming more and more popular and necessary. Tech experts are concerned about the increasing energy consumption caused by the exponential growth of technology, so solutions that address this problem are preferable.

Edge Computing can be considered a green technology since it aims at lowering the devices’ energy demand.

How?

  • By reducing the total amount of data traversing the network (a minimal sketch follows this list)
  • By offloading computation tasks to edge servers or the cloud, which also prolongs the battery life of end devices
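As a minimal sketch of the first point, the snippet below aggregates raw readings at the edge and forwards only a compact summary upstream. The sensor, publishing function, and window size are illustrative assumptions, not part of any specific product.

```python
import statistics
import time

def summarize_window(readings):
    """Reduce a window of raw sensor values to a few summary statistics."""
    return {
        "count": len(readings),
        "mean": statistics.mean(readings),
        "min": min(readings),
        "max": max(readings),
        "timestamp": time.time(),
    }

def edge_loop(read_sensor, send_to_cloud, window_size=60):
    """Collect raw readings locally and transmit only a compact summary upstream."""
    window = []
    while True:
        window.append(read_sensor())  # raw samples never leave the device
        if len(window) >= window_size:
            # one small message instead of window_size raw readings
            send_to_cloud(summarize_window(window))
            window.clear()
        time.sleep(1)
```

Only the summary crosses the network, which is exactly where the bandwidth and energy savings come from.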

As an additional example (also mentioned in the previous article), we can consider the smart grid.

One of the best-known Smart Grid applications is analyzing energy consumption in order to prevent or limit excessive energy use.

Smart Grid Applications

Let’s draw a comparison of an Edge Data Center with a Cloud Data Center.

A Cloud Data Center requires energy for powering and cooling its servers (cooling alone accounts for roughly 40% of total energy consumption), whereas an Edge Data Center of roughly the same output and size may need less energy to keep the processing hardware cool (particularly relevant in cooler climates).

Reduction in operational costs

Storage costs have dropped significantly in the past decade, but the cost of moving data around might continue to climb as the volume of data spikes. By minimizing the quantity of data transferred to the cloud, Edge Computing can help keep expenses under control.

Scalability

Organizations can add edge devices as they expand their uses so that they are deploying and managing only what they need. Additionally, endpoint hardware and edge devices often cost less than adding more computing resources within a centralized data center, thereby making it more efficient for organizations to scale at the edge.

Challenges and limitations of Edge Computing

It is the sad truth, but there is no one-size-fits-all solution, and one must always trade off the pros and cons of a solution.

As consultants, we have seen this happen in real projects, where business needs, particular use cases, and additional last-minute requirements can change the way we design the final solution's architecture.

In the following sub-sections, some crucial questions are presented and answered:

  • Which systems/applications are suitable for Edge?
  • Which systems/applications are not suitable for Edge?
  • What are the Hardware Requirements?
  • What are the hidden challenges of Edge Computing?

Which systems/applications are suitable for Edge?

Real-time data processing, solutions in remote locations with limited or no internet connectivity, datasets that are too large and costly to send to the cloud, highly sensitive data, and strict data-protection laws are all use cases that make Edge worth considering.

Furthermore, workloads like predictive maintenance, safety alerts, or autonomous vehicles would not be possible in a pure Cloud environment due to their latency requirements, so Edge is the way to go. In other words, because of these latency requirements, Cloud Computing alone cannot bring AI to every person in every place.

Which systems/applications are not suitable for Edge?

Applications such as non-time-sensitive data processing and dynamic workloads are not meant for Edge. Other examples include:

  • Conventional applications — when millisecond latency is not needed, moving them to the Edge would not be cost-efficient.
  • Data storage — long-term processing and storage at the Edge would require specialized infrastructure, which is not practical.
  • Voice Assistants — current end devices are still insufficient to run models such as Siri and Cortana, which rely on cloud computing and cannot operate without a network connection.

What are the Hardware Requirements?

In addition to optimizing algorithms to meet hardware limits, researchers are focusing on shaping the hardware to meet demands such as

  1. Rugged — the hardware should be able to tolerate shocks, vibrations, extreme temperatures, and dust.
  2. Cableless — preventing vibration damage, reducing the risk of a loose connection since there are fewer moving parts.
  3. Connectivity options — if Wi-Fi is not available, make use of cellular 4G, LTE, and 5G connectivity via SIM module sockets.
  4. Storage Optimized — by using SSD silicon chips instead of HDD spinning disks, as they allow faster data transfer and storage.
  5. Efficient power usage:
    a. Cooling Systems — required because edge servers consume power and heat up as they analyze enormous data volumes.
    b. Analog Signals — analog-to-digital converters consume vast amounts of energy, and processing signals in the analog domain where possible can improve energy efficiency.
  6. Security — achieved through integrated cryptographic keys.

What are the hidden challenges?

Hidden challenges of Edge Computing

Resource scheduling

In a decentralized environment, a robust mechanism for detecting the appropriate nodes for specific workloads is essential. This cannot be stressed enough, since at a given time there might be a huge amount of edge devices available, and the intended purpose of these devices may vary greatly, leading to heterogeneity.

Therefore, robust benchmarking models need to be developed that make the availability and capabilities of the resources visible to the user, and that proactively detect node failures (and recover from them automatically).
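As a rough sketch of what such a selection mechanism might look like (the node attributes, resource thresholds, and latency-based ranking below are illustrative assumptions, not a standard scheduler), a scheduler could filter the available edge nodes by capability and pick the closest healthy one:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class EdgeNode:
    name: str
    cpu_free: float    # available CPU cores
    mem_free_mb: int   # available memory in MB
    latency_ms: float  # measured round-trip time to the data source
    healthy: bool      # result of the last health probe

def pick_node(nodes: List[EdgeNode], cpu_needed: float, mem_needed_mb: int) -> Optional[EdgeNode]:
    """Return the healthy node that meets the workload's resource needs
    with the lowest latency, or None if no node qualifies."""
    candidates = [
        n for n in nodes
        if n.healthy and n.cpu_free >= cpu_needed and n.mem_free_mb >= mem_needed_mb
    ]
    return min(candidates, key=lambda n: n.latency_ms) if candidates else None

# Example: two heterogeneous nodes; only the first can host the workload.
nodes = [EdgeNode("gateway-a", 2.0, 4096, 8.5, True),
         EdgeNode("gateway-b", 0.5, 1024, 3.0, True)]
print(pick_node(nodes, cpu_needed=1.0, mem_needed_mb=2048))  # -> gateway-a
```

A production scheduler would of course add continuous benchmarking, failure detection, and rescheduling on top of this basic matching step.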

Data transfer and knowledge sharing

Data transfer can be a problem in certain cases. For example:

  • Statically trained models cannot handle data and tasks in unfamiliar environments
  • Models trained in a decentralized learning approach use only local experience

Both cases lead to predictions that are not very satisfying.

Such concerns can be tackled by making use of knowledge sharing between different edge servers.

An edge server can send knowledge queries to its peers and, once it has received the required knowledge, respond to and perform the task. If there is insufficient accumulated knowledge, new knowledge primitives must be entered manually.
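A toy version of this query flow might look as follows; the `query_knowledge` peer API and the idea of callable knowledge primitives are illustrative assumptions, not an established protocol.

```python
def handle_task(task, local_knowledge, peers):
    """Serve a task from local knowledge, then from peer edge servers,
    and fall back to manual entry of a new knowledge primitive."""
    key = task["type"]
    if key in local_knowledge:
        return local_knowledge[key](task)     # serve from local experience

    for peer in peers:                        # send knowledge queries to peers
        answer = peer.query_knowledge(key)    # hypothetical peer API
        if answer is not None:
            local_knowledge[key] = answer     # cache the shared knowledge locally
            return answer(task)

    raise LookupError(
        f"No knowledge for task type '{key}': "
        "a new knowledge primitive must be entered manually."
    )
```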

Availability and security

The availability of resources relies on server and connection capacities to ensure constant service delivery. If nodes need to be publicly accessible, security precautions must be taken: a router used to manage internet traffic must not be compromised when it also serves as an edge node, and the technology used on the edge node needs appropriate security features to be deemed safe.

Heterogeneity

Edge computing must provide scalability for different platforms with different numbers of users. Since edge devices use different access technologies (3G, 4G, 5G, Wi-Fi, and WiMAX), the challenges of interfacing these technologies should be considered when planning Edge architectures.

Imagine a device with multiple sensors, such as GPS, a light sensor, and a touch screen: the data it collects are heterogeneous and come in different formats. Integrating them and using them as input to machine learning algorithms is very important. One possible solution is to build a multimodal machine-learning model that can use different forms of data as input, as sketched below.
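A minimal sketch of such early fusion, assuming the GPS, light-sensor, and touch readings have already been converted into numeric feature vectors (the shapes and labels below are placeholders, not real sensor data):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical pre-extracted features for 200 samples, one block per modality.
gps_features = np.random.rand(200, 2)      # e.g. latitude/longitude deltas
light_features = np.random.rand(200, 1)    # e.g. ambient light level
touch_features = np.random.rand(200, 4)    # e.g. touch gesture descriptors
labels = np.random.randint(0, 2, size=200) # placeholder activity labels

# Early fusion: concatenate the modalities, scale them, and train one classifier.
X = np.hstack([gps_features, light_features, touch_features])
model = make_pipeline(StandardScaler(), LogisticRegression())
model.fit(X, labels)
print(model.predict(X[:5]))
```

More elaborate multimodal architectures learn a separate encoder per modality and fuse their embeddings, but the concatenation approach above already illustrates the core idea of feeding heterogeneous inputs into one model.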

Costs of Edge Computing

Finally, let's analyze costs. Technical discussions are always fascinating, and we tend to reach for the most popular or interesting technologies, but in the end there is always a budget that needs to be respected, and lower-cost solutions are often a priority.

In this final section, we are going to analyze the different cost aspects of Edge Intelligence and we are going to make a comparison with an alternative solution: Cloud Intelligence.

Edge Intelligence cost aspects

Key factors in calculating the overall cost of an Edge Computing solution would be the separate costs for:

  • Infrastructure (sensors, computing devices, network connectivity)
  • Application (own solution, pre-built app, custom app)
  • Management (managing and orchestrating resources/services).

There are varying pricing models for accessing Edge Computing, which can be either cloud-like pricing strategies (infrastructure-as-a-service, platform-as-a-service, software-as-a-service, etc.), or distributed edge/cloud pricing models (computation-based, licensing-based, etc.).

It always depends on the use case in question and the trade-offs one is willing to make regarding the cost/performance of the solution. For example, the following techniques improve performance but can incur additional costs (a rough cost comparison follows the list):

  • Mirroring — caching data at a mirror reduces delay, but increases cloud storage costs.
  • Parallel execution — improves execution time, but increases power consumption and hardware costs.
  • Pre-installations — providing necessary applications, system boot-up scripts, and essential software packages in advance improves execution initialization delay and runtime data transmission, but increases resource consumption and maintenance overhead.
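To make these trade-offs tangible, a back-of-envelope comparison can simply total the three cost buckets listed earlier plus data-transfer charges. Every figure below is a placeholder for illustration, not a real quote from any provider.

```python
def monthly_cost(infrastructure, application, management,
                 data_gb_to_cloud, egress_per_gb):
    """Sum the three cost buckets plus data-transfer charges (placeholder figures)."""
    return infrastructure + application + management + data_gb_to_cloud * egress_per_gb

# Cloud-only variant: all raw data is shipped upstream.
cloud_only = monthly_cost(infrastructure=500, application=300, management=200,
                          data_gb_to_cloud=10_000, egress_per_gb=0.09)

# Edge variant: local preprocessing sends only a fraction of the raw volume,
# at the price of extra on-site hardware and management effort.
edge = monthly_cost(infrastructure=900, application=300, management=350,
                    data_gb_to_cloud=500, egress_per_gb=0.09)

print(f"cloud-only: ${cloud_only:,.0f}/month, edge: ${edge:,.0f}/month")
```

The point of such a model is not the exact numbers but the structure: the edge option trades higher infrastructure and management costs against a much smaller data-transfer bill.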

Edge Intelligence vs Cloud Intelligence costs

Edge Intelligence is an emerging field, and the comparisons with Cloud solutions are very interesting from both the performance and cost perspectives.

Edge Intelligence Costs vs Cloud Intelligence Costs

Cloud computing offers reduced server hardware costs, since there is no on-premises hardware to maintain and manage. Conversely, connectivity, data movement, bandwidth, and latency costs are quite high.

On the other hand, Edge Computing reduces WAN costs by deciding which data to store locally and which to send to the cloud. It doesn't eliminate the need for the cloud; instead, it optimizes the flow of data and therefore reduces operating costs. Furthermore, it offers lower latency and bandwidth requirements and increased performance.

It doesn’t have to be either Edge Intelligence or Cloud Intelligence; we can have hybrid solutions and use the benefits of both worlds.

Based on the calculations done by Wikibon for a wind-farm solution with security cameras and other sensors, a hybrid approach could decrease the costs of managing and processing data by up to 36%.

Conclusion

This was a deep-dive article about Edge Intelligence. With this mini-series, we strove to deliver an end-to-end perspective on this exciting emerging tech trend.

Feel free to contact us for questions, suggestions, or collaboration possibilities.

Your input may become our next article.

Stay tuned!
