What Is Edge Computing?

Vikram M. · Published in featurepreneur · Oct 8, 2022

Edge computing is a distributed IT architecture that moves computing resources out of centralized clouds and data centers and closer to the source of the data. The main goals of edge computing are to reduce latency when processing data and to save network costs.

But before we get there, do you remember the first huge, bulky computers?

As devices grew smaller over the years, their computing and processing power grew exponentially. While data warehouses and server farms were once considered the ultimate choice for computing speed, the focus has quickly shifted to the cloud, or "offsite storage". Companies like Netflix, Spotify, and other SaaS (Software as a Service) providers have built their entire business models on cloud computing. However, cloud computing comes with a number of drawbacks. Its biggest problem is latency, caused by the distance between users and the data centers that host cloud services. This has led to the development of a new technology called edge computing, which moves computation closer to end users.

Hence, edge computing is used to reduce that latency.

But how does it work?

Edge computing works by capturing and processing the information as close to the source of the data or desired event as possible. It relies on sensors, computing devices, and machinery to collect data and feed it to edge servers or the cloud. Depending on the desired task and outcome, this data might feed analytics and machine learning systems, deliver automation capabilities or offer visibility into the current state of a device, system, or product.
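The idea of processing data close to its source can be sketched in a few lines of Python. This is an illustrative toy, not a real edge framework: the `EdgeNode` class and `aggregate` function are hypothetical names, and the "cloud upload" is simply a returned summary.

```python
# Sketch of an edge pipeline: raw sensor readings are aggregated
# locally, and only a compact summary would be forwarded upstream.

def aggregate(readings):
    """Reduce raw readings to the summary the cloud actually needs."""
    return {
        "count": len(readings),
        "min": min(readings),
        "max": max(readings),
        "mean": sum(readings) / len(readings),
    }

class EdgeNode:
    def __init__(self):
        self.buffer = []          # local storage at the edge

    def ingest(self, reading):
        self.buffer.append(reading)

    def flush(self):
        summary = aggregate(self.buffer)
        self.buffer = []
        return summary            # in practice: sent to the cloud

node = EdgeNode()
for temp in [21.0, 21.5, 22.1, 35.9]:
    node.ingest(temp)
print(node.flush())  # one small summary instead of every raw reading
```

The key point is that only the four-field summary leaves the edge node; the raw samples never cross the network.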

Benefits of Edge Computing

Edge computing has emerged as one of the most effective solutions to network problems associated with moving huge volumes of data generated in today’s world. Here are some of the most important benefits of edge computing:

1. Reduces Latency
Latency refers to the time required to transfer data between two points on a network. Large physical distances between these two points, coupled with network congestion, can cause delays. As edge computing brings the points closer together, latency drops dramatically.
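A back-of-the-envelope calculation shows why distance matters: light in optical fiber travels at roughly 200,000 km/s, so physical distance alone sets a hard floor on round-trip time, before any congestion is added. The numbers below are illustrative.

```python
# Propagation delay floor: signals in fiber cover about 200 km per ms.

FIBER_SPEED_KM_PER_MS = 200.0  # ~2/3 the speed of light in a vacuum

def round_trip_ms(distance_km):
    """Minimum round-trip time imposed by distance alone."""
    return 2 * distance_km / FIBER_SPEED_KM_PER_MS

print(round_trip_ms(4000))  # distant cloud region: 40.0 ms minimum
print(round_trip_ms(10))    # nearby edge server:    0.1 ms minimum
```

Real-world latency is higher (routing, queuing, processing), but the gap between a 4,000 km cloud region and a 10 km edge site cannot be engineered away at the far end.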

2. Saves Bandwidth
Bandwidth refers to the rate at which data is transferred on a network. As all networks have limited bandwidth, the volume of data that can be transferred, and the number of devices that can process it, is limited as well. By deploying data servers at the points where data is generated, edge computing allows many devices to operate over a much smaller and more efficient share of bandwidth.
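A rough comparison makes the savings concrete. All numbers below are made up for illustration: suppose 1,000 sensors each stream raw data at 64 kbps, versus an edge deployment that forwards only a dozen 2 KB event reports per sensor per hour.

```python
# Illustrative bandwidth comparison: raw streaming vs. edge filtering.

sensors = 1000
raw_rate_kbps = 64               # each sensor streaming raw samples

events_per_hour = 12             # anomalies the edge node reports
event_size_kb = 2                # kilobytes per event report

raw_total_kbps = sensors * raw_rate_kbps
# events/hour -> bits/second: KB * 8 bits/byte, spread over 3600 s
edge_total_kbps = sensors * events_per_hour * event_size_kb * 8 / 3600

print(raw_total_kbps)            # 64000 kbps of backhaul
print(round(edge_total_kbps, 1)) # ~53.3 kbps of backhaul
```

Under these assumptions the edge deployment needs roughly a thousandth of the backhaul bandwidth, since filtering happens before data ever touches the wide-area network.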

3. Reduces Congestion
Although the Internet has evolved over the years, the volume of data produced every day across billions of devices can cause high levels of congestion. Edge computing keeps storage and processing local, so far less traffic crosses the wider network, and local servers can still perform essential analytics in the event of a network outage.
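The outage-tolerance idea can be sketched as a node that keeps processing during a disconnect, buffers its results locally, and syncs the backlog once connectivity returns. The class and parameter names here are hypothetical stand-ins for real transport code.

```python
# Sketch: an edge node that rides out an outage by buffering
# results locally and draining the backlog when the link is back.

class ResilientEdgeNode:
    def __init__(self):
        self.pending = []            # local storage during outages

    def process(self, reading, network_up, send):
        result = reading * 2         # placeholder for real analytics
        if network_up:
            for item in self.pending:  # drain the backlog first
                send(item)
            self.pending.clear()
            send(result)
        else:
            self.pending.append(result)
        return result

sent = []
node = ResilientEdgeNode()
node.process(1, network_up=False, send=sent.append)  # outage: buffered
node.process(2, network_up=False, send=sent.append)  # still buffered
node.process(3, network_up=True, send=sent.append)   # back online: sync
print(sent)  # [2, 4, 6]
```

Analytics never stopped during the outage; only delivery was deferred, which is exactly the property a cloud-only design cannot offer.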

Drawbacks of Edge Computing

Although edge computing offers a number of benefits, it is still a fairly new technology and far from being foolproof. Here are some of the most significant drawbacks of edge computing:

1. Implementation Costs
Implementing an edge infrastructure in an organization can be both complex and expensive. It requires a clear scope and purpose before deployment, as well as additional equipment and resources to function.

2. Incomplete Data
Edge computing can only process the partial sets of information that are clearly defined during implementation. Because of this, companies may end up discarding valuable data and information.

3. Security
Since edge computing is a distributed system, ensuring adequate security can be challenging. There are risks involved in processing data outside a centrally secured network perimeter, and each new IoT device added at the edge widens the attack surface available to intruders.
