Pushing computing to the “edge”

Pratyush Choudhury
7 min read · Jul 31, 2019


Technology is already making small businesses more productive, large companies more competitive, and the public sector (including education and healthcare) more efficient. Tech spend is estimated to reach roughly 10% of GDP by 2030 (around $14 trillion). The remaining 90% of the economy, however, is also set to become digital-first and driven by software, fueled by the intelligent cloud, intelligent edge era.

You can find my piece on cloud computing here. In one sentence: cloud computing is the provisioning of IT resources (compute, networking, databases, security, etc.) over the internet.

Computing is getting embedded in the world: it’s ubiquitous and distributed (or at least that’s what they taught in CSO — 332 :P). Artificial intelligence is penetrating every experience in a deep way. Nor is the digital experience bound to one device anymore; it’s quickly becoming multi-sense and multi-device.

The hiring of software engineers is growing 11% faster in non-tech industries than in tech industries. The auto industry is a classic example: it is absorbing software engineers at three times the pace of mechanical engineers, and an average car from Ford Motors runs on 150 million lines of code. The amusing thing (in a pleasant way, of course) is that we have about a billion Windows users and roughly 2 billion smartphones, yet reputable sources project 50 billion connected devices by 2030. And of course, all of it will be connected to the cloud, which will become the world’s computer.

Let’s look at what makes it all possible.

The cloud is certainly the biggest technology trend of the day and age, but the cloud isn’t the final destination. The magic of a connected world will only reveal itself once we take the cloud to the edge with consistency. For example, Starbucks uses the edge to measure the water quality and the quality of the espresso in every espresso machine, enabling it to offer personalized coffee making at scale. I could go on and on about the different use cases, but I hope the rest of this piece helps you understand the concept of edge computing a bit better.

What is edge computing?

Edge computing is the idea of bringing computation as close to the source of the data as possible, which reduces both latency and bandwidth use. Put simply, edge computing means reducing the workload on the cloud: you shift computing work to local devices such as a user’s smartphone (a related approach, in which model training itself happens on-device, is called federated learning), an IoT device, or an edge server. Once compute capability sits at the edge, long-distance communication between clients and servers shrinks and the whole process becomes much more efficient.
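To make that concrete, here’s a minimal sketch in Python; the readings, threshold, and the upload callback are purely illustrative, not from any real system. It contrasts a cloud-only pipeline, which ships every raw reading upstream, with an edge-filtered one that uploads only a local summary plus any anomalies:

```python
import statistics

# Hypothetical scenario: a temperature sensor sampling once per second.

def cloud_only(readings, upload):
    """Naive pipeline: every raw sample goes over the network."""
    for r in readings:
        upload(r)  # one network call per sample

def edge_filtered(readings, upload, threshold=80.0):
    """Edge pipeline: aggregate locally, send one small payload."""
    summary = {
        "mean": statistics.mean(readings),
        "max": max(readings),
        "anomalies": [r for r in readings if r > threshold],
    }
    upload(summary)  # a single, much smaller network call

if __name__ == "__main__":
    samples = [21.3, 21.4, 21.2, 95.1, 21.5]  # one anomalous spike
    edge_filtered(samples, upload=print)      # stand-in for a real uploader
```

The computation itself is trivial; the point is where it runs. The device decides what is worth sending, and the cloud only ever sees the distilled result.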


What is the network edge?

For devices connected to the internet, the network edge is the point at which the device, or the local network containing it, communicates with the internet. The edge is a somewhat fuzzy term, though: a user’s PC or the processor inside an IoT camera can be considered the network edge, but so can the user’s router, ISP, or a local edge server. The important takeaway is that the edge of the network is geographically close to the device, unlike cloud servers, which can be located very far from the devices they communicate with.

An example of edge computing

Let’s consider the case of a smart city with dozens of high-definition IoT cameras installed for a variety of purposes like facial recognition and traffic monitoring. These cameras are dumb: they send all of their video to cloud servers as streaming data. The processing happens in the cloud, where a service detects motion, discards the clips with no activity, and saves to the database only the clips that contain some sort of activity (storage may be cheap, but why pay for what you don’t need?).

This puts a significant amount of stress on the city’s internet infrastructure, because streaming video to the cloud requires a lot of bandwidth. Processing all those streams in the cloud requires intensive computing as well.


What if the motion-detecting computation happened at the network edge instead of in the cloud? If each camera node could run the check on its own premises and send data to the cloud server only as and when needed, bandwidth use would drop significantly. It would also let the cloud communicate with a far larger number of cameras at the same cost, without auto-scaling. This is edge computing in its most elementary form.
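As a rough sketch of what that on-premises check could look like, here’s a frame-differencing loop in Python using OpenCV. Everything here is an assumption for illustration: the camera index, the thresholds, and the send_to_cloud placeholder are hypothetical, not part of any real smart-city system.

```python
import cv2

MOTION_THRESHOLD = 25      # per-pixel intensity change that counts as motion
MIN_CHANGED_PIXELS = 5000  # how many changed pixels count as "activity"

def has_motion(prev_gray, curr_gray):
    """Compare consecutive grayscale frames; True if enough pixels changed."""
    delta = cv2.absdiff(prev_gray, curr_gray)
    _, mask = cv2.threshold(delta, MOTION_THRESHOLD, 255, cv2.THRESH_BINARY)
    return cv2.countNonZero(mask) > MIN_CHANGED_PIXELS

def send_to_cloud(frame):
    # Hypothetical placeholder: in practice this would buffer a clip and
    # POST it to the cloud ingestion endpoint.
    print("activity detected, uploading")

cap = cv2.VideoCapture(0)  # local camera
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    if has_motion(prev_gray, gray):  # the decision happens at the edge
        send_to_cloud(frame)         # only interesting footage leaves the premises
    prev_gray = gray

cap.release()
```

Note that only the decision moves to the edge; the cloud still receives and stores whatever the camera judges interesting.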

What are the benefits of edge computing?

As the example above illustrates, moving compute to the edge minimizes bandwidth use and makes better use of server resources. Bandwidth and cloud resources are finite, and they cost money. With every household and office becoming equipped with smart cameras, printers, thermostats, and even toasters, Statista predicts that by 2025 there will be over 75 billion IoT devices installed worldwide. To support all those devices, significant amounts of computation will have to move to the edge; pushing work outward is what will let the technology scale to that many devices.
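A quick back-of-envelope calculation, with assumed numbers for the camera scenario above (a ~4 Mbit/s HD stream, 1,000 cameras, and activity in roughly 2% of the footage), shows the scale of the savings:

```python
# Illustrative numbers only.
STREAM_MBPS = 4.0        # one 1080p H.264 stream
CAMERAS = 1000
ACTIVE_FRACTION = 0.02   # share of footage that actually contains activity

raw_mbps = STREAM_MBPS * CAMERAS        # cloud-only: everything is streamed
edge_mbps = raw_mbps * ACTIVE_FRACTION  # edge: only active clips are uploaded

print(f"cloud-only:    {raw_mbps:,.0f} Mbit/s sustained")
print(f"edge-filtered: {edge_mbps:,.0f} Mbit/s ({ACTIVE_FRACTION:.0%} of raw)")
```

With these assumptions, a sustained 4 Gbit/s firehose shrinks to about 80 Mbit/s: a 50x reduction in what the city’s network and cloud ingestion have to carry.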

Another significant yet less obvious benefit of moving processes to the edge is reduced latency. Every exchange with a distant server adds delay. Imagine two coworkers in the same office chatting over an instant-messaging platform: each message may be routed out of the building to a cloud server in some region of the globe and back again before it appears on the recipient’s screen. If that process moved to the edge, with the company’s internal router and servers facilitating intra-office chats, the lag would shrink considerably.
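You can get a feel for the difference with a few lines of Python that time TCP connection setup to a nearby host versus a distant one. The hosts below are placeholders; substitute your own router or edge server and any remote site:

```python
import socket
import time

def rtt_ms(host, port=443, attempts=5):
    """Average time to open a TCP connection: a crude round-trip proxy."""
    total = 0.0
    for _ in range(attempts):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=3):
            pass
        total += time.perf_counter() - start
    return total / attempts * 1000

# A device on the local network typically answers in about a millisecond,
# while a cloud region on another continent can take 100 ms or more.
print(f"local edge: {rtt_ms('192.168.1.1', port=80):.1f} ms")  # placeholder router
print(f"cloud:      {rtt_ms('example.com'):.1f} ms")           # placeholder remote host
```

The absolute numbers will vary, but the gap between “same building” and “another continent” is exactly the latency that edge computing removes.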


Similarly, when users of any number of web applications hit processes that require communication with an external server, delays are inevitable. The duration and nature of these lags vary with the available bandwidth and the server’s location, but the more processes we bring to the edge, the more of this lag we can eliminate.

Moreover, edge computing serves as the foundation for new functionality that wasn’t previously feasible. For example, a company can use edge computing to process and analyze its data at the edge, making real-time analysis possible. For manufacturing companies, this will be the genesis of Industry 4.0.

A small recap — the key benefits of edge computing are:

  • Reduced latency
  • A decrease in the bandwidth used and the costs associated with it
  • A decrease in server resources and associated cost
  • Added capabilities and functionalities

What are the drawbacks of edge computing?

One drawback of edge computing is that it expands the attack surface and brings some liabilities with it. As more smart devices enter the mix, such as edge servers and IoT devices with capable built-in computers, malicious actors gain new opportunities to compromise them. And since most of the data these devices collect is streaming in nature, data security becomes absolutely paramount.

Another drawback of edge computing is the requirement for more local hardware. For example, while an IoT camera needs only a basic built-in computer to send its raw video to a web server, it requires a much more sophisticated computer, with more processing power, to run its own motion-detection algorithms. The ever-falling cost of hardware, though, will continue to make it cheaper to build smarter devices.

One way to completely mitigate the need for extra hardware is to take advantage of edge servers (more on this some other time).

Source: most of the claims above are supported with embedded links. However, some of the statistics and numbers discussed in the beginning have been taken from here.

Disclaimer: please note that I am not an expert and these are merely my opinions; don’t treat them as professional advice. Any actions you take should be backed by your own research and, where appropriate, professional guidance.

Hit the applause (👏) button if you’d like to encourage a 22-year-old writer. Please remember that you need to be logged in for your claps to count.

— — — — — — — — — — — — — — — — —

Views expressed here are my own and don’t reflect the views of any of my employers, present or past.


