Rise of the Edge

IoT and deep learning demand a new breed of computing.

Baidu’s autonomous car in testing on public roads in Beijing.

Imagine a day in the future when autonomous vehicles ply our roads. On the surface, these vehicles may bear a resemblance to the cars of today, but underneath they will be bristling with high-resolution sensors and will rely on state-of-the-art deep learning algorithms to maneuver through the world.

Sounds straightforward enough, but there is a problem. Barring an enormous breakthrough in communication systems, if we were to try to operate our autonomous cars through the current cloud computing infrastructure, the system would fail. The sheer volume and velocity of the data generated, terabytes per day per car by some industry estimates, would overload our networks, and what’s more, the latency and unreliability of remote computing would be dangerous. What would happen if the car lost connection to its brain in the cloud? What if immediate action were required to avoid an accident but the response time were too slow?

The logical solution to these problems is for autonomous vehicles to carry their brains on board. As Peter Levine of Andreessen Horowitz puts it, the car of the future will be “a data-center on wheels.”

As the number of devices and data streams surges, the cloud will be stretched beyond its limits.

These same problems will arise in many other applications as we begin to apply innovations in AI to the growing number of IoT devices. For now, many IoT devices rely on the cloud for storage and computation, but as these devices grow more sophisticated and their number climbs toward a projected 25 billion by 2020, the volume of data and the computational demand will exceed the limits of the cloud infrastructure. Furthermore, in many applications, such as fraud detection, insights are more valuable the faster you produce them, so latency poses a problem. Much like in our car example, cloud computing will not be a viable option for the AI-capable devices of the future. More and more computation will need to take place near the source of the data, at the edges of the network. The emerging paradigm that encompasses this shift is called edge computing.

The advantages of edge computing over cloud computing are not hard to grasp, but it is still unclear how far the pendulum will swing from cloud toward edge. Some claim the edge will be the demise of cloud computing. I foresee instead a strong symbiosis between the two, with cloud and edge cooperating by serving different computing purposes and handling different data types. The edge will act as the fast, immediate intelligence of the network, much like the reflexes of the human body. Meanwhile, the cloud will perform the slower, more deliberate analysis, like the human brain.

The human nervous system supports both local fast-action reflexes and slower central processing, similar to the different functions of edge and cloud computing.

Data will likewise be divided into two categories mirroring this division of fast and slow thinking. “Hot” data is time-sensitive: edge devices will process and respond to it immediately, and much of it will remain at the edge, discarded once its window of relevance has passed. “Cold” data lends itself to longer-term analytics, useful for optimizing edge algorithms and for generating long-range insights; it will be sent up to the cloud and stored for later use. In this future, a large function of the edge computing infrastructure will be performing data thinning: sorting the “hot” data from the “cold.” In turn, a large function of the cloud will be applying massive computational resources to the relevant data gathered from the edge in order to train new AI models, which are then deployed back out to the edge.
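To make the hot/cold split concrete, here is a minimal sketch in Python of what data thinning on an edge device could look like. The thresholds, field names, and helper functions are all illustrative assumptions, not a reference to any particular product: hot readings are acted on locally, while cold readings are batched for upload to the cloud.

```python
from collections import deque

# Hypothetical thresholds; real values would depend on the device and workload.
LATENCY_BUDGET_S = 0.05   # readings needing action within 50 ms count as "hot"
COLD_BATCH_SIZE = 4       # upload "cold" data in batches (kept tiny for the demo)

cold_buffer = deque()

def is_hot(reading):
    """Time-sensitive data the edge must act on immediately (e.g. obstacle proximity)."""
    return reading["deadline_s"] <= LATENCY_BUDGET_S

def act_locally(reading):
    # Stand-in for the edge device's fast "reflex" response (braking, alerting, etc.).
    print(f"edge action on {reading['sensor']}: value={reading['value']}")

def upload_to_cloud(batch):
    # Stand-in for shipping cold data to long-term cloud storage for analytics/training.
    print(f"uploading {len(batch)} cold records to the cloud")

def thin(reading):
    """Data thinning: handle hot data at the edge, queue cold data for the cloud."""
    if is_hot(reading):
        act_locally(reading)           # hot data is consumed at the edge...
    else:
        cold_buffer.append(reading)    # ...cold data is batched for later upload
        if len(cold_buffer) >= COLD_BATCH_SIZE:
            upload_to_cloud([cold_buffer.popleft() for _ in range(COLD_BATCH_SIZE)])

# Example stream: one urgent reading, then routine telemetry.
thin({"sensor": "lidar", "value": 0.9, "deadline_s": 0.01})
for i in range(4):
    thin({"sensor": "battery", "value": 0.8 - i * 0.01, "deadline_s": 60.0})
```

Note that in this scheme only the small hot fraction of the stream ever demands an instant response; everything else can tolerate batching, which is exactly what keeps the network load manageable.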

The architecture of edge computing is also up for debate. A consortium has been convened to establish standards and develop the necessary technology, but it is still early days. While it may make sense for an autonomous car to carry its computers on board, not every device will have the space required, nor will every device face computational demands of the same magnitude.

Furthermore, most devices will only require computation while they are actively being used, meaning that compute at the edge could be shared. For example, while an autonomous car sits idle, it could lend its compute to nearby devices that need it. Another possible solution comes in the form of new router designs that companies like HP, Dell, and Cisco are currently working on. These routers will be mini-servers in their own right: storing important data locally for faster access, providing computational power on demand to the devices that require it, and performing the aforementioned data thinning. If this comes to pass, we may see the emergence of local mini-cloud networks, what some have begun to call cloudlet or fog computing.
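One way to picture how a device would navigate this layered world is as a latency-aware offloading decision. The sketch below, with made-up target names and round-trip times, shows a device preferring its own hardware, then a nearby cloudlet router, and only then the distant cloud; none of these figures come from a real deployment.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class ComputeTarget:
    name: str
    rtt_ms: float     # measured round-trip time to this target
    available: bool   # whether the target currently has spare capacity

def choose_target(targets: List[ComputeTarget],
                  deadline_ms: float) -> Optional[ComputeTarget]:
    """Pick the lowest-latency available target that can meet the task's deadline."""
    candidates = [t for t in targets if t.available and t.rtt_ms < deadline_ms]
    return min(candidates, key=lambda t: t.rtt_ms) if candidates else None

# Illustrative numbers only: on-board compute is busy, the cloudlet is close,
# the remote cloud is far away.
targets = [
    ComputeTarget("onboard", rtt_ms=1.0, available=False),
    ComputeTarget("cloudlet-router", rtt_ms=10.0, available=True),
    ComputeTarget("cloud", rtt_ms=120.0, available=True),
]

target = choose_target(targets, deadline_ms=50.0)
print("offloading to:", target.name if target else "no viable target")
```

Here the 50 ms deadline rules out the cloud entirely, which is the crux of the fog computing argument: for time-critical work, a nearby cloudlet is not merely faster but the only option that qualifies.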

Asia continues to be a leader in electronics.

Here at Zeroth, we recognize that many large-scale enterprise companies are getting involved in edge computing, but we believe there is ample room for startups to innovate in this space. Furthermore, we think Asia is uniquely positioned to succeed at the edge. Asia dominates the global electronics manufacturing market and has a depth of talent in electronic device engineering, a discipline that has fallen out of favor in other markets.

Zeroth is familiar with several fascinating edge computing startups in Asia, which we’ll cover in an upcoming post. There we’ll discuss one of the most exciting frontiers of edge computing, model miniaturization for embedding artificial intelligence directly on devices, and how these startups are tackling the problem.

— Nick White, AI Specialist at Zeroth.ai

Twitter: @nickwhite___, https://twitter.com/nickwhite___

Medium: @nickikwhite_5051, https://medium.com/@nickikwhite_5051