I recently had the pleasure of sitting down with the smart folks at the GSMA to share my take on how edge computing will evolve for their Global Mobile Radar report (you can find a copy of the material from the report below). Edge computing and its step-sibling, “fog computing”, are shrouded in the excitement and mystery of a new frontier, where trillions in new addressable markets will open up, and makers of digital picks and shovels are busy lighting the hearths of their software-defined forges to design and build the edge infrastructure of tomorrow. Bold predictions are being made about the disruptive new applications that edge computing will unleash on willing and supplicant enterprises and consumers, who will lap it up unquestioningly with the hyper-energetic enthusiasm of your average, lovable, but slightly dim Labrador retriever.
It’s my rather controversial view that the edge will, over the longer term (10+ years), eclipse what we call the cloud: the giant centralized hyper-scale data centers, which offer a progressive set of abstractions as a service for running applications and storing data. These hyper-scale data centers are built on a simple zero-sum premise: whatever money you have in your budget, Amazon Web Services (AWS) will take it all by trading off your desire for simplicity against cost. Jokes about AWS aside: the cloud is becoming a horrible place to compute. It’s too far away from the most interesting things that are happening around us. It’s too slow to react to change in the real world. It’s too picky about what data structures it wants to eat, and needs everything to be batched into giant balls of columnar data for it to map-reduce on.
The edge, on the other hand, is right next to where the action is. It’s already in the densest urban areas, where 83% of humans and their devices live and where commerce happens. It’s there in metro data centers, cellular base station controllers, and radio network controllers, milliseconds away from your phone, drone, car, game console, and 3D-printing robot. The challenge is that the abstractions that serve us so well in building and running applications on the centralized clouds don’t work well (or at all) on the edge.
Edge computing is a distributed data problem (and not simply a distributed computing problem that spinning up containers on the edge will solve). Data (or state, to be more precise) is a pain in the *** (PITA), and our most advanced technologies for storing and processing data are built around models and architectures that centralize it. If you can take your data (state) and make giant piles of it in one data center, it’s easy to do useful things with it. But if you have a little bit of it spread everywhere, you face the horrendous problem of keeping everything consistent and coordinated across the locations where it’s stored, so that you can achieve idempotent computing. For edge computing to become a reality, the distributed data problem must be solved at large scale. Once it is, we will see the edge eclipse the central cloud to become the place where the world computes.
Solving this distributed data problem at very large scale for the edge is a killer opportunity for edge computing (planet-scale, where multitudes of edge locations around the world act in concert to provide a single coherent database). An edge database with thousands (if not tens of thousands) of edge nodes that can achieve real-time consensus and provide idempotent computing across a multi-homed, multi-master architecture is the holy grail, and once that is solved, everything changes. There will no longer be a need to build massive piles of data in one location to compute on it, as you can move the compute to wherever the data is being created to unlock value from it.
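To make the multi-master coordination problem concrete, here is a minimal sketch of a grow-only counter CRDT (conflict-free replicated data type): one well-known family of techniques for letting many edge replicas update state independently and still converge, without real-time coordination. This is a generic illustration I'm adding for flavor, not a description of how Macrometa actually solves the problem; the `GCounter` class and its node layout are hypothetical.

```python
# A minimal grow-only counter (G-Counter) CRDT. Each edge node updates only
# its own slot; merging takes the element-wise max, which is commutative,
# associative, and idempotent, so replicas converge regardless of the order
# in which gossip messages arrive.

class GCounter:
    def __init__(self, node_id, num_nodes):
        self.node_id = node_id
        self.counts = [0] * num_nodes  # one slot per edge node

    def increment(self, amount=1):
        # A node only ever increments its own slot.
        self.counts[self.node_id] += amount

    def merge(self, other):
        # Element-wise max: applying the same merge twice (idempotence)
        # or in a different order (commutativity) gives the same result.
        self.counts = [max(a, b) for a, b in zip(self.counts, other.counts)]

    @property
    def value(self):
        return sum(self.counts)

# Two edge nodes update independently...
a, b = GCounter(0, 2), GCounter(1, 2)
a.increment(3)
b.increment(5)
# ...then exchange state in either order and agree on the total.
a.merge(b)
b.merge(a)
assert a.value == b.value == 8
```

CRDTs only cover data types with merge-friendly semantics; stronger guarantees (like the real-time consensus described above) require coordination protocols, which is precisely why distributed data at planet scale is hard.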
Solving this challenge creates a progression of advancement in four phases.
- In phase one, we see edge applications being deployed to solve esoteric use cases which central clouds simply can’t solve.
- In phase two, we start to see hybrid edge and central cloud apps, where speciation and specialization emerge for things that are currently being done in batches in the central clouds or in real time in the edge.
- In phase three, we see the rise of native edge apps, where traditional “cloud native” apps are rebuilt to take advantage of the versatility of the edge, to create revenue streams from new services that only the edge can provide.
- In phase four, we reach a point of parity, where most apps are edge by default and the centralized cloud is the secondary tier for computing, much like how tape backup is for storing and archiving data.
I’ll end this post with a cautionary note on the dangers of “edge washing”:
- The edge is not a simple re-skinning of what worked in the cloud. If that were the case, the edge would already be here.
- Running code in containers on an edge device, like an IoT gateway or a metro data center, doesn’t solve the distributed data problem because it does not unbundle state and make it distributed.
- Calling something “real-time” doesn’t make it real-time unless the edge platform actually schedules processing with guarantees and prioritization of requests.
- But, most importantly, the edge is a distributed data problem: no matter how much one wishes that cloud architectures will generalize to the edge and that it’s simply a question of getting your containers to run on the edge, it’s not.
Chetan Venkatesh is the CEO and co-founder of Macrometa, a VC-backed startup solving “the distributed data at planetary scale” problem. Macrometa is in stealth.