Riding The Next Cloud: Why Edge Computing Is The Next Wave To Catch

Riccardo Di Blasio
Published in A Cloud Guru
Feb 14, 2017 · 6 min read
Mavericks, Half Moon Bay, CA — 2017

They say “What goes around, comes around”. I remember my grandma back in Italy meticulously preserving her dresses and accessories from the 60s and 70s, insisting that one day they would be back in “fashion”. She was damn right! Teenagers wearing 1970s Adidas sneakers would agree, as Nike has learned at its own expense.

This is, of course, true for the IT industry as well.

Back in the late 90s, the world of computing infrastructure was undergoing a secular shift from centralized mainframe systems to distributed open systems — a shift pioneered by Sun Microsystems at IBM’s expense. I was right on the cusp of that innovation wave, and I saw the proliferation of a trend that lasted for almost two decades.

“Public cloud” computing, the next wave, re-asserted the importance of centralized resources, consumed through the Internet to deliver fast, agile SaaS applications.

Secular shifts don’t happen overnight, and even when they do arrive, they are rarely absolute. Just as mainframe systems are still around today, distributed on-prem systems will not go away; but “as-a-Service” seems to be the new normal that will rule the next decade of computing.

Like in fashion, however, a “new/old” trend is just around the corner, ready to disrupt the industry yet another time: edge computing.

Edge (or “fog”) computing simply means that an important part of the computation modern applications need is going to happen “at the edge”: close to where the data is collected, or to where the user consumes a particular application. It happens on your mobile phone, on the internet-connected IoT devices around your house, or inside your car.

Peter Levine, one of the icons of Silicon Valley’s VC industry, recently described this trend as the “breath of IT”, or moving from centralized environments to distributed, and vice versa, almost in a round-robin cycle.

Why would you need edge computing? Not just to be able to process data in real time without latency issues, but also because certain types of applications greatly benefit from the ability to respond quickly to a particular data point. Take a driverless car, for example: do you want the AI driving the vehicle to respond in microseconds, or to wait the 150–250 milliseconds needed to talk to a far-away data center, assuming the internet connection works at all? At 60 miles per hour, a 250-millisecond delay corresponds to roughly 22 feet. It could be the difference between dodging an obstacle on the road or not, or between hitting a pedestrian and halting the car just in time.
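The arithmetic is easy to check. Here is a minimal back-of-the-envelope sketch (in Python, purely for illustration):

```python
# Back-of-the-envelope: how far does a car travel while waiting on the network?
FT_PER_MILE = 5280
SEC_PER_HOUR = 3600

def feet_traveled(speed_mph: float, delay_ms: float) -> float:
    """Distance covered (in feet) during a network delay of delay_ms at speed_mph."""
    feet_per_second = speed_mph * FT_PER_MILE / SEC_PER_HOUR  # 60 mph -> 88 ft/s
    return feet_per_second * delay_ms / 1000

print(feet_traveled(60, 250))  # 22.0 feet travelled before the response even arrives
```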

But it’s not just about responsiveness. It’s also about analytics: if you need to analyze a large amount of data, your internet connection might not be able to cope with the data flow, leaving you unable to extract real value from that data. It is always the use case, the workload (or the app) that drives the type of computing needed underneath, and not the other way around.

Let’s take a closer look at some use cases.

IoT: We are starting to be surrounded by devices that are constantly connected to the internet: in our homes, with Alexa, Siri, Nest, and smart bulbs; on our streets, mostly with cars, traffic signs, and lights; in our skies, with drones and planes; but also in hospitals, police stations, schools, and offices.

These devices will collect a ton of information, and they will need to process that information in real time to provide a better service. Sometimes they will need to talk to each other (e.g. the Nest thermostat and the electronic window curtains), sometimes they will need to talk to a remote server, but most of the time they will need to analyze and process data collected on site and use it to perform certain actions.

Data Sovereignty: public cloud creates a long list of privacy, regulatory, and compliance issues related to sensitive or classified data. There are workarounds, such as using local service providers or single-tenant allocations that guarantee private access and control, but very often those alternatives come at the price of being inelastic, slow-moving, and pretty clunky (a revamped outsourcing model), and even when they tick all of those boxes, the unit economics behind them simply don’t make any sense.

Edge computing will allow individuals, devices, and organizations to operate with the benefits of a “check-in / check-out” public cloud, while using only local computing resources within that specific area, region, nation, domain, or whatever the required security boundary happens to be.

Unit Economics: Public cloud computing shines when it’s on-demand, for spiky, non-recurrent workloads. But every CFO knows that it is one of the most expensive ways to do computing if you need to run your workloads on it all the time. What inflates the bill are elements like access, data migration, I/O, bandwidth, latency, etc. Edge computing neutralizes most bandwidth bottlenecks and reduces latency to a minimum; the same goes for data movement and other functions. If you are an oil and gas company drilling in Angola and you need computing, today your alternatives are either to build your own data centers like in the 90s (with all the costs and scale limitations that come with them), or to use a cloud provider (whose nearest datacenter will probably be in the UAE or South Africa, thousands of miles away), with enormous costs and pretty lousy SLAs.

With edge computing, which rents and utilizes local, regional infrastructure, you will be able to process data in real time while remaining local, at 1/10 of the cost of public cloud, while still maintaining the elastic flexibility that a cloud infrastructure is able to deliver.

Unfortunately, edge computing will also open up new challenges and risks, by the very nature of its “indiscriminate computing”.

Think about any criminal or terrorist individual or organization that deals with dangerous or illicit data in high-risk countries. They most likely have the same computing needs as any other individual or organization, but they cannot use public cloud or, even worse, traditional IT. With edge computing they will have all of the above, with the ability to rent computing by the minute and process any data in a “cache-only” mode, erasing any trace. Plenty of food for thought here… and tons of opportunities for security and data monitoring/analytics companies.

But how will all of this be made technically possible? Who owns it? Who manages it? Who controls it? How does it work?

The user on one side… the provider on the other… in a sort of peer-to-peer model. There are already a few startups that have built really interesting business models, taking inspiration from blockchain architectures (the same technology powering the cryptocurrency Bitcoin) to create these kinds of virtual or physical edge architectures.

Some of the most interesting startups I have had the privilege to look at are, for example, Servers.Global here in Palo Alto and SIA.Tech in Boston. They have both built an “Uber-like” marketplace: on one side the “supply”, meaning every individual, organization, or even device that wants to rent out spare computing capacity in its location; on the other side the “demand”, meaning any individual, organization, or even device that happens to need computing at that given time, in that specific location. Everything is orchestrated by algorithms and sophisticated software that give users the feeling of being able to allocate and provision computing anywhere in the world, with the benefits of public cloud (or somewhat better, given the lack of bandwidth bottlenecks), but with better unit economics, instant access, and more discreet privacy.
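To make the idea concrete, here is a deliberately over-simplified sketch of what the location-aware matching step of such a marketplace could look like. The data model and names are hypothetical and do not reflect how Servers.Global or SIA.Tech actually work; it only illustrates the supply/demand pairing:

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical, illustrative model of an edge-compute marketplace.
# None of this reflects the real APIs of Servers.Global or SIA.Tech.

@dataclass
class Offer:                    # the "supply": spare capacity someone rents out
    provider_id: str
    region: str
    cpu_cores: int
    price_per_minute: float     # in dollars

@dataclass
class Request:                  # the "demand": a workload that needs compute now
    requester_id: str
    region: str
    cpu_cores: int

def match(request: Request, offers: list[Offer]) -> Optional[Offer]:
    """Pick the cheapest offer in the same region with enough capacity."""
    candidates = [
        o for o in offers
        if o.region == request.region and o.cpu_cores >= request.cpu_cores
    ]
    return min(candidates, key=lambda o: o.price_per_minute, default=None)

offers = [
    Offer("home-nas-42", "palo-alto", cpu_cores=8, price_per_minute=0.002),
    Offer("office-rack-7", "boston", cpu_cores=32, price_per_minute=0.004),
]
print(match(Request("drone-17", "palo-alto", cpu_cores=4), offers))
```

A real marketplace would of course layer pricing, trust, payment, and scheduling on top of a step like this.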

There is also another angle from which to study the impact edge will have on the next wave of computing: the environmental one. As we all know, IT is already one of the major contributors to worldwide pollution and climate change. More massive datacenters are not good news for an already delicate ecosystem.

According to the European startup GITG, its edge computing approach will use up to 10x less power for storage than traditional cloud. Its model is to provide IT like energy, wherever and whenever it is needed, by building local resources to be utilized as cloud.

Those startups are just a few of the dozens that are already developing new systems, tools, and applications to ride this new trend. The number of new dynamics, opportunities, and also challenges that will be created is absolutely humongous. If, until now, the total addressable market of technology was capped by the world’s population (7+ billion people), edge computing will potentially serve a much bigger number of clients: tens of billions of IoT and connected devices all around the world.

A new wave is picking up, and it seems to be a monster one, like at Mavericks… So make sure your board is waxed, wear your wetsuit, watch out for sharks and rip currents, and paddle as hard as you can… It’s going to be another amazing ride!

Riccardo Di Blasio
