From Cloud Computing to Edge Computing

The AWS global infrastructure now offers 16 regions and 73 edge locations — bringing innovative infrastructure everywhere

Julien Simon
A Cloud Guru
5 min read · Apr 5, 2017

--

AWS global infrastructure is spread across sixteen regions worldwide with two more regions in France and China expected in 2017

Since 2006, Amazon Web Services has been striving to bring customers innovative, highly available and secure infrastructure services.

Today, these services rely on global infrastructure spread across sixteen regions. Two more regions will go live in 2017, one in France and one in China [1].

Crucial as they are, these regions are not the whole story. Since 2008, AWS has also been building their own Content Delivery Network, named Amazon CloudFront. CloudFront enables customers to improve the performance of their applications by serving static and dynamic content as close to end-users as possible.

Thanks to CloudFront, in 2012, NASA shared pictures of Curiosity's landing on Mars with the whole world, a huge event that generated more traffic than the Olympic Games [2].

Following the March launch of two new locations (Prague and Zurich), CloudFront is now available at 73 locations worldwide.

CloudFront makes your web platforms more secure

Serving traffic is far from the only purpose of AWS edge locations. They also help customers raise the bar on platform security, as they're integrated with a DDoS protection service (AWS Shield, launched at re:Invent 2016 [3]) and a web application firewall (AWS WAF [4]).

Thanks to this defense in depth strategy, it’s possible to mitigate attacks as soon as possible, before they even reach customer infrastructure hosted in the AWS regions.

Running code in CloudFront edge locations: AWS Lambda@Edge

At re:Invent 2016, AWS brought to edge locations the ability to run code on incoming and outgoing traffic.

With this new service, named Lambda@Edge [5], customers can now trigger AWS Lambda functions that process and modify HTTP requests sent to or coming from the origin, i.e. the server hosting the web application.

Thanks to the serverless architecture of Lambda functions, customers don’t have to manage any infrastructure: all they have to do is deploy their functions, which will be automatically distributed to the CloudFront Edge Locations.

For example, Lambda@Edge allows content to be customized depending on device properties: if the device is a smartphone, it may not be desirable to serve a very large, high-resolution image. A smaller, lighter image could do the job, optimizing user experience by cutting down on transfer time, as well as saving money by serving less data.
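To make this concrete, here is a minimal sketch of what such a viewer-request function could look like. The event shape follows CloudFront's request format, but the image paths, the `.jpg` rule, and the use of a Python runtime are all assumptions made for the sake of the example:

```python
# Hypothetical Lambda@Edge viewer-request handler: serve lighter images
# to mobile devices. Paths and naming are illustrative, not a real setup.
def handler(event, context):
    request = event["Records"][0]["cf"]["request"]
    headers = request["headers"]
    # CloudFront can be configured to set this header from the User-Agent.
    mobile = headers.get("cloudfront-is-mobile-viewer", [{}])[0].get("value")
    if mobile == "true" and request["uri"].endswith(".jpg"):
        # Rewrite e.g. /images/hero.jpg to /images/mobile/hero.jpg
        request["uri"] = request["uri"].replace("/images/", "/images/mobile/", 1)
    return request
```

Because the function runs at the edge, the rewrite happens before the request ever reaches the origin, so the origin only has to store the image variants.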

This service could also be used to authenticate users and filter unwanted traffic sooner, or even to run A/B tests by showing different content to different groups of users.
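One way to sketch such an A/B split is to derive a stable bucket for each client and rewrite the request path for the experiment group. The `/beta` prefix and the 10% traffic share are made-up values; a production setup would more likely persist the assignment in a cookie:

```python
import hashlib

def ab_test_handler(event, context):
    """Hypothetical viewer-request handler routing a slice of users
    to an experimental version of the site."""
    request = event["Records"][0]["cf"]["request"]
    # Derive a stable bucket (0-99) from the client IP, so a given
    # client consistently sees the same variant.
    client_ip = request.get("clientIp", "0.0.0.0")
    bucket = int(hashlib.sha256(client_ip.encode()).hexdigest(), 16) % 100
    if bucket < 10:  # send ~10% of users to the variant
        request["uri"] = "/beta" + request["uri"]
    return request
```

The same pattern, with an allow/deny decision instead of a path rewrite, covers the authentication and traffic-filtering use cases mentioned above.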

Edge computing: infrastructure outside of the datacenter

Lambda@Edge is a first step towards running code outside of AWS regions. Still, customers have very diverse use cases and infrastructure needs, which led AWS to design new services that let them process data as close to its source as possible, as soon as it is produced, without having to transfer it to infrastructure hosted in AWS regions. These services even make it possible to process data in areas where network connectivity is intermittent or non-existent.

AWS Snowball Edge: your own piece of the AWS cloud

Launched in 2015, AWS Snowball is a portable storage appliance able to hold 100 terabytes. Targeted at huge data transfers between on-premises infrastructure and AWS, it is used by DigitalGlobe, one of the main providers of space imagery and geospatial content, to transfer petabytes to the cloud.

At re:Invent 2016, AWS launched AWS Snowball Edge [6], a new version of Snowball able to run code locally thanks to an embedded Lambda runtime named AWS Greengrass (more on this in a minute). Thus, customers can now deploy on-premises storage and compute capabilities that work with the same APIs as the ones they use in their AWS infrastructure.

Truly a little piece of AWS in your own data center!

With computing power equivalent to a 16-core server with 64 GB of RAM, customers can apply complex processing to their data before sending it to AWS: compression, formatting, ETL, and so on. And they don't have to overload their own infrastructure to do it.
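A sketch of such a local preprocessing step might look like the following. The record fields are made up; the point is that filtering and compression happen on the appliance, so only a compact payload travels over the network:

```python
import gzip
import json

def preprocess(records):
    """Illustrative local ETL step run on the appliance: drop incomplete
    records, trim precision, and compress before shipping to the cloud."""
    cleaned = [
        {"id": r["id"], "value": round(r["value"], 2)}
        for r in records
        if r.get("value") is not None
    ]
    payload = json.dumps(cleaned).encode("utf-8")
    return gzip.compress(payload)
```

On real datasets (logs, sensor dumps, imagery metadata), this kind of filter-then-compress pipeline can shrink transfers by an order of magnitude before anything leaves the site.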

Snowball Edge is already used for mission-critical applications. For instance, Philips Healthcare is deploying it in intensive care units, where medical teams use it to store, process and visualize the vital signs of their patients: this guarantees uninterrupted care, even if the hospital faces a major IT outage. The US Department of Defense also relies on Snowball Edge to bring storage and computing capabilities to isolated areas [7].

AWS Greengrass: the marriage of Cloud and IoT

Integrated into Snowball Edge, AWS Greengrass is a service targeted at devices with limited hardware resources (a 1 GHz CPU and 128 MB of RAM). It gives them the ability to run code locally, even when network connectivity is unavailable.

With Greengrass, IoT devices can store and process locally the data they collect. They can also talk to one another without going through the Cloud.

IoT devices can now be deployed in very remote areas with little, intermittent, or even non-existent network connectivity.

Of course, whenever possible, Greengrass will allow IoT devices to connect to the Cloud, in order to synchronize and transfer data that has been processed and aggregated locally. Obviously, this will help cut down on bandwidth requirements and costs.
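This store-and-forward pattern can be sketched as follows. The `publish` callback stands in for a real MQTT/IoT client, and the summary fields are illustrative; the idea is that raw readings stay on the device and only a compact aggregate crosses the network when a connection is available:

```python
import statistics

class LocalAggregator:
    """Sketch of Greengrass-style local processing: buffer raw sensor
    readings on the device and forward only a summary when connected."""

    def __init__(self, publish):
        self.publish = publish  # called only when the cloud is reachable
        self.buffer = []

    def record(self, reading):
        self.buffer.append(reading)

    def flush(self, connected):
        if not connected or not self.buffer:
            return None  # keep data locally until the next sync window
        summary = {
            "count": len(self.buffer),
            "mean": statistics.mean(self.buffer),
            "max": max(self.buffer),
        }
        self.buffer.clear()
        self.publish(summary)
        return summary
```

Sending three numbers instead of thousands of raw readings is exactly where the bandwidth savings mentioned above come from.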

Infrastructure is everywhere

As you can see, traditional boundaries between infrastructure and devices are getting fuzzier every day. New use cases require local storage and processing capabilities, close to data sources and even when network connectivity is unavailable.

Since 2006, AWS has been working relentlessly to bring customers the innovative, highly available and secure services they’ve been requesting.

This is how Lambda@Edge, Snowball Edge and Greengrass were born, and we can't wait to see what customers will build with these new services.

It’s still day one.
