AWS re:Invent 2020: A Strategic Analysis

It’s been a packed three weeks of AWS, with many announcements and plenty of training material. Everyone’s been reminiscing about past years of visiting Las Vegas and spending time on the long walks between the different keynotes, technical sessions, shows, and parties. The current online setup offers free registration and is spread across three weeks, which leaves time to absorb and recap everything after hearing all of these new announcements. So, after three weeks packed with messaging, what are the key takeaways? What are the main innovation announcements?

Zohar Friling
CloudWithMORE
8 min read · Dec 20, 2020


To cover this overwhelming number of announced innovations and improvements, I’ll group them into sections and analyze the motivation behind each one. The following sections are based on my interpretation of the key messages presented by Andy Jassy and others in their keynotes.

Defining a Hybrid cloud strategy

The latest announcements include EKS Anywhere and ECS Anywhere, which enable running containers on-prem and on other cloud providers using the same APIs and methods, and the new translator, Babelfish, which translates the T-SQL dialect used by Microsoft SQL Server into the PostgreSQL dialect running on Aurora. Providing this as open source clearly signals a new strategic approach by AWS to tackle growing competition as well as on-prem operations, considering that only 4% of total IT spending runs on the cloud.

Bringing AWS capabilities to the edge

In the last three weeks, we received a spate of announcements about new AWS capabilities on the edge, including specific hardware starting at the sensor level, moving through special appliances, and ending with compute capabilities able to run AI inference and logic. AWS is opening new compute locations and, in general, adding these compute centers as edges of its platform.

Focusing on the entire stack — starting from the silicon up

Over the years we’ve grown accustomed to a flood of announcements during this week of AWS celebrations, most of them focused on new services such as databases and storage. This year, the flood covered the entire stack, including several new chip families from Intel and from AWS itself. Some of these new silicon chips will be used in specific AI stages, and some will bring step-function improvements in efficiency. Other announcements present the regular mixture of new services; however, most are focused on streamlining usage and simplifying integration to the degree that soon there may be no need for domain expertise at all.

From IaaS and PaaS to the service level

AWS announced new features at the service level itself, providing a complete set of possibilities in areas it calls “undifferentiated heavy lifting.” For example: services for call centers and IoT monitoring platforms. There is no doubt these solutions will disrupt the relevant markets, and I am positive the vendors currently operating in them are trying to understand what “undifferentiated” means. To clarify, I’ll name a few end-to-end offerings that provide full turnkey platforms.

SO WHO’S DOING THE HEAVY LIFTING? source: businessoflawblog.com

In addition to the four main areas mentioned above, a more in-depth focus was added to the AI section. Before I address that, let’s expand on the announcements in each of the previous four sections.

Hybrid cloud strategy

One of the first slides that Andy Jassy presented stated that currently only 4% of IT workload is based on the cloud. This was the gateway for several announcements that define a new AWS approach towards this market segment.

The most prominent of these announcements:

ECS Anywhere and EKS Anywhere, which are also shared as open source.
On top of these tools for operating a hybrid infrastructure architecture, AWS presented several tools that let external vendor software migrate easily to the AWS platform. For example, Babelfish for Aurora eliminates the need to rewrite the relevant SQL queries at the application level and translates them on the fly. Other tools open the AWS door to seamless integration with popular solutions and vendors. One such tool is the open-source project PennyLane. Others include Snowflake, MongoDB, and Databricks integration with the new SageMaker Data Wrangler, and native connectivity with commercial data sources like Facebook, Salesforce, and others through the Redshift console.
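To make the Babelfish idea concrete, here is a toy sketch of the kind of dialect rewriting it automates. This is my own illustration, not the actual Babelfish implementation (which works transparently at the wire-protocol level inside Aurora); it just shows two well-known T-SQL vs. PostgreSQL differences: `SELECT TOP n` vs. `LIMIT n`, and `GETDATE()` vs. `now()`.

```python
import re

# Toy illustration only: the real Babelfish translates T-SQL transparently
# inside Aurora PostgreSQL. This sketch shows the flavor of the rewriting.
def tsql_to_postgres(query: str) -> str:
    # T-SQL: SELECT TOP n ...  ->  PostgreSQL: SELECT ... LIMIT n
    m = re.match(r"(?i)SELECT\s+TOP\s+(\d+)\s+(.*)", query.strip())
    if m:
        query = f"SELECT {m.group(2)} LIMIT {m.group(1)}"
    # T-SQL: GETDATE()  ->  PostgreSQL: now()
    return re.sub(r"(?i)GETDATE\(\)", "now()", query)

print(tsql_to_postgres("SELECT TOP 5 name FROM users WHERE created < GETDATE()"))
# SELECT name FROM users WHERE created < now() LIMIT 5
```

The point of the real service is exactly that you don’t write this translation layer yourself: the application keeps sending T-SQL, and Aurora understands it.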

Bringing AWS capabilities to the edge

These days, AWS treats “the edge” as a flexible notion: in essence, even the data centers that host customers are edge locations. So the definition of edge ranges from new Local Zones (in fact, three new Local Zones were opened during 2020, and 12 more are to come in 2021!) all the way down to new IoT sensors. These are small pieces of physical hardware that you can glue to any machine; they automatically transmit temperature, vibration, and other data, which is immediately analyzed to forecast malfunctions. These sensors obviously come with a supporting backend system that provides an end-to-end platform for failure analysis.

Between those two extremes, there are many exciting announcements, like AWS Wavelength, a service offering compute to 5G devices inside Wavelength Zones, without traffic ever leaving the telecommunication network.

Another edge announcement was AWS Outposts: hardware that is shipped to and managed on-prem by AWS, with the exact same APIs as in the cloud.

So AWS now has sensors, SDKs, and appliances on the edge that integrate tightly and seamlessly with AWS backend services.

Starting from the silicon up

Apple recently announced its new M1 processor to drive higher performance and reduce costs for consumers. AWS, in turn, announced two new custom chips for two distinct services, plus an additional chip from Intel that will launch soon. The first is the Graviton2-based C6gn instance, with amazing network throughput and high bandwidth to EBS storage.

The second is AWS Trainium, the second custom machine learning chip designed by AWS, joining the first one, Inferentia.

The Intel chip will be based on Habana Gaudi and will be optimized for AI training.

And as if all this wasn’t enough, Apple macOS was also added to the list of EC2 instance family types.

From IaaS and PaaS to the service level

One step up the stack is serverless compute, and AWS didn’t neglect this layer either, with announcements such as per-millisecond billing for Lambda and the ability to run Docker container images as Lambda functions.
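The move from 100 ms to 1 ms billing granularity is easy to quantify: Lambda rounds each invocation’s duration up to the nearest billing unit, so short invocations were previously overcharged. A minimal sketch of the billed-duration arithmetic (the rounding behavior is the point here; actual per-GB-second prices are left out):

```python
import math

def billed_ms(duration_ms: float, granularity_ms: int) -> int:
    # Lambda rounds execution time up to the nearest billing unit.
    return math.ceil(duration_ms / granularity_ms) * granularity_ms

# A 42 ms invocation: old 100 ms rounding vs. new 1 ms rounding.
old = billed_ms(42, 100)  # 100
new = billed_ms(42, 1)    # 42
print(f"old: {old} ms, new: {new} ms, saved: {100 * (old - new) / old:.0f}%")
# old: 100 ms, new: 42 ms, saved: 58%
```

For workloads dominated by very short invocations, the saving approaches the full gap between the actual duration and the old 100 ms floor.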

If I had to choose only one storage-layer announcement as the most important this year, it would be SAN storage in the cloud. S3 multi-destination replication, tiered pricing for io2, and the new gp3 volume type are all great features, but sorry, they can’t beat SAN in the cloud.

So, those were the infrastructure-layer announcements I found most important. Moving higher in the stack, I want to add AWS Monitron: an end-to-end system that will definitely shake up the industrial machinery market and any industry that needs monitoring and AI at the edge.

Another amazing example is Amazon Connect, the Amazon call center service. This service alone received five new enhancements, mainly to its AI capabilities, making the overall solution even more attractive.

The list of infrastructure announcements goes on and probably deserves a short blog post of its own. Read more about the list of new enhancements.

That said, I’ll continue with the announcements related to AI/ML, which for the first time had its own separate keynote, and that’s a statement in itself. When AWS thinks about the target audience for its AI/ML line of products, it sees three personas: AI/ML experts, coders and developers, and data and business analysts.

To map the overwhelming number of announcements and add my input as to where AWS is taking this ship, I’ll divide the announcements into four categories.

To be clear, I’m not going to list all the announcements; rather, I’ll focus on the ones I consider important. The list of enhancements and new services added to the AWS suite is simply too long to cover in full.

Simplifying ML by infusing AI into all applications

Applying AI to any task will soon be the norm, as it simplifies tasks that depend on highly skilled people. The aim seems to be simulating the decision-making of an expert engineer and offering that expertise out of the box, or with just a few clicks. It can happen within the ML workflow itself, as with SageMaker Data Wrangler for preparing the data for training, or SageMaker Autopilot for automatically training your model.

It can also be implemented as a new service in an entirely different realm, like Amazon DevOps Guru, a machine-learning-powered service that makes it easy to improve an application’s operational performance and availability.

Simplifying the data pipelines

According to AWS, a huge effort is needed to create an efficient and stable data pipeline. AWS’s enhancements in this field span several fronts, from simple improvements to SageMaker Studio to new services integrated with it, such as SageMaker Clarify, which can detect bias or drift in a machine learning model.

In addition, other services such as databases and BI tools can now initiate AI activities directly, like the integration of SageMaker with Aurora, Athena, Redshift, Neptune, and others. This helps shorten the data pipeline, or even eliminate it completely.

Another new service is the SageMaker Feature Store, a managed repository for ML features. It eliminates the need to synchronize one or more distinct feature repositories per pipeline stage.

An honorable mention definitely goes to AWS Glue Elastic Views, which simplifies building materialized views.

And finally, a new service named SageMaker Pipelines brings DevOps capabilities to the SageMaker product suite.

Expanding data access to the line of business

With Amazon Lookout for Metrics, no ML experience is needed to get immediate alerts. It can draw data from any AWS database and from external sources like Salesforce, detect business anomalies in the metrics along the way, present root cause analysis, and send the results to monitoring or alerting systems such as Slack, CloudWatch, and others.
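Under the hood, services of this kind flag data points that deviate sharply from a metric’s recent behavior. A minimal z-score sketch of the idea (my own toy illustration, not the actual Lookout for Metrics algorithm, which is far more sophisticated):

```python
import statistics

def anomalies(series: list, threshold: float = 3.0) -> list:
    # Flag indices whose deviation from the mean exceeds `threshold`
    # standard deviations (the classic z-score test).
    mean = statistics.mean(series)
    stdev = statistics.pstdev(series)
    if stdev == 0:
        return []
    return [i for i, x in enumerate(series) if abs(x - mean) / stdev > threshold]

daily_revenue = [100, 102, 98, 101, 99, 100, 25, 103]  # day 6 collapses
print(anomalies(daily_revenue, threshold=2.0))  # [6]
```

The managed service’s value is everything around this core: connecting the data sources, tuning sensitivity per metric, and routing the alert with a root-cause summary to Slack or CloudWatch.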

In addition, AWS announced a new tool called Amazon QuickSight Q, which lets you ask natural language questions on top of a QuickSight BI dashboard and create new graphs on the fly.

Extending AWS to the edge

I already mentioned that AWS extended its offering to the edge with several new IoT devices, which can be coupled with AI capabilities, either directly on the device itself or on an appliance that sits next to it. The best example is AWS Panorama, which brings computer vision to the edge with multiple use cases. An interesting use case presented was the combination of AWS Panorama with Lookout for Vision, a new ML service for defect detection.

In summary, AWS has amazing bandwidth for innovation, and its strategy can be derived from reading the exhaustively long list of announcements. We could go on at length about the improvements, the innovations around the data plane and control plane, and the long list of new APIs.

The recipe for success in this area must include cloud scalability and the data layer, with AI sprinkled on top.

Already looking forward to next year’s AWS re:Invent.

Interested in continuing the conversation? Drop me a line or connect with me on LinkedIn. See you, Zohar.


Zohar Friling
CloudWithMORE

A tech strategy leader with extensive experience in executive positions ranging from small startups to large enterprises.