AWS re:Invent 2021: announcements of note

Jon Topper
The Scale Factory
5 min read · Dec 16, 2021
Jon hosting a panel on AWS Control Tower. Photo by @jasondunn

Now that the dust has had time to settle on 2021’s re:Invent conference, here are a few things that stood out to me from the myriad announcements that week.

Graviton3

Back in 2015, Amazon acquired microelectronics company Annapurna Labs. Since then, they’ve been beavering away creating custom silicon to improve how AWS operates. One such chip is AWS Graviton, a general purpose ARM chip built with modern web workloads in mind.

At re:Invent 2021, Graviton3 was announced, along with the preview release of the EC2 C7g instances that use it.

Graviton provides improved price-performance for your workloads compared with x86 alternatives, and is worth looking into for that reason alone. Personally, I have an emotional connection to seeing more and more ARM in the wild: I grew up using ARM computers, and the University of Manchester, where I obtained my Computer Science degree, has a strong history of collaboration with ARM.

As well as cost improvements, Graviton3 provides a new security feature, pointer authentication: return addresses pushed onto the stack are signed with a secret key. The processor will raise an exception if an address popped from the stack has an invalid signature, protecting against some classes of attack, such as return-oriented programming.
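To make the idea concrete, here's a toy model in Python. This is an analogy, not how the silicon actually works: it tags each "pushed" return address with a keyed MAC, and refuses to "pop" an address whose tag no longer verifies.

```python
import hmac
import hashlib
import secrets

# Toy model of pointer signing: the "CPU" holds a secret key and tags
# each return address pushed onto the stack with a MAC over its value.
KEY = secrets.token_bytes(16)

def sign(address: int) -> bytes:
    return hmac.new(KEY, address.to_bytes(8, "little"), hashlib.sha256).digest()

def push(stack: list, address: int) -> None:
    stack.append((address, sign(address)))

def pop(stack: list) -> int:
    address, tag = stack.pop()
    # An attacker who overwrites the address can't forge a matching
    # tag without the key, so tampering is caught here.
    if not hmac.compare_digest(tag, sign(address)):
        raise RuntimeError("invalid pointer signature")
    return address

stack = []
push(stack, 0x401000)
stack[-1] = (0xDEADBEEF, stack[-1][1])  # simulate a stack overwrite
try:
    pop(stack)
except RuntimeError as e:
    print(e)  # invalid pointer signature
```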

Price Reductions

One thing that cloud naysayers, and those who fret about the pretend problem of “vendor lock-in”, worry about is cost: what if, once you’ve moved your workload to a single cloud, that vendor puts up their prices?

Well, I don’t think we’ve ever seen AWS put prices up, but we have seen them drop prices relatively regularly.

S3 storage is now up to 31% cheaper in the Standard-Infrequent Access and One Zone-Infrequent Access classes. S3 Glacier Flexible Retrieval is 10% cheaper.

And, no doubt in response to Cloudflare’s recent price reductions, AWS reduced pricing for data transfers out via Amazon CloudFront.

You don’t need to do anything to take advantage of these price reductions; they’ll simply show up in your next bill.

DynamoDB now offers a Standard-IA (Standard-Infrequent Access) table class, with storage savings of up to 60%. You’ll need to make a small change to your DynamoDB configuration to see these savings, but you can change the table class of an existing table in place. You’d use this class for data that, once written, doesn’t get accessed often.
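If you want to try this, the switch is a single table update. Here’s a minimal sketch using boto3 (the table name is hypothetical):

```python
import boto3

dynamodb = boto3.client("dynamodb")

# Switch an existing table to the Standard-IA table class in place;
# "orders-archive" is a hypothetical table name.
dynamodb.update_table(
    TableName="orders-archive",
    TableClass="STANDARD_INFREQUENT_ACCESS",
)
```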

S3

S3 is one of AWS’ oldest services, and is probably the biggest distributed system on the planet. The fact that it’s still evolving is always impressive to me.

When S3 was first introduced it pre-dated AWS IAM, so security was provided via Access Control Lists (ACLs). Today you can disable the ACL feature entirely, ensuring that all of your S3 security configuration is handled by IAM.
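Under the hood this is the S3 Object Ownership setting: the BucketOwnerEnforced mode disables ACLs, leaving IAM and bucket policies as the only access controls. A minimal boto3 sketch, with a hypothetical bucket name:

```python
import boto3

s3 = boto3.client("s3")

# BucketOwnerEnforced disables ACLs entirely: the bucket owner owns
# every object, and access is governed by policies alone.
s3.put_bucket_ownership_controls(
    Bucket="my-example-bucket",
    OwnershipControls={
        "Rules": [{"ObjectOwnership": "BucketOwnerEnforced"}]
    },
)
```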

At re:Invent we learned that S3 will soon be supported by AWS Backup, making cross-account backup and point-in-time restore available for your S3 data, using the same centrally managed AWS service that can also back up your other resources (RDS, EBS, Aurora, DocumentDB, DynamoDB, etc.).
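Once available, protecting a bucket should look much like protecting any other resource type with AWS Backup. A hedged sketch with boto3; the plan, vault, role, and bucket names are all hypothetical:

```python
import boto3

backup = boto3.client("backup")

# Create a simple daily backup plan; names and schedule are hypothetical.
plan = backup.create_backup_plan(
    BackupPlan={
        "BackupPlanName": "daily-s3-backups",
        "Rules": [{
            "RuleName": "daily",
            "TargetBackupVaultName": "Default",
            "ScheduleExpression": "cron(0 3 * * ? *)",
        }],
    }
)

# Assign a bucket to the plan by ARN, as with any other resource type.
backup.create_backup_selection(
    BackupPlanId=plan["BackupPlanId"],
    BackupSelection={
        "SelectionName": "s3-buckets",
        "IamRoleArn": "arn:aws:iam::123456789012:role/aws-backup-role",
        "Resources": ["arn:aws:s3:::my-example-bucket"],
    },
)
```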

Also new: changes to S3 objects can now result in events being sent to Amazon EventBridge. EventBridge is a schema-based serverless event bus, and this change will make it easier for developers to build event-driven, serverless applications on top of S3.
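Enabling this is a one-line change to a bucket’s notification configuration; the events land on your account’s default event bus, where EventBridge rules handle the filtering and routing. A minimal sketch (bucket name hypothetical):

```python
import boto3

s3 = boto3.client("s3")

# Turn on EventBridge delivery for object-level events on the bucket;
# filtering and routing then happen in EventBridge rules, not in S3.
s3.put_bucket_notification_configuration(
    Bucket="my-example-bucket",
    NotificationConfiguration={"EventBridgeConfiguration": {}},
)
```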

Finally, a new Archive Instant Access tier for the S3 Intelligent-Tiering storage class allows for savings of up to 68% compared with the Infrequent Access tier. With Intelligent-Tiering, S3 objects are moved to less expensive tiers as they become less frequently used. This new Archive Instant Access tier is, as the name suggests, intended for archive data that still needs to be available instantly on the few occasions it needs to be read.
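You opt a given object into this behaviour simply by storing it in the Intelligent-Tiering class; the tier transitions then happen automatically. A minimal sketch, with hypothetical bucket and key:

```python
import boto3

s3 = boto3.client("s3")

# Store the object in Intelligent-Tiering; S3 shifts it to cheaper
# tiers automatically as its access frequency drops.
s3.put_object(
    Bucket="my-example-bucket",
    Key="reports/2021-q4.parquet",
    Body=b"...",
    StorageClass="INTELLIGENT_TIERING",
)
```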

Serverless

What “serverless” means is often open to interpretation, and depends on who you ask, but for the purposes of this discussion let’s go with “resources that don’t exist, or cost money, until they’re used, and whose underlying operation is someone else’s problem”.

Fargate, for example, is a serverless compute engine. You pay for Fargate when it runs containers for you, for the resources those containers use. Whilst the containers themselves are your responsibility, you don’t have to patch, back up, or otherwise maintain the resources that make up the Fargate service: AWS does that.
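To make that concrete: with Amazon ECS on Fargate, you hand over a task definition and pay for the vCPU and memory the task consumes while it runs. A minimal boto3 sketch; the cluster, task definition, and subnet are hypothetical:

```python
import boto3

ecs = boto3.client("ecs")

# Run a container on Fargate: no instances to provision or patch,
# and billing covers only the resources the task uses while running.
ecs.run_task(
    cluster="my-cluster",
    launchType="FARGATE",
    taskDefinition="my-task:1",
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-0123456789abcdef0"],
            "assignPublicIp": "ENABLED",
        }
    },
)
```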

Announced in preview at re:Invent were a number of new services that you’ll soon be able to consume in this manner, including Amazon Redshift Serverless, Amazon EMR Serverless, and Amazon MSK Serverless.

If you’ve had to run any of these services in their current “serverful” incarnation, you’ll know how much effort that is. These releases will be game-changing for some teams, giving them back engineering hours to spend on things that really differentiate their businesses.

FSx for OpenZFS

I’ve never personally run a ZFS filesystem, but those who have tend to swear by it. Originally developed by Sun Microsystems as part of its Solaris operating system, it eventually escaped the grip of commercial licensing to become available as a first-class filesystem for a number of other OSes, Linux amongst them.

OpenZFS is now available as part of AWS’ FSx product range: Amazon FSx for OpenZFS can be accessed from your compute instances over NFS as a feature-rich alternative to Amazon EFS.
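Creating one looks much like creating any other FSx file system. A hedged boto3 sketch; the capacity, throughput, and subnet values are placeholders:

```python
import boto3

fsx = boto3.client("fsx")

# Create a single-AZ OpenZFS file system; mount it over NFS once
# it's available. All sizing values here are illustrative.
fsx.create_file_system(
    FileSystemType="OPENZFS",
    StorageCapacity=64,  # GiB
    SubnetIds=["subnet-0123456789abcdef0"],
    OpenZFSConfiguration={
        "DeploymentType": "SINGLE_AZ_1",
        "ThroughputCapacity": 64,  # MB/s
    },
)
```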

AWS Private 5G

File this one under “you probably won’t need this” along with AWS Ground Station, but the fact that AWS are branching into this area is just generally interesting.

Now in preview, AWS Private 5G is a managed service that helps enterprises set up a private 5G network in their facilities: AWS will provision and manage the hardware and the network for you. I can imagine this being useful for communicating securely and at high speed with IoT devices across a manufacturing site, or maybe an agricultural operation.

Sustainability

Finally, a topic that’s top of mind for many people: sustainability.

A new Sustainability pillar for the AWS Well-Architected Framework offers a fresh perspective on the trade-offs you might consider when designing your platform for sustainability. Many of these considerations overlap with the Cost Optimization and Performance Efficiency pillars, of course.

As a practical example: if you have a dataset that can be easily derived from other data that you store, perhaps you can reduce your use of resources by not taking backups of that derived data.

In addition, in his keynote Peter DeSantis announced a Customer Carbon Footprint Tool, which will provide a dashboard allowing AWS customers to calculate their carbon emissions and forecast how those will change over time.

Not sure if any of these new announcements are relevant to you? Why not get in touch and we’ll walk you through them?
