Summary of AWS Community Day updates — Israel | Re:Invent Re:cap

Gilad Neiger
Published in Develeap · 6 min read · Jan 30, 2023

In this post, I would like to share some interesting notes from the AWS Community Day in Tel Aviv, Israel. The notes touch on several AWS services; some may be relevant to you and some may not. But first of all, I think it is good to know about them, and second, some of these features and improvements can make our lives a bit easier.

RDS Blue/Green Deployments

By using Amazon RDS Blue/Green Deployments, you can take your managed database changes to the next level. Through this process, you create a staging environment that mirrors your current production environment, which is referred to as the “blue” environment. The new environment you will be working on, known as the “green” environment, is kept in sync with your production environment through logical replication.

With this setup, you can make any necessary changes to the RDS DB instances in the green environment without disrupting your live workloads. Upgrading your major or minor DB engine version, adjusting database parameters, or making schema changes can all be done safely in staging. Once you've thoroughly tested your changes, you can easily "switch over" and promote the green environment to be your new, live production environment. The switchover typically takes a minute or less, with no data loss and no application changes required.

The green environment includes all the features of your production environment, including read replicas, storage configuration, DB snapshots, automated backups, Performance Insights, and Enhanced Monitoring. If your blue environment is a Multi-AZ DB instance deployment, then your green environment will be as well, ensuring a seamless transition.
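For the scripting-minded, the flow above can be driven from the API as well. Here is a rough boto3 sketch (I haven't run this against a live account; the instance ARN, deployment name, and engine version are placeholders):

```python
# Sketch: create a Blue/Green deployment for an existing RDS instance,
# then switch over once the green side has been validated.
# All identifiers below are made-up placeholders.

def blue_green_params(source_arn, name, target_engine_version=None):
    """Build the request for rds.create_blue_green_deployment()."""
    params = {
        "BlueGreenDeploymentName": name,
        "Source": source_arn,  # ARN of the current (blue) DB instance
    }
    if target_engine_version:
        # The green environment is created directly on the upgraded engine.
        params["TargetEngineVersion"] = target_engine_version
    return params

def upgrade_with_blue_green(source_arn):
    import boto3  # requires AWS credentials; not executed in this post
    rds = boto3.client("rds")
    dep = rds.create_blue_green_deployment(
        **blue_green_params(source_arn, "orders-db-upgrade", "8.0.34")
    )["BlueGreenDeployment"]
    # ...test the green environment, then promote it:
    rds.switchover_blue_green_deployment(
        BlueGreenDeploymentIdentifier=dep["BlueGreenDeploymentIdentifier"],
        SwitchoverTimeout=300,  # seconds; RDS aborts if it cannot finish in time
    )
```

The switchover call is the "minute or less" step; everything before it happens while blue keeps serving traffic.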

Read more here: Overview of Amazon RDS Blue/Green Deployments

VPC Lattice — Simplify Networking for Service-to-Service Communication

With the rise of modern application architectures, efficient communication between services has become increasingly important. It's crucial to have a way to discover where services are, authorize access to them, and route traffic between them. However, troubleshooting issues can be time-consuming, especially when trying to keep communication configurations under control.

This is exactly where VPC Lattice, which AWS announced recently (in preview), comes in. With VPC Lattice, you'll have a simple and consistent way to connect, secure, and monitor communication between your services. You can define policies for traffic management, network access, and monitoring, so you can easily connect applications across various AWS compute services. No more worrying about network connectivity or address translation; VPC Lattice handles it all for you. And with its integration with AWS Identity and Access Management (IAM), you can use familiar authentication and authorization capabilities for your service-to-service communication. Plus, VPC Lattice gives you the control to route traffic based on request characteristics and even allows you to mix and match compute types for a given service. This can help you modernize a monolithic application architecture into microservices.
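As a mental model, wiring one service into a service network takes roughly three calls. This is only a sketch based on the docs (I haven't run it, and the names are invented); it assumes the boto3 `vpc-lattice` client:

```python
# Rough plan of the VPC Lattice building blocks: a service network,
# a service, and the association that connects the two.
# Names are placeholders; in practice the association takes the
# IDs/ARNs returned by the two create calls, not the names.

def lattice_setup_plan(network_name, service_name):
    """The three boto3 vpc-lattice calls, as (method, params) pairs."""
    return [
        ("create_service_network", {"name": network_name, "authType": "AWS_IAM"}),
        ("create_service", {"name": service_name, "authType": "AWS_IAM"}),
        # Associating the service makes it reachable from anything
        # else attached to the same service network.
        ("create_service_network_service_association", {
            "serviceNetworkIdentifier": network_name,  # placeholder: use the real ID/ARN
            "serviceIdentifier": service_name,         # placeholder: use the real ID/ARN
        }),
    ]
```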

Personally, I haven't tried this feature yet, but it sounds worth checking out.

Read more here: Introducing VPC Lattice — Simplify Networking for Service-to-Service Communication (Preview) | Amazon Web Services

Step Functions Distributed Map — A Serverless Solution for Large-Scale Parallel Data Processing

With the Step Functions map state, you can streamline your data processing tasks by executing the same steps on multiple entries in a dataset. However, the original map state was limited to 40 parallel iterations, making it tough to handle large-scale data processing. The new distributed map state takes your serverless applications to the next level by enabling you to coordinate massive parallel workloads. You can process millions of objects such as logs, images, or .csv files stored in Amazon S3, launching up to ten thousand parallel child workflows.

You can use any service API supported by Step Functions to process your data, but most often you'll use Lambda functions and write the code in your preferred programming language. The distributed map supports a maximum concurrency of 10,000 parallel executions, far higher than the concurrency limits of many other AWS services. And with the maximum concurrency setting, you can make sure your downstream services' concurrency limits are not exceeded.
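To make the shape of this concrete, here is a minimal Distributed Map state in Amazon States Language, written as a Python dict (the bucket, prefix, and function names are made up for illustration):

```python
# Minimal ASL sketch of a Distributed Map state that fans out over
# objects in an S3 bucket. Bucket/prefix/function names are placeholders.
distributed_map_state = {
    "Type": "Map",
    "ItemProcessor": {
        "ProcessorConfig": {
            "Mode": "DISTRIBUTED",       # the new distributed mode
            "ExecutionType": "EXPRESS",  # each item batch runs as its own child workflow
        },
        "StartAt": "ProcessObject",
        "States": {
            "ProcessObject": {
                "Type": "Task",
                "Resource": "arn:aws:states:::lambda:invoke",
                "Parameters": {"FunctionName": "process-log-file"},
                "End": True,
            }
        },
    },
    # Read the item list directly from S3 instead of the state input,
    # which is what lets it scale past normal payload limits.
    "ItemReader": {
        "Resource": "arn:aws:states:::s3:listObjectsV2",
        "Parameters": {"Bucket": "my-log-bucket", "Prefix": "2023/01/"},
    },
    "MaxConcurrency": 1000,  # cap parallel children to protect downstream services
    "End": True,
}
```

The `MaxConcurrency` field is the knob mentioned above for respecting a downstream service's limits; 10,000 is the ceiling.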

Read more here: Step Functions Distributed Map — A Serverless Solution for Large-Scale Parallel Data Processing | Amazon Web Services

Automated in-AWS Failback for AWS Elastic Disaster Recovery

With the new automated failback support, you'll experience a smoother and quicker process for failing back your Amazon Elastic Compute Cloud (Amazon EC2) instances to the original Region. It's easy to initiate both the failover and failback processes from the AWS Management Console, whether your recovery is on premises or in AWS. And if you prefer a more tailored approach, AWS Elastic Disaster Recovery (DRS) now provides three APIs that allow you to customize the recovery workflow steps.

Read more here: Automated in-AWS Failback for AWS Elastic Disaster Recovery | Amazon Web Services

Lambda: Improving startup performance with Lambda SnapStart

Do you want your Java applications to start faster at no extra cost? Lambda SnapStart can help you achieve just that. With the ability to improve startup performance by up to 10 times, you can enjoy much lower latency without changing your function code. Startup latency, also known as cold start time, can be a real pain point: it covers starting the runtime, loading your function's code, and running its initialization. With Lambda SnapStart, that initialization happens once, when you publish a function version, and a snapshot of the initialized environment is cached, so new invocations resume from the snapshot instead of initializing from scratch. All you have to do is publish a function version and let Lambda take care of the rest.
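The "publish a function version" part looks roughly like this with boto3 (the function name is a placeholder, and the function must use a supported Java runtime):

```python
# Sketch: enable SnapStart on a Java function, then publish a version.
# SnapStart applies only to published versions, never to $LATEST.

def snapstart_config(function_name):
    """Arguments for lambda.update_function_configuration()."""
    return {
        "FunctionName": function_name,  # placeholder; must be a Java runtime
        "SnapStart": {"ApplyOn": "PublishedVersions"},
    }

def enable_snapstart(function_name):
    import boto3  # requires AWS credentials; not executed in this post
    lam = boto3.client("lambda")
    lam.update_function_configuration(**snapstart_config(function_name))
    # Publishing triggers the one-time init + snapshot; invoke the version
    # (or an alias pointing at it), not $LATEST, to benefit from SnapStart.
    lam.publish_version(FunctionName=function_name)
```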

Read more here: Improving startup performance with Lambda SnapStart

Elastic Load Balancing capabilities for application availability

Since the last Re:Invent (2022), you have the power to control how your applications react to failures and recover faster with Elastic Load Balancing's new features. Here's what you can expect:

  1. Take control with Application Load Balancer (ALB) Cross Zone Off — You can now turn off cross-zone load balancing for even more zonal isolation and redundancy. Check out the details here.
  2. Network Load Balancer (NLB) Health Check Improvements — NLB now lets customers define health check intervals, specify the HTTP response codes that determine target health, and configure the number of consecutive health check responses before a target is considered healthy or unhealthy. Check out the details here.
  3. Set the minimum threshold of healthy targets with ALB and NLB — Customers can now configure a threshold for the minimum number or percentage of healthy targets for ALB and NLB in an AZ. When the healthy target capacity drops below the specified threshold, the load balancer automatically stops routing to targets in the impaired AZ. For details, see the documentation here for ALB and here for NLB.
  4. Zonal Shift for ALB and NLB [Preview] — Using Amazon Route 53 Application Recovery Controller’s zonal shift feature, you can recover from gray failures, like bad application deployments, by routing traffic away from a single impaired AZ. This feature is ideal for zonally architected applications using ALBs and NLBs that have cross-zone load balancing turned off. For details, read the launch blog, or see the Zonal Shift section of the documentation here.
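Items 1 and 3 both land as target-group attributes on the `elbv2` API. Here is a sketch of what I believe the calls look like (attribute keys as documented; the ARN is a placeholder, and I haven't verified this against a live account):

```python
# Sketch: turn off cross-zone load balancing for an ALB target group and
# set a minimum healthy-target count for DNS failover in the same call.

def cross_zone_off_attrs():
    # With this launch, ALB cross-zone behavior is configurable per target group.
    return [{"Key": "load_balancing.cross_zone.enabled", "Value": "false"}]

def min_healthy_attrs(count):
    # Below this many healthy targets in an AZ, the AZ is failed away from.
    return [{
        "Key": "target_group_health.dns_failover.minimum_healthy_targets.count",
        "Value": str(count),
    }]

def harden_target_group(target_group_arn):
    import boto3  # requires AWS credentials; not executed in this post
    elbv2 = boto3.client("elbv2")
    elbv2.modify_target_group_attributes(
        TargetGroupArn=target_group_arn,  # placeholder ARN
        Attributes=cross_zone_off_attrs() + min_healthy_attrs(2),
    )
```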

And the best part? There’s no extra charge for these features. They’re available in all commercial AWS Regions and AWS GovCloud (US) Regions. The Zonal Shift feature is also available in preview in select regions.

Read more here: Elastic Load Balancing capabilities for application availability

CloudWatch Internet Monitor

Amazon CloudWatch has recently launched its Internet Monitor service, which offers a view of internet performance and availability tailored to your AWS workload. Understanding the impact of external internet events on user experience is crucial for delivering a high-quality digital experience. Internet Monitor provides ongoing monitoring of relevant metrics, such as availability and performance, enabling you to track average internet performance over time and locate issues by both location and internet service provider (ISP).
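Setting up a monitor is mostly a matter of naming it and pointing it at your resources. A hedged sketch, assuming the boto3 `internetmonitor` client (the ARN and the limit value are placeholders):

```python
# Sketch: the request for internetmonitor.create_monitor().
# Resources can be VPCs, CloudFront distributions, or WorkSpaces directories.

def monitor_params(name, resource_arns, max_city_networks=100):
    return {
        "MonitorName": name,
        "Resources": list(resource_arns),  # ARNs of the monitored resources
        # Cap on how many city + ISP ("city-network") combinations to track;
        # this is the main knob that affects cost.
        "MaxCityNetworksToMonitor": max_city_networks,
    }
```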

Read more here: Introducing Amazon CloudWatch Internet Monitor | Amazon Web Services

Thank you for reading,


Gilad Neiger
Develeap

DevOps Group Leader, DevOps professional & student of Japanese (日本語の学生)