AWS Aurora Serverless V2 — What’s new?

Sam Gibbons
Contino Engineering
5 min read · Jun 15, 2022


Aurora Serverless V1 has been available since 2018, but significant limitations kept it a niche, specialist product.
Aurora Serverless V2 is much more mature, but will it replace traditional provisioned Aurora DB instances?

Quick overview of the products

Aurora Serverless tries to do for databases what Lambda does for EC2 — less management overhead, hyper-aggressive scaling with little configuration.

What does this mean in practical terms? You provision an Aurora Serverless cluster and configure the minimum and maximum number of Aurora Capacity Units (ACUs) it can use, with each unit being 2 GB of RAM and an equivalent amount of CPU. The database then scales up and down in response to demand, even down to 0 ACUs (in V1) if you configure it to.
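
As a concrete illustration, here is a minimal sketch of setting that capacity range with boto3; the cluster name, credentials and engine version are placeholder assumptions, not values from this article:

```python
import boto3

rds = boto3.client("rds", region_name="eu-west-2")

# Create the cluster with a Serverless v2 capacity range, expressed in ACUs.
rds.create_db_cluster(
    DBClusterIdentifier="demo-serverless-v2",
    Engine="aurora-postgresql",
    EngineVersion="13.6",            # assumption: any Serverless v2-compatible version
    MasterUsername="postgres",
    MasterUserPassword="change-me-please",
    ServerlessV2ScalingConfiguration={
        "MinCapacity": 0.5,          # 1 ACU is roughly 2 GB of RAM
        "MaxCapacity": 16,
    },
)

# Serverless v2 capacity is attached per instance: the special "db.serverless"
# instance class makes this instance scale within the cluster's ACU range.
rds.create_db_instance(
    DBInstanceIdentifier="demo-serverless-v2-writer",
    DBClusterIdentifier="demo-serverless-v2",
    Engine="aurora-postgresql",
    DBInstanceClass="db.serverless",
)
```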

What’s new in Aurora Serverless V2?

Scaling is much improved

Aurora Serverless V1 scales automatically, but it cannot do this whilst there are connections to the database. To scale, the database will either:

  • Wait until there are no in-progress transactions, which means your database must be quiet at the moment you need it to scale

Or

  • You can configure the database to scale regardless; when the scaling happens, in-progress transactions return errors, so every application connecting to the database must be designed to handle those errors.

Aurora Serverless V2 can seamlessly scale up and down without affecting any transactions at all.
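
For reference, that V1 trade-off is controlled by the TimeoutAction field of the cluster's ScalingConfiguration. A minimal boto3 sketch, with placeholder identifiers and credentials:

```python
import boto3

rds = boto3.client("rds", region_name="eu-west-2")

# Aurora Serverless v1: EngineMode="serverless" plus a ScalingConfiguration.
# TimeoutAction decides what happens when no safe scaling point is found:
#   "RollbackCapacityChange"   -> give up on scaling (the "wait" behaviour)
#   "ForceApplyCapacityChange" -> scale anyway, dropping in-flight transactions
rds.create_db_cluster(
    DBClusterIdentifier="demo-serverless-v1",   # placeholder name
    Engine="aurora-mysql",
    EngineMode="serverless",
    MasterUsername="admin",
    MasterUserPassword="change-me-please",
    ScalingConfiguration={
        "MinCapacity": 1,
        "MaxCapacity": 16,
        "AutoPause": True,                      # allow pausing down to 0 ACUs
        "SecondsUntilAutoPause": 300,
        "SecondsBeforeTimeout": 300,
        "TimeoutAction": "ForceApplyCapacityChange",
    },
)
```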

True failover and read replicas

An Aurora Serverless V1 cluster operates as a single read/write instance with no read replicas. If the master fails, a new one is created, but until that finishes you have no database.

V2 clusters have full failover and read-replica feature parity with regular provisioned Aurora clusters. You configure the number of read replicas you want, and if the master fails, one of the read replicas acts as a hot spare, so application disruption is minimal.

Aurora Serverless V2 and provisioned DB capacity can be mixed in the same cluster

Aurora Serverless V1 clusters were completely separate from regular provisioned databases, and they could not be mixed natively.

V2 allows you to mix provisioned DB instances and serverless DB instances in the same cluster, letting you build burst capacity into your traditional database clusters.
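
A minimal sketch of what that mixing looks like with boto3, assuming a cluster that already exists and already has a Serverless v2 capacity range configured (all identifiers are placeholders):

```python
import boto3

rds = boto3.client("rds", region_name="eu-west-2")

CLUSTER = "demo-mixed-cluster"   # assumed existing cluster with a
                                 # ServerlessV2ScalingConfiguration already set

# A fixed-size provisioned instance to carry the baseline load.
rds.create_db_instance(
    DBInstanceIdentifier=f"{CLUSTER}-baseline",
    DBClusterIdentifier=CLUSTER,
    Engine="aurora-postgresql",
    DBInstanceClass="db.r6g.large",
    PromotionTier=0,             # preferred failover target
)

# A Serverless v2 reader to absorb bursts within the cluster's ACU range.
rds.create_db_instance(
    DBInstanceIdentifier=f"{CLUSTER}-burst-reader",
    DBClusterIdentifier=CLUSTER,
    Engine="aurora-postgresql",
    DBInstanceClass="db.serverless",
    PromotionTier=2,
)
```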

Much more feature parity with provisioned Aurora Instances

Aurora Serverless V2 is seamlessly compatible with many provisioned Aurora features that V1 is not, such as:

  • Aurora Global Databases
  • AWS IAM database authentication (see the sketch after this list)
  • Performance Insights
  • RDS Proxy
  • Many more configuration parameters
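
To make the IAM authentication point concrete, here is a minimal sketch of connecting with a short-lived token instead of a stored password. The hostname, user and database are placeholder assumptions, and the database user needs the rds_iam role granted:

```python
import boto3
import psycopg2  # assumption: PostgreSQL-compatible cluster, psycopg2 installed

rds = boto3.client("rds", region_name="eu-west-2")

HOST = "demo-serverless-v2.cluster-xxxx.eu-west-2.rds.amazonaws.com"  # placeholder

# Generate a short-lived IAM auth token instead of storing a database password.
token = rds.generate_db_auth_token(
    DBHostname=HOST,
    Port=5432,
    DBUsername="app_user",       # placeholder user, granted rds_iam in the DB
)

conn = psycopg2.connect(
    host=HOST,
    port=5432,
    user="app_user",
    password=token,              # the IAM token is used as the password
    dbname="appdb",
    sslmode="require",           # IAM authentication requires SSL
)
```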

More granular scaling

Aurora Serverless V1 only scales by doubling: if the database has 4 ACUs and needs to scale, the next step is 8 ACUs, then 16. Serverless V2 can scale in increments as small as half an ACU (0.5 ACUs), which promises to make it more efficient with resources.

What are the limitations of Aurora Serverless V2?

Eye-watering pricing

Aurora Serverless V2 is astonishingly expensive. At the time of writing, each GB of Serverless V2 RAM costs twice as much as V1 and more than three times as much as provisioned Aurora capacity:

(Pricing comparison for the eu-west-2 region, PostgreSQL-compatible edition)

Given that the whole point of a complicated scaling product is to save money, this is a significant drawback.

The pricing problem is further exacerbated by the fact that Savings Plans and RDS Reserved Instances do not apply to Aurora Serverless, whereas provisioned Aurora instances can be covered by Reserved Instance pricing.
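
As a back-of-envelope check on when that premium is worth paying, the break-even point is simply the ratio of the two per-GB-hour prices. A sketch with placeholder prices (substitute the current eu-west-2 rates; these numbers are illustrative, not quoted from AWS):

```python
# Rough break-even: Serverless v2 only saves money if your average utilisation
# of an equivalent provisioned instance would be below
# provisioned_price / serverless_price.

provisioned_per_gb_hour = 1.0   # placeholder: cost of 1 GB RAM-hour, provisioned
serverless_per_gb_hour = 3.0    # placeholder: ~3x provisioned, per the article

break_even_utilisation = provisioned_per_gb_hour / serverless_per_gb_hour
print(f"Serverless v2 is cheaper only below ~{break_even_utilisation:.0%} "
      "average utilisation of the provisioned alternative")
# -> ~33% with these placeholder numbers
```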

Black-box autoscaling

The whole product only makes sense if the autoscaling works very well. Good autoscaling is notoriously hard to get correct, but AWS don’t expose configuration options for the autoscaling. For example, you can’t configure the databases to scale out faster for services with very bursty demand — if you have a problem with the default scaling configuration, there are no easy solutions.

No more Data API

Serverless V1 has the "Data API", which exposes the serverless database over a simple HTTP endpoint and grants access to that endpoint with IAM permissions.

This HTTP endpoint is accessible to Lambda functions without attaching them to a VPC, which makes granting access to the database very easy; normally you would either need to run the Lambda in a VPC with access to the database, or proxy the traffic to the database through a server.

The Data API also pools database connections, which solves a problem if you plan to connect to the database from hundreds of simultaneous Lambda executions: without pooling, each one creates its own connection.
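
For context, a V1 Data API call from a Lambda function looks roughly like this: plain HTTPS via the SDK, authorised by IAM, with no VPC attachment and no connection management. The ARNs and database name are placeholder assumptions:

```python
import boto3

# The Data API client talks to the database over HTTPS, so the calling
# Lambda function needs no VPC configuration and holds no open connection.
rds_data = boto3.client("rds-data", region_name="eu-west-2")

response = rds_data.execute_statement(
    resourceArn="arn:aws:rds:eu-west-2:123456789012:cluster:demo-serverless-v1",      # placeholder
    secretArn="arn:aws:secretsmanager:eu-west-2:123456789012:secret:demo-db-creds",   # placeholder
    database="appdb",
    sql="SELECT id, name FROM customers WHERE id = :id",
    parameters=[{"name": "id", "value": {"longValue": 42}}],
)
print(response["records"])
```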

Serverless V2 does not have the Data API, but it is compatible with RDS Proxy. That solves the pooling issue but not the connectivity issue, so Serverless V2 is actually slightly harder to use with Lambda functions.

Sizing limitations

Each Aurora Capacity Unit (ACU) is 2 GB of RAM.

Serverless V1 allowed you to scale down to 0 ACUs (no resources) and automatically scale back up when requests were sent to the database, similar to how Lambda functions can go dormant when no requests are being made.

Serverless V2 instances can scale as low as 0.5 ACUs (but not to 0) and as high as 128 ACUs, which is 256 GB of RAM.

This means Serverless V2 cannot be as efficient as Serverless V1 for very low traffic, and it cannot scale as high as the largest provisioned Aurora database instances, which have 768 GB of RAM.

Auto-scaling in comparison to Provisioned Aurora instances

Aurora Serverless V2 has feature parity with provisioned Aurora in most ways, except for the auto-scaling and the pricing.

The pricing differences are addressed in the limitations section.

The autoscaling for provisioned Aurora instances only works horizontally: the cluster scales by adding new read replicas, not vertically by resizing existing ones. The maximum number of replicas you can have in an auto-scaling group is 15.

V2 autoscaling is the opposite: the number of replicas is fixed, and it is the size of each instance that changes.
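
For comparison, horizontal replica auto scaling on a provisioned cluster is configured through Application Auto Scaling rather than on the cluster itself. A minimal sketch, where the cluster name and target value are placeholder assumptions:

```python
import boto3

aas = boto3.client("application-autoscaling", region_name="eu-west-2")

# Register the cluster's replica count as a scalable target (up to 15 replicas).
aas.register_scalable_target(
    ServiceNamespace="rds",
    ResourceId="cluster:demo-provisioned-cluster",      # placeholder cluster name
    ScalableDimension="rds:cluster:ReadReplicaCount",
    MinCapacity=1,
    MaxCapacity=15,
)

# Add replicas when average reader CPU rises above the target, remove them below it.
aas.put_scaling_policy(
    PolicyName="reader-cpu-target-tracking",
    ServiceNamespace="rds",
    ResourceId="cluster:demo-provisioned-cluster",
    ScalableDimension="rds:cluster:ReadReplicaCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "RDSReaderAverageCPUUtilization",
        },
        "TargetValue": 60.0,
    },
)
```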

For read-heavy loads, V2 autoscaling doesn't appear to offer any significant benefit, since adding and removing read replicas is non-destructive, provided you're using the cluster's reader endpoint and not connecting to individual instances directly.

For write-heavy loads, V2 autoscaling has a significant advantage over provisioned autoscaling, as there is no way to horizontally or vertically scale the write capacity of a provisioned database cluster without disruption, and V2 promises to do that seamlessly.

Is V2 Autoscaling Useful?

Potentially.

This new version changes Aurora Serverless from something that you need to specifically design your applications around to something that, in theory, is plug and play for most existing database usage. It has feature parity with most provisioned Aurora features, and can even be used inside an Aurora cluster to lend it additional autoscaling functionality.

This product is a bit more complicated than regular provisioned Aurora clusters, but it allows genuinely seamless autoscaling of database resources, which is otherwise a disruptive action if you want to scale write capacity.

The removal of the Data API and of scaling down to 0 ACUs, along with the price increase, make it less useful for pure serverless apps or extremely low-volume usage.

The cost is significant, as is the fact that the autoscaling configuration cannot be tweaked. For very bursty, write-heavy loads, though, this could be a significant cost saver; the hefty price just makes it a tool whose use must be justified.
