Caching Strategy with DynamoDB Streams, AWS Lambda and ElastiCache

Fernando Pereiro · Jul 4, 2017

I recently had the privilege of participating in a workshop about ElastiCache at the offices of Amazon Web Services in Madrid, a day attended by five other AWS fans and orchestrated by Mike Labib (@MichaelSLabib), Global Specialist Solution Architect at Amazon Web Services. Besides giving me the chance to drain as much of Mike's (seemingly inexhaustible) knowledge as possible, the three-hour workshop made me realize many things, of which I will highlight two:

  • ElastiCache is awesome!! Personally, I think it does not get the attention it deserves; we simply do not think about it as often as we should.
  • Combined with other AWS services, ElastiCache makes design patterns and deployments possible that we did not have before, so we can now aim further in terms of performance and possibilities for our projects.

As an architect, one of the topics that most obsesses me is caching strategy. Normally, the way to use a caching system is very simple: our front-end asks our back-end for data. If the data is cached it is returned to us immediately; if it is not found, the back-end gets the data from its source (a database is usually the most common source), stores it in the cache and returns it to us, leaving it available in the cache for future requests. Reading this, many of you will think of "Lazy Loading".
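In code, lazy loading looks roughly like this (just a sketch: getFromDatabase stands in for whatever your real data source is):

// Lazy Loading: try the cache first and fall back to the database on a miss.
function getData(key, callback) {
    redisClient.get(key, function(err, cached) {
        if (err) return callback(err);
        if (cached) return callback(null, JSON.parse(cached)); // cache hit
        getFromDatabase(key, function(err, data) {              // cache miss: go to the source
            if (err) return callback(err);
            redisClient.set(key, JSON.stringify(data));         // leave it cached for next time
            callback(null, data);
        });
    });
}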

This strategy has the advantage that it only caches the information that is actually requested and, in addition, a possible downtime of the cache is not dangerous for the rest of the system. As disadvantages, we get considerably higher latency than average the first time a piece of data is requested and, in addition, we lose data integrity, since the cache has no way of synchronizing with information that has been updated in the database.

Today we are going to get hands-on with another caching strategy: "Write Through" (a little improved). This strategy consists of updating the cache at the same time as the database, whenever data is added, modified or deleted. We will build it with three pieces of AWS, DynamoDB, ElastiCache and Lambda, in the most automated way possible. I do not want to say that this strategy is the solution for every situation, but I do believe it is a more effective approach for many systems: although we will probably hold a larger volume of cached data, we gain in data integrity, since the information in the cache will always be a faithful reflection of the database.

The strategy looks something like this: every write that goes to the database is propagated to the cache at the same time, so readers always find up-to-date data in the cache.

And with the AWS services we will use, the flow is: items written to DynamoDB travel through DynamoDB Streams to a Lambda function, which writes them into ElastiCache.

We will start by creating our DynamoDB table.

In the AWS console we will go to DynamoDB and we will click on the button “Create table”.

On the next screen we will give the table a name (I used "Players") and a primary key called "Number" (of type Number), leave all the other options at their default values and click the "Create" button.
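If you would rather script this step, the equivalent call with the AWS SDK for Node.js would look roughly like this (the region and throughput values are just examples; note the stream specification, which the console will also enable for us later when we wire up the trigger):

var AWS = require('aws-sdk');
var dynamodb = new AWS.DynamoDB({region: 'eu-west-1'}); // example region

dynamodb.createTable({
    TableName: 'Players',
    AttributeDefinitions: [{AttributeName: 'Number', AttributeType: 'N'}],
    KeySchema: [{AttributeName: 'Number', KeyType: 'HASH'}],
    ProvisionedThroughput: {ReadCapacityUnits: 5, WriteCapacityUnits: 5}, // defaults are fine for the demo
    // The stream is what will feed our Lambda function later on.
    StreamSpecification: {StreamEnabled: true, StreamViewType: 'NEW_AND_OLD_IMAGES'}
}, function(err, data) {
    if (err) console.error(err);
    else console.log(data.TableDescription.LatestStreamArn);
});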

It will take a moment for the table to be created; we do not need to wait, so in the meantime we will create our ElastiCache node.

After starting the process of creating a new ElastiCache cluster, we will indicate that we want the Redis engine without Cluster Mode, give it a name, change the node type to cache.t2.micro (free tier) and set the Number of replicas to "None".

Next we will open the advanced settings, where we will do a couple of things. The first is to define a group of subnets across which the cluster can be deployed (a minimum of two subnets per group is always recommended; in this case I added three private subnets of my VPC).

It is important to keep in mind which VPC, subnets and Security Group we are using here, because we will need the same ones when creating the Lambda function.

The following step is to indicate a Security Group; for this demo I have created a specific one.
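For reference, the same cluster definition expressed with the AWS SDK for Node.js would be roughly this (the identifier, subnet group and Security Group are placeholders for your own):

var AWS = require('aws-sdk');
var elasticache = new AWS.ElastiCache({region: 'eu-west-1'}); // example region

elasticache.createCacheCluster({
    CacheClusterId: 'players-cache',             // placeholder name
    Engine: 'redis',                             // Redis without Cluster Mode
    CacheNodeType: 'cache.t2.micro',             // free tier
    NumCacheNodes: 1,                            // a single node, no replicas
    CacheSubnetGroupName: 'my-private-subnets',  // the subnet group defined above
    SecurityGroupIds: ['sg-0123456789abcdef0']   // the demo Security Group
}, function(err, data) {
    if (err) console.error(err);
    else console.log(data.CacheCluster.CacheClusterStatus); // "creating"
});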

We will leave all other fields at their default values and create the cluster. This will take a little longer than the DynamoDB table, so in the meantime, let's go where the magic happens: Lambda!!

Before creating the function, we will create the role that the function will assume while it executes. Basically it is a role to which we will grant permissions on ElastiCache, DynamoDB Streams and the VPC. This point is very important, since it is where people usually fail: it is here that all the permissions to perform actions on all the pieces are granted.

On the role creation screen we will indicate that it is a role for Lambda.

We will filter the policies and select Full Access for ElastiCache.

Because we will place the Lambda function inside a VPC, the role we assign to it must carry the policy that allows it to execute there: when a Lambda function runs inside a VPC it attaches an ENI (Elastic Network Interface), so the role needs the permissions required to manage that ENI.

Our Lambda function will be triggered by DynamoDB and therefore needs the permissions to act on DynamoDB Streams.

On the following page we will give the role a name and create it.
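For those who prefer to see the permissions spelled out instead of attached as managed policies, the role needs roughly the following (a sketch, with resources left open for brevity; the managed policies AWSLambdaDynamoDBExecutionRole and AWSLambdaVPCAccessExecutionRole cover the same ground):

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "dynamodb:GetRecords",
                "dynamodb:GetShardIterator",
                "dynamodb:DescribeStream",
                "dynamodb:ListStreams"
            ],
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "ec2:CreateNetworkInterface",
                "ec2:DescribeNetworkInterfaces",
                "ec2:DeleteNetworkInterface"
            ],
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "logs:CreateLogGroup",
                "logs:CreateLogStream",
                "logs:PutLogEvents"
            ],
            "Resource": "*"
        }
    ]
}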

Now, let’s go for Lambda!

When working with Lambda, in many cases we can simply type the code we want to run directly into the AWS console. In this case, however, we will create the Lambda function by uploading a Node.js deployment package, since we need to import the Redis client in order to perform actions on our cluster.

Here is the code: https://github.com/fepereiro/WriteRedisFromDynamoDB

I am not going to expand much on the code, but I would like to point out the most relevant parts. If we take a look at the code of exports.js we will notice several things:

var redisClient = redis.createClient(6379, process.env.URL, {no_ready_check: true});

First, we connect to the default Redis port (6379) and use an environment variable to indicate the hostname of the Redis endpoint.

event.Records.forEach((record) => {

We loop through all the records that have been inserted, modified or deleted in DynamoDB, which arrive inside the event parameter of our function.

var key = record.dynamodb.Keys.Number.N;

For each record we extract its primary key, the one we defined when creating the table.
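To see why that path works, here is roughly the shape of a stream record for an insert (trimmed to the fields we use; the values are illustrative):

{
    "eventName": "INSERT",
    "dynamodb": {
        "Keys":     { "Number": { "N": "23" } },
        "NewImage": { "Number": { "N": "23" }, "Name": { "S": "Michael Jordan" } }
    }
}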

if (record.eventName === "INSERT" || record.eventName === "MODIFY") {
    var value = JSON.stringify(record.dynamodb.NewImage);
    console.log('Inserting value: ' + value);
    redisClient.set(key, value, function(err) {
        globalCallback(err);
    });
    console.log('Value inserted.');
} else if (record.eventName === "REMOVE") {
    console.log('Removing key/value.');
    redisClient.del(key, function(err) {
        globalCallback(err);
    });
    console.log('Value removed.');
}

And finally, depending on the specific event of each record, we perform the appropriate actions.
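Putting the fragments together, the complete handler looks roughly like this (a condensed sketch for orientation; the repository linked above is the reference version):

var redis = require('redis');

// URL holds the cluster's Primary Endpoint, injected as a Lambda environment variable.
var redisClient = redis.createClient(6379, process.env.URL, {no_ready_check: true});

exports.handler = function(event, context, globalCallback) {
    event.Records.forEach(function(record) {
        var key = record.dynamodb.Keys.Number.N;
        if (record.eventName === "INSERT" || record.eventName === "MODIFY") {
            // Write-through: mirror the item's new image into Redis under the same key.
            redisClient.set(key, JSON.stringify(record.dynamodb.NewImage), function(err) {
                globalCallback(err);
            });
        } else if (record.eventName === "REMOVE") {
            // The item is gone from DynamoDB, so evict it from the cache as well.
            redisClient.del(key, function(err) {
                globalCallback(err);
            });
        }
    });
};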

I recommend that you download the code and generate a .zip file with all the contents of the repository; this .zip file is the one we will upload when creating the Lambda function, so let's go there.
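If you have Node.js installed locally, the packaging steps could look like this (assuming the repository declares the redis client as a dependency in its package.json):

git clone https://github.com/fepereiro/WriteRedisFromDynamoDB
cd WriteRedisFromDynamoDB
npm install            # pulls node_modules, including the redis client
zip -r function.zip .  # the .js files and node_modules must sit at the root of the .zip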

Now it's time to go to the Lambda service in the console and create a new function of the "Blank Function" type.

We will choose DynamoDB from the list of possible triggers, select the table we created previously and the starting position, enable the trigger and click "Next".

We must give the function a name, select Node.js as the Runtime and specify "Upload a .ZIP file" as the Code entry type, which is where we upload the .zip generated from the code of the GitHub repository. Next we will create the environment variable we saw in the code, indicating its name and its value (the value must be the endpoint of the cluster we created earlier; to get it, just navigate to ElastiCache, select your cluster and copy the Primary Endpoint).

We will indicate the handler we are using in our code, and select the role we have created previously.

In the advanced settings we will choose the VPC, subnets and Security Group that we used when creating the ElastiCache cluster.

The next step is to finish the process, create the function and test everything we have done!!

We will go to our "Players" table in DynamoDB and add a new Item; any values will do, as long as the item has its "Number" key.
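For instance, something like this (the values are made up; only the Number key matters):

{
    "Number": 23,
    "Name": "Michael Jordan"
}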

As soon as the new item is saved, DynamoDB will automatically trigger the Lambda function we created, which will save the new data in our Redis cluster.

To check what happened, let’s go to CloudWatch.

Once in CloudWatch we will go to Logs and filter to find what our Lambda function has logged. Navigating through the sub-levels, we will find something like this:

At this point we can see that everything executed correctly, along with the detailed traces we left in the Lambda function.
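And if you want to check the cache itself and not just the logs, you can query Redis with redis-cli from an instance inside the same VPC and Security Group (the host below is a placeholder for your Primary Endpoint):

redis-cli -h my-cluster.xxxxxx.0001.euw1.cache.amazonaws.com -p 6379 GET 23
# prints the stringified NewImage of item 23 if the write-through worked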

I will leave it up to you to run all kinds of tests: creating new items in DynamoDB, modifying existing items and deleting them. But regardless of the tests, I want you to notice the execution time: it is incredibly fast and lightweight!!

As you can see there are two measures of time and two measures of size. The reason is that, on the one hand, we have what the function actually consumed and, on the other, what AWS will charge us for. To understand this:

  • For billing purposes, AWS rounds execution times up to the nearest 100-millisecond unit.
  • Also for billing purposes, AWS defines memory size tiers for executions; the smallest tier is 128 MB. A quick calculation follows below.
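To make this concrete, here is a rough back-of-the-envelope calculation for one of these invocations (the per-GB-second price is the one published at the time of writing; check the current pricing page):

// Rough cost of a single 4 ms invocation in the smallest (128 MB) size tier.
var billedMs = Math.ceil(4 / 100) * 100;          // 4 ms is billed as 100 ms
var gbSeconds = (128 / 1024) * (billedMs / 1000); // 0.125 GB * 0.1 s = 0.0125 GB-s
var pricePerGbSecond = 0.00001667;                // USD, per the 2017 price list
console.log(gbSeconds * pricePerGbSecond);        // ~0.00000021 USD per invocation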

Surely at this point you can think of many uses, many tests and a thousand other things, and I would love this to be your starting point, as it was for me when I listened to Mike's words at the workshop: a starting point to try new things and to consider new design patterns and strategies.

In closing, here’s a challenge for you: How could we combine different caching strategies to obtain more reliable systems?

I hope you liked this article :)

Fernando Pereiro

Highly experienced DevOps and Cloud Solutions Architect.