Taming of the Queue

Tal Maoz · Published in cisco-fpie · Aug 10, 2022

Building Dynamic Data Pipes Using AWS DocumentDB, MSK and Lambda

There are many data-driven applications that require online processing of data as well as storing the raw data. Some examples include recommendation engines, IoT processors, event-driven services and more. If you’ve ever needed to build some kind of data processing pipeline or workflow, you’re probably familiar with the challenges of handling multiple data types while accounting for the possibility of new data types coming in during the lifetime of your system.

In this article I will discuss a solution I came up with for a recent project using AWS DocumentDB, MSK and Lambda functions, and will provide instructions for deploying a simple pipeline along with useful Go code snippets.

The Design

When building a data processing pipeline, one typically needs two components: data transmission and processing blocks. Processing blocks are manipulations applied to the incoming data in a given sequence that may change depending on the type of data, while the data transmission is the means by which data is moved between the various processing blocks. You can think of this as a production line where the conveyor belt moves the products in their various stages between the various stations until the final product comes out the end of the line. The conveyor belt is our data transmission, and the stations are the processing blocks.

More often than not, you also want a means of storage for the raw data because you never know what you may want to do with it later on, or if some error would force you to run your data through the pipeline again.

My design requirements were simple:

  1. Build a fully managed system
  2. Keep a copy of all the raw data
  3. Process data in near real-time and with low latency
  4. Account for the possibility of new data types coming in over time without requiring system downtime

After looking into several options, I settled on the following design for my pipeline:

Architecture Diagram

Let us start by going over the building blocks we will use to understand what they are and how we can use them:

AWS DocumentDB is Amazon’s managed MongoDB service based on MongoDB 3.6 or 4.0. As such, it can store, query and index JSON and BSON data. MongoDB is a source-available NoSQL JSON database that uses JavaScript as the basis for its query language, thus allowing you to also run JavaScript functions server-side. Within a MongoDB deployment, one can define multiple DBs with multiple collections in each one. One useful feature introduced in Mongo 3.6 is Change Streams. Change Streams allow applications to access real-time data changes by registering for events on specific collections. The event notifications can be configured to include the deltas or full documents and capture “insert”, “update” and “delete” events.

AWS MSK (Managed Streaming for Apache Kafka) is Amazon’s managed Kafka service. Apache Kafka is a widely used open-source distributed event store and stream-processing platform. Kafka is designed for high-throughput low-latency real-time data processing. With Kafka you can define “topics” to which you can publish key/value messages. Multiple publishers can publish to a topic and multiple consumers can consume it. Kafka topics are managed using a ZooKeeper cluster while publishing and consuming is done via Kafka brokers.

AWS Lambda is a serverless, event-driven compute service that lets you run code in response to various events and triggers while automatically managing the underlying compute resources required by your code. Lambda supports many programming languages including Node.js, Python, Java, Ruby, C# and Go. A very useful feature of Lambda is the ability to trigger functions on Kafka topics, which makes it ideal for usage as part of a data processing pipeline.

AWS VPC (Virtual Private Cloud) is a way to create isolated virtual networks within AWS. This allows you to contain services in a more secure environment and gives you total control over who and what has access to the network and the resources within.

AWS EC2 is Amazon’s Elastic Compute Cloud where you can deploy virtual or physical instances (computers) and manage their security and networking properties.

AWS S3 is a Simple Storage Service. With the S3 object storage you can create buckets and then store files and folders within them. S3 gives you full access control and security.

AWS IAM is Amazon’s Identity and Access Management system. IAM lets you manage users, roles, and policies so you can achieve fine-grained access control over your resources and grant access to other users and AWS accounts in a secure way.

AWS CloudWatch is Amazon’s observability platform where you can aggregate logs from AWS services and easily filter and search through them.

Putting Things Together

The idea is that DocumentDB is used as the entry point into the pipeline while MSK acts as the data transmission. Each path between processing blocks is implemented using a Kafka topic. One processing block publishes its output to the topic while the next block in line consumes the topic to get its input. The first processing block acts as a “router” that analyzes the new data and decides what type it is. It then publishes the data to a dedicated topic for that data type so the proper processing blocks can be applied to it.

I start by inserting a new piece of raw data into DocumentDB. Next, I use an MSK connector to register to a change stream for my DocumentDB collection and push the newly inserted documents into an initial MSK topic, which is the input to the “router”. Then, I configure a Lambda-based “router” function to consume the initial MSK topic, analyze the messages and publish each one to a dedicated MSK topic. Finally, for each data-type dedicated topic, I would have a specific Lambda function that knows how to process that data. I can then continue building more processing elements on the pipeline as required.

Once all the pieces are in place, all I have to do to run new data through the pipeline is simply insert it into my DocumentDB collection. From that point on, everything happens automatically. Moreover, by using the combination of Kafka topics and Lambda functions, I can dynamically create topics for new types of data messages and then define handlers to process them. The messages would wait in the topic until I build a processor that can handle them, and as soon as I deploy the new processor, it can start processing the messages, which means messages are never lost. This design also allows me to dynamically change the layout of my processing pipeline over time.

In order to configure DocumentDB and MSK, I make use of a bastion instance that I deploy on EC2. This instance allows me to connect to my VPC using a secure SSH connection as well as use port forwarding to give my local environment access to the VPC.

I use an S3 bucket to store the Kafka connector package as well as my Lambda functions’ code package. In addition, I use IAM to create the required execution roles for the Kafka connector and the Lambda functions.

Finally, I use CloudWatch to gain visibility into what the Lambda functions are doing by funneling the Lambda logs into CloudWatch log groups.

Let us now go over each of the components and see how to provision and/or deploy them.

VPC Gateway

DocumentDB as well as MSK are both deployed only in a VPC. Thus, in order to connect to them from your local machine for development, testing and debugging, you need to create a gateway into your VPC. We will use the default VPC, but any VPC can be used instead. Please refer to the AWS documentation for information on how to create a new VPC in case the default one is not appropriate.

We start by creating an EC2 instance that we will use as our gateway. Simply launch a new EC2 instance in the default VPC and choose Ubuntu 20.04 as the OS image (there is no support for the mongo CLI in Ubuntu 22.04 at the time of writing this article):

Launch New EC2 Instance

Next, create an SSH key pair you will use to access the new instance from your local machine. Click on “Create new key pair” to create a new key pair and download the private key, or choose an existing one:

New EC2 Instance SSH Key Pair

Next, we look at the “Network Settings” section. Make sure you select the VPC you wish to use and the security group:

New EC2 Instance Network Settings

Finally, launch your new instance. Once your instance is up, you can SSH into the new instance:

SSH Into the New Instance

DocumentDB

Now that we have a VPC, we can start looking into deploying DocumentDB

Architecture — DocumentDB

Start by going into the Amazon DocumentDB dashboard and click on “Create Cluster”. Give your cluster a name, make sure the selected engine version (MongoDB version) is “4.0.0” and select the desired instance class.

The connectors we will use to let MSK register to the DocumentDB change stream require that the MongoDB deployment be part of a replica-set so make sure the number of instances is greater than 1:

DocumentDB — Launch New Instance

Next, under the authentication section, fill in the admin username and password you would like to use in your cluster.

Now, click on the “Show advanced settings” toggle at the bottom to open the network settings and make sure that the selected VPC is the same as the one in which you deployed your EC2 instance:

DocumentDB — New Instance Network Settings

Tweak any other settings and then click on the “Create cluster” button at the bottom to launch the new cluster. The process takes a few minutes and then you will see the following or similar according to your choices:

AWS Document DB Cluster

To test our new cluster, we need to install the mongo client in our EC2 gateway instance. Follow these instructions to do so: https://www.mongodb.com/docs/mongodb-shell/install/

Next, go into the cluster details in the AWS console and follow the instructions to download the CA certificate to your EC2 instance, and then run the newly installed mongo client to connect to your new cluster:
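The exact command (including the CA bundle download link) is shown in the cluster’s “Connect” section of the console; it follows this general form, assuming the mongo 4.0 shell and your own cluster endpoint and admin credentials:

wget https://s3.amazonaws.com/rds-downloads/rds-combined-ca-bundle.pem
mongo --ssl --host <YOUR_CLUSTER_ENDPOINT>:27017 --sslCAFile rds-combined-ca-bundle.pem --username <YOUR_ADMIN_USER> --password <YOUR_PASSWORD>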

Connecting to DocumentDB using mongo shell

If the connection fails, you may need to manage the DocumentDB cluster’s security group to allow access to your EC2 instance’s security group. To do so, go into the cluster’s details and scroll down to the “Security Groups” section:

AWS Security Groups

Select the security group and then go into the “Inbound rules” tab:

AWS Security Group Inbound Rules

Click on the “Edit inbound rules” button to edit the inbound rules and then add a rule that allows the traffic type you need (if you need to specify a port, use 27017) from a “Custom” source. In the search box, search for and select the security group you used for the EC2 instance:

Adding Inbound Rule to AWS Security Group

Finally, save the rules. You should now have access from your instance to DocumentDB.

To make development and debugging easier, you may want to use a tool such as Robo 3T, which provides an intuitive, easy-to-use GUI for viewing and managing MongoDB data conveniently. You will need to forward port 27017 from your local machine to DocumentDB via the EC2 instance using SSH:
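A minimal example of such a forwarding command, assuming the default “ubuntu” user, your key pair file and your cluster endpoint:

ssh -i <YOUR_KEY>.pem -N -L 27017:<YOUR_DOCDB_CLUSTER_ENDPOINT>:27017 ubuntu@<YOUR_EC2_PUBLIC_IP>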

SSH to the Gateway with Port Forwarding to DocumentDB

Now you can configure your local Robo 3T or mongo client to access port 27017 on your local machine:

Robo 3T — New Connection

For Robo 3T, make sure to allow invalid hostnames because your local hostname is different than the one in the CA certificate:

Robo 3T — setting CA Certificate

Now, we can create a new Database called “pipeline” and in it a collection called “intake”. We also create a new user called “puser” that has read permissions for the “pipeline” database:

Robo 3T — After Creating A Collection and A User

The last thing we need to do is enable change streams on our new collection. To do that we need to connect to DocumentDB as we did above and then run the following command:
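The command, as documented for DocumentDB change streams, enables the stream for the pipeline.intake collection:

db.adminCommand({modifyChangeStreams: 1, database: "pipeline", collection: "intake", enable: true});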

If you are using Robo 3T as I do, right click on the “pipeline” database in the tree on the left and then select “Open Shell”. Now you can enter the above command and use CTRL+ENTER to execute it.

MSK

Now that we have DocumentDB set up, we can move on to MSK.

Architecture — MSK

We will be deploying our MSK cluster using the “Quick create” option. For this little demo we will use the “kafka.t3.small” flavor and allocate only 1GB of space. If you need to change the network settings to choose a different VPC, zones and subnets, you will have to switch from “Quick create” to “Custom create”.

In any case, make sure that your MSK cluster is in the same VPC and subnets as the gateway EC2 instance. Otherwise, you would have to start configuring routing between VPCs or subnets, which we will NOT cover in this article.

When done, click on the “Create cluster” button and wait until your cluster is up:

MSK Cluster

To test our cluster, we need to get the brokers’ addresses. Click on the cluster, select the “properties” tab and scroll down to the “Brokers” section. There, you will find a list of the brokers that have been deployed as part of the cluster:

MSK Cluster Brokers

Managing a Kafka cluster is done using the Kafka CLI tools. The CLI tools require the Java runtime, as Kafka is written in Scala, which runs on a JVM (Java Virtual Machine); we are using openjdk-8-jre. Now, download the Kafka package from https://kafka.apache.org/downloads to the EC2 instance and extract it. For this document, we are using Kafka 3.1.0 built for Scala 2.13 (kafka_2.13-3.1.0).
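For reference, a rough sketch of that setup on the Ubuntu gateway instance (the exact download URL depends on the mirror and version you pick):

sudo apt-get update && sudo apt-get install -y openjdk-8-jre
wget https://archive.apache.org/dist/kafka/3.1.0/kafka_2.13-3.1.0.tgz
tar -xzf kafka_2.13-3.1.0.tgz
cd kafka_2.13-3.1.0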

Next, use the “kafka-topics” command to get the list of existing topics. You need to provide a bootstrap server, which can be any of the brokers in the cluster (we use the first one):
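For example, substituting the first broker address from your cluster:

bin/kafka-topics.sh --list --bootstrap-server <BROKER_1>:9092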

List Kafka Topics

Please note that port 9092 does NOT use TLS. If you wish to use a secure TLS connection, you should follow these steps:

  1. Create a client profile.

  2. Create the initial trust store. Note that the location of the “cacerts” file will change according to the JRE you installed on your machine.

  3. Finally, run the command with the client profile, as shown in the combined sketch below.
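A rough sketch of all three steps, assuming the openjdk-8 “cacerts” location on Ubuntu (adjust paths to your environment):

# 1. Create a client profile that points Kafka clients at a TLS trust store
cat > client.properties <<EOF
security.protocol=SSL
ssl.truststore.location=/home/ubuntu/kafka.client.truststore.jks
EOF

# 2. Create the initial trust store from the JRE's default cacerts
cp /usr/lib/jvm/java-8-openjdk-amd64/jre/lib/security/cacerts /home/ubuntu/kafka.client.truststore.jks

# 3. Run the command against the TLS port (9094) with the client profile
bin/kafka-topics.sh --list --bootstrap-server <BROKER_1>:9094 --command-config client.properties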

Now that we have our MSK cluster deployed and accessible, we can create our initial topic:
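For example (the replication factor must not exceed the number of brokers in your cluster):

bin/kafka-topics.sh --create --topic pipeline.intake.inserts --partitions 1 --replication-factor 2 --bootstrap-server <BROKER_1>:9092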

Let us publish a test message to our new topic:
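A quick test with the console producer and consumer (type a message and press ENTER in the producer, then CTRL+C; the consumer should read it back):

bin/kafka-console-producer.sh --topic pipeline.intake.inserts --bootstrap-server <BROKER_1>:9092
bin/kafka-console-consumer.sh --topic pipeline.intake.inserts --from-beginning --bootstrap-server <BROKER_1>:9092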

We can see that our new message was, indeed, published and can be consumed as well.

But how do we clear out a topic? There is no way to directly delete messages from a topic. Instead, we have to change the retention policy and wait for Kafka to delete all the expired messages for us before we can restore our original retention policy.

First, we get the current settings and see that there is no policy set:
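For example:

bin/kafka-configs.sh --describe --entity-type topics --entity-name pipeline.intake.inserts --bootstrap-server <BROKER_1>:9092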

Thus, the default policy applies, which is 7 days (https://docs.confluent.io/platform/current/installation/configuration/topic-configs.html). We now change that policy to one second:
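One way to do that (retention.ms is expressed in milliseconds):

bin/kafka-configs.sh --alter --entity-type topics --entity-name pipeline.intake.inserts --add-config retention.ms=1000 --bootstrap-server <BROKER_1>:9092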

It takes a little while for the new policy to take effect, but once it does, we can run our consumer and see that there is nothing in the topic:

Finally, we can restore the default policy:
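Deleting the override brings back the default retention:

bin/kafka-configs.sh --alter --entity-type topics --entity-name pipeline.intake.inserts --delete-config retention.ms --bootstrap-server <BROKER_1>:9092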

S3

In order to create a Kafka MongoDB connector and to make reusing Lambda code easier, we need to create an S3 bucket where the code packages will be kept.

To do that, start by going to the S3 dashboard and create a new bucket by clicking the “Create bucket” button. Fill in the name for your new bucket, select the requested region and then scroll down and click on the “Create bucket” button. We keep all the default options for now, but you can play with them later if you wish to change anything.

IAM Execution Role

Our next challenge is tying DocumentDB to Kafka so that inserting new documents into DocumentDB would automatically put notifications with the full document data into a given Kafka topic. For this we are going to use a Kafka connector that will register for a Mongo change stream for our collection and then publish the new documents to the chosen Kafka topic.

We will start by creating an IAM execution role for our new connector. Note that when creating a connector, AWS will give you the option to create an execution role. However, it turns out that due to some changes made by AWS to how execution roles work, using this option results in a Service Linked role that is not usable by MSK Connect. AWS is aware of this issue but has not fixed it as of the date of writing this article. So, we need to create our own role manually…

Go to the IAM console, select the “Roles” section on the left and then click on “Create role” on the top right:

IAM — Create New Role

Next, select the “AWS account” option and then click on “Next”. At this point you can select a policy to use. None of these policies are good for us so just click on “Next” again. Now give your role a name and description and then scroll to the bottom and click on “Create role”.

Now that we have a role, we need to configure the proper permissions so find your role in the list of roles and click on it. Under the “Permissions” tab click on the “Add permissions” button to open the drop-down menu and select “Create inline policy”:

IAM — Creating a New Policy For MSK Connect

We would now like to manually enter a policy so select the “JSON” tab and then replace the existing empty policy with the following one:
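The original policy is not reproduced here; a deliberately broad sketch that covers what MSK Connect needs (cluster access, reading the plugin from S3 and writing logs) might look like this:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "kafka:*",
        "kafka-cluster:*",
        "s3:GetObject",
        "s3:ListBucket",
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ],
      "Resource": "*"
    }
  ]
}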

Now click on “Review policy” and then give your new policy a name:

IAM — Review Policy

Note that this policy grants many permissions. We do this for simplicity, but you may want to experiment and limit the permissions you grant for better security.

Finally, click on the “Create policy” button at the bottom. You will now be able to see your new policy listed in your role:

IAM — New Inline Policy

Next, we need to add the proper trust policy so click on the “Trust relationships” tab and then on “Edit trust policy”. In the editor that opened, replace all the text with the following:
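The trust policy lets MSK Connect assume the role via its service principal:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "kafkaconnect.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}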

Finally, click on “Update policy” to finalize the new role.

CloudWatch

In order to keep track of what is going on with our connector and Lambda functions, we need a place to keep our logs. We will use CloudWatch and so we need to create a log group.

Go to the CloudWatch dashboard, select “Logs” on the left and then “Log groups”. Click on “Create log group” on the right, give your log group a name and a retention setting, and then click on “Create” on the bottom and you are done.

That was simple!

MSK Security Group

Another thing to tackle before we can create a Kafka connector is configuring our security group to allow internal communications.

Go to your MSK cluster’s configurations, click on the “Properties” tab, and then click on the security group that is applied. If you followed this article, you should only have a single security group applied.

Once you get to the EC2 dashboard and to the security group settings, you need to click on the “Edit inbound rules” button to add the required rule.

Now, click on the “Add rule” button on the bottom left, select “All traffic” for the rule type and then find your security group in the custom “Source” search box. Make sure to select the same security group as the one you are currently editing. Note that the name of the security group appears in the navigation bar on the top left of the page.

Finally, click on “Save rules” and you should be set.

MSK and DocumentDB Security

Now I must point out a slight problem with the DocumentDB connection. We used the default DocumentDB configuration, which enables TLS. This means that in order to connect to DocumentDB, we needed to supply the client with the CA file we downloaded from the DocumentDB dashboard. However, since MSK is a managed service, we have no way of installing these certificates for the new plugin. Furthermore, while there is a way to specify the CA file within the MongoDB URI, the current MongoDB driver used within both the Confluent and Debezium connectors simply ignores this option and/or the CA file if we try to include it in the JAR file or in a ZIP file that holds both. If any readers are aware of a way to do this, please let me know so I can update this document. The only other option would be to implement our own connector that would contain the certificates and use them without relying on external files or certificate registries, but this is out of scope for this article.

Thus, we first need to turn off TLS in our DocumentDB. For this, go back to the DocumentDB dashboard, select “Parameter groups” from the left side menu and then click on the “Create” button on the right.

DocumentDB — Creating a New Parameter Group

Fill in a name for the new parameter group, add a description and click on “Create”.

Next, click on the new group in the list, select “tls” from the list that opens, and then click on the “Edit” button at the top right of the screen:

DocumentDB — Modify the “tls” Parameter

Set the selection to “disabled” and click on “Modify cluster parameter”.

Now, click on “Clusters” from the left side menu and then click on your cluster. Go to the “Configuration” tab and click on the “Modify” button within that tab:

DocumentDB — Modify Cluster Options

Under the “Cluster options” section, select the new parameter group that we just created and then scroll down, click on the “Continue” button and finally click on the “Modify cluster” button. This will modify the settings and take you back to the cluster list.

However, the new settings will not take effect until you reboot the cluster. If you click on the cluster again, you will see that the summary section indicates “pending-reboot”:

DocumentDB — Cluster Pending Reboot

Go back to the cluster list, select the cluster by clicking on the checkbox next to it, then click on the “Actions” button to open the menu and select “Reboot”. The cluster will now reboot and in a few minutes will be ready for work.

Kafka MongoDB Connector

Architecture — Kafka MongoDB Source Connector

There are two options we can use: the Confluent connector and the Debezium connector. Both are Java-based, but the Confluent connector is easier to use, so we will focus on that one and briefly mention the differences in the Debezium connector.

Do NOT go to https://www.confluent.io/hub/mongodb/kafka-connect-mongodb/ to download the connector from there. Although we ARE going to use this connector, the version you will find there is designed specifically for Confluent Cloud and so is missing some dependencies required by MSK that Confluent Cloud provides.

Instead, go to the Maven repo at https://search.maven.org/search?q=a:mongo-kafka-connect, click on the “Download” icon on the right and select “all”. This will download a JAR file that includes all the required dependencies. Upload this JAR file to your S3 bucket.

In MSK, you first need to create a plugin, and then a connector which is an instance of the plugin. In our case, MSK does not have a built-in MongoDB plugin and so we need to create a custom plugin. Fortunately for us, MSK can wrap the process of creating both plugin and connector into a single sequence.

Go to the MSK dashboard, select “Connectors” from the left side menu and then click on the “Create connector” button. You can see that MSK takes you to the “Custom plugin” screen to first create the new custom plugin. Select the “Create custom plugin” option, and then click on the “Browse S3” button to find your S3 bucket and select the JAR you just uploaded. Next, give your plugin a name and add a description, and then click on “Next” to start creating the connector.

MSK — Create a Custom Plugin

To create a connector, start by choosing a name and adding a description. Then choose your MSK cluster from the “Apache Kafka Cluster” list and the “None” authentication method, as our plugin does not support IAM authentication.

MSK — Create a New Connector

Now we need to configure our connector. You can find detailed configuration information in the MongoDB connectors documentation site and more information about MongoDB and change stream settings in the MongoDB documentation.

We want to monitor the “intake” collection in the “pipeline” database and publish new documents to the pipeline.intake.inserts topic. We also want to poll the change stream every second (this might be high so consider reducing the polling frequency according to your application) and get the results in JSON format. The following configurations specify these choices:
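The exact configuration used originally is not reproduced here; a sketch using the MongoDB source connector’s documented properties would look roughly like this (angle-bracket placeholders are yours to fill in, and the topic name resolves to pipeline.intake.inserts from the database, collection and suffix):

connector.class=com.mongodb.kafka.connect.MongoSourceConnector
name=<YOUR_CONNECTOR_NAME>
connection.uri=mongodb://<YOUR_USER>:<YOUR_PASSWORD>@<YOUR_DOCDB_CLUSTER_ENDPOINT>:27017/?replicaSet=rs0&readPreference=secondaryPreferred&retryWrites=false
database=pipeline
collection=intake
topic.suffix=inserts
poll.await.time.ms=1000
publish.full.document.only=true
output.format.key=json
output.format.value=json
tasks.max=1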

Note that for <YOUR_CONNECTOR_NAME> you have to use the exact same name you chose for your connector. Also make sure to use your actual password instead of <YOUR_PASSWORD> in the URI.

We leave the rest of the settings on the page at their defaults, but you can change them according to your needs. We need to give our connector access permissions using an AWS IAM role, so we choose the IAM execution role we created before and then click on the “Next” button.

The next section deals with security and the defaults here are good for us, so we touch nothing and simply click on the “Next” button again.

Now we need to choose where to send logging information. We previously created a CloudWatch log group and now is the time to use it. So, choose “Deliver to Amazon CloudWatch Logs” and then select the log group using the “Browse” button.

MSK — Sending Connector Logs to AWS CloudWatch

Click “Next” one more time to get to the “Review and create” screen. This screen shows you a summary of your choices and configurations and gives you the ability to edit things you missed. After making sure everything is as it should be, click the “Create connector” button to finish the process. Your new connector will now be created. This process can take a few minutes.

You can go to the CloudWatch console and select your log group to watch for progress. First you will see a new log stream titled “log_stream_created_by_aws_to_validate_log_delivery_subscriptions” appearing to indicate that the connector has permissions to log to CloudWatch. If you never see this, you need to go back and check the execution role settings to make sure you got them right.

After a couple more minutes, you should see a log stream titled something like “medium-connector-33190fb9-ae60-471b-8a8f-412186b023ce-3”. If you click on this log stream you will be able to see all the output from your new connector as it initializes. If you see any errors during initialization, which may be in the form of Java exceptions and stack traces, you probably missed some of the steps above so go back and make sure you configured everything correctly. Note that you can NOT modify an existing connector, so you would need to delete it once it reaches a “failed” state and create a new one instead.

If everything works and the connector was able to connect to DocumentDB and initialize the change stream, you will see messages like these appearing:

MSK Connector Logs in CloudWatch

Your plugin is now ready for work!

We can run the CLI Kafka consumer as before and then use our MongoDB client to insert some documents:

Inserting New Documents to DocumentDB using Robo 3T

The consumer will then show us that the connector picked up the documents and published them to our chosen topic:

Kafka CLI Consumer Showing the New Documents in the Topic

The connector is working as expected!

As for the Debezium connector, the documentation and download link can be found here. Once downloaded, extract the archive, and upload the JAR to S3 so it can be used in MSK as with the Confluent connector.

The configurations are a bit different for the Debezium connector:
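A sketch of the equivalent Debezium MongoDB connector settings (property names are those of the Debezium 1.x MongoDB connector; adjust to the version you downloaded):

connector.class=io.debezium.connector.mongodb.MongoDbConnector
mongodb.hosts=rs0/<YOUR_DOCDB_CLUSTER_ENDPOINT>:27017
mongodb.name=inserts
mongodb.user=<YOUR_USER>
mongodb.password=<YOUR_PASSWORD>
mongodb.ssl.enabled=false
database.include.list=pipeline
collection.include.list=pipeline.intake
tasks.max=1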

Unlike the Confluent connector, the Debezium one does not let you set a suffix for the topic name but rather uses the “mongodb.name” logical name as a prefix. Thus, we cannot use a topic like pipeline.intake.inserts. These configs will actually cause the connector to try to publish to a topic named inserts.pipeline.intake, so make sure to name your topic correctly if you wish to use this connector. Otherwise, testing should be done in the same way as before.

Lambda

This is where we start building our actual processing pipeline/graph. We need to create a new Lambda function and set an MSK based trigger for it.

Start by going to the Lambda dashboard and click on the “Create function” button on the top right. Make sure to choose “Author from scratch” and fill in a name for your function. We are going to use Go code so select “Go 1.x” from the “Runtime” list.

Lambda — Create a New Function

Next, expand the “Change default execution role” section, select “Create a new role from AWS policy templates” and give the role a name. This will create a new “service-role” to be used as the execution role for the Lambda function. Once created, we will need to tweak the permissions.

Expand the “Advanced settings” section and tick the “Enable VPC” box. We need our Lambda function to have access to the MSK cluster so the trigger can read from a topic and so we can publish to the next topic in line. Choose your VPC from the list, ALL the subnets where the MSK brokers are deployed and finally the security group as we defined previously:

Lambda — Function VPC Settings

Create the function by clicking on “Create function”.

Once the function is created, click on it and then go to the “Configuration” tab and select “Permissions” from the left menu:

Lambda — Execution Role

This shows you the execution role created for you, and you can browse the list of permissions it gives your Lambda function. Click on the role to open it in the IAM dashboard, remove the current policy and then create a new one that has the following permissions:
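The original policy is not reproduced here; a sketch covering logging, the VPC networking a Lambda in a VPC needs, and the Kafka/MSK permissions used by the trigger could look like this:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ],
      "Resource": "arn:aws:logs:<REGION>:<ACCOUNT_ID>:*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "ec2:CreateNetworkInterface",
        "ec2:DescribeNetworkInterfaces",
        "ec2:DeleteNetworkInterface",
        "ec2:DescribeVpcs",
        "ec2:DescribeSubnets",
        "ec2:DescribeSecurityGroups",
        "kafka:DescribeCluster",
        "kafka:GetBootstrapBrokers",
        "kafka-cluster:Connect",
        "kafka-cluster:DescribeCluster",
        "kafka-cluster:DescribeGroup",
        "kafka-cluster:AlterGroup",
        "kafka-cluster:DescribeTopic",
        "kafka-cluster:ReadData",
        "secretsmanager:GetSecretValue"
      ],
      "Resource": "*"
    }
  ]
}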

Note: replace <REGION> with your selected region and <ACCOUNT_ID> with your AWS account ID.

Go back to the Lambda function’s configurations, click on the “Monitor” tab and then click on “View logs in CloudWatch”. This should take you to the log group that was created for your Lambda function. If, for some reason, this group was not automatically created, you will get an error message like this one:

CloudWatch — Log Group Missing Error

You will need to manually create the log group. To do this, note the group’s required name in the error message, which in this case is /aws/lambda/medium-pipline-router. Now click on “Log groups” from the navigation bar under the error message or by expanding the left sidebar and clicking on “Log groups” there.

CloudWatch — Showing Log Groups

Now click on “Create log group”, fill in the required name and then click on the “Create” button:

CloudWatch — Creating a New Log Group

VPC Revisited

Another thing we need to take care of at this point is making sure we have network connectivity to some required AWS services for our trigger. Depending on your VPC of choice, you may not have connectivity to the STS, Lambda and/or Secrets Manager services. We can fix this by adding VPC endpoints for each of these to our VPC. If you get an error message about this when setting up the MSK trigger for the Lambda function, follow these instructions:

Go to the VPC dashboard and select “Endpoints” from the left side menu. Then click on “Create Endpoint” on the top right. Fill in a descriptive name for your endpoint and then search for and select the com.amazonaws.<REGION>.sts service from the “Services” list. Remember to replace <REGION> with your region.

Now select your VPC from the VPC list, select ALL the subnets where MSK brokers are deployed, and for each, select the subnet ID from the combobox. Select the “IPv4” IP address type and then select the security group we set up for our VPC:

Creating VPC Endpoints

Leave the rest as is and click on “Create Endpoint” at the bottom. Repeat the process for the lambda and secretsmanager services as well.

Lambda Router

Now that you have a Lambda function, we can write code for it and configure a trigger. Here is an example of Go code that will take care of everything our Lambdas will do:
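The full listing is not included in this text; below is a minimal sketch of such a function, assuming the segmentio/kafka-go client for publishing and documents shaped like the examples in this article (a “color” field used as the message type). The original code may differ in structure and library choice:

package main

import (
	"context"
	"crypto/tls"
	"encoding/base64"
	"encoding/json"
	"log"
	"os"
	"strings"

	"github.com/aws/aws-lambda-go/events"
	"github.com/aws/aws-lambda-go/lambda"
	"github.com/segmentio/kafka-go"
)

// message is the minimal shape of the documents we insert into DocumentDB.
type message struct {
	Color   string `json:"color"`
	Message string `json:"message"`
}

// publish sends a payload to the given topic over TLS (the port 9094 brokers).
func publish(ctx context.Context, brokers []string, topic string, payload []byte) error {
	w := &kafka.Writer{
		Addr:                   kafka.TCP(brokers...),
		Topic:                  topic,
		AllowAutoTopicCreation: true,
		Transport:              &kafka.Transport{TLS: &tls.Config{}},
	}
	defer w.Close()
	return w.WriteMessages(ctx, kafka.Message{Value: payload})
}

func handler(ctx context.Context, event events.KafkaEvent) error {
	role := os.Getenv("NODE_ROLE")
	brokers := strings.Split(os.Getenv("KAFKABROKERS"), ",")

	for _, records := range event.Records {
		for _, r := range records {
			// Kafka record values arrive base64-encoded in the Lambda event.
			raw, err := base64.StdEncoding.DecodeString(r.Value)
			if err != nil {
				log.Printf("failed to decode record from %s: %v", r.Topic, err)
				continue
			}
			var msg message
			if err := json.Unmarshal(raw, &msg); err != nil {
				log.Printf("failed to unmarshal record: %v", err)
				continue
			}
			log.Printf("role=%s topic=%s message=%+v", role, r.Topic, msg)

			switch role {
			case "ROUTER":
				// Route by type (the "color" field) to pipeline.type.<COLOR>.
				if os.Getenv("KAFKABROKERS") == "" {
					continue
				}
				topic := "pipeline.type." + msg.Color
				if err := publish(ctx, brokers, topic, raw); err != nil {
					log.Printf("failed to publish to %s: %v", topic, err)
				}
			case "WORKER":
				// Real processing would go here; optionally forward the result
				// to the next topic in the pipeline via KAFKATOPIC.
				if t := os.Getenv("KAFKATOPIC"); t != "" && os.Getenv("KAFKABROKERS") != "" {
					if err := publish(ctx, brokers, t, raw); err != nil {
						log.Printf("failed to publish to %s: %v", t, err)
					}
				}
			}
		}
	}
	return nil
}

func main() {
	lambda.Start(handler)
}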

As can be seen, this code encapsulates the Lambda Go SDK functionality required to run the proper handler function when the Lambda function is activated. The handlers specifically expect a Kafka data structure (which is deserialized for us by the AWS Lambda SDK), and we then take the JSON items contained within and convert them to our own Go data structure. Finally, if specified using the KAFKATOPIC and KAFKABROKERS environment variables, we also publish to a Kafka topic.

This code contains handlers for both a router and a worker, and chooses the correct one based on the NODE_ROLE environment variable.

Please note that while the Confluent MSK connector we used publishes the data to Kafka in JSON format, the Debezium connector uses the Rust Serde serialization instead. The resulting structure looks like this:

This means we will need to replace the JSON deserialization code with something like serde-go, which we will not cover in this article.

In order to build your code for Lambda, you need to specify some Go build parameters like this:

GOOS=linux GOARCH=amd64 CGO_ENABLED=0 go build -o medium-lambda-node

and then zip the resulting binary. If you wish to easily reuse your code, it is recommended to upload the zip file to S3 at this point.
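For example, with a hypothetical bucket name:

zip medium-lambda-node.zip medium-lambda-node
aws s3 cp medium-lambda-node.zip s3://<YOUR_BUCKET>/medium-lambda-node.zip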

Now go to your Lambda function’s settings and select the “Code” tab. Click on “Upload from”, select “.zip file”, click “Upload” in the dialog that pops up and select your zip file. If you uploaded to S3, select “Amazon S3 location” instead and then paste the S3 URL of your zip file.

Next, under the “Runtime settings” section, click the “Edit” button and enter your compiled binary’s filename in the “Handler” box:

Lambda — Uploading Code to the function

Click on “Save” and give your lambda a few minutes to load the new code.

Lambda MSK Trigger

Next, we will set up the MSK trigger. This trigger will connect to the MSK cluster, listen on our initial topic, and then trigger the Lambda on new incoming messages.

You can either click on “+ Add Trigger” from the “Function overview” section, or go to the “Configuration” tab, select the “Triggers” section on the left and then click on “Add trigger” on the right.

In the new dialog, select the “MSK” trigger type. This will open the trigger’s settings. Select the MSK cluster, set the desired batch size, your “Batch Window” (the max polling time in seconds for the topic) and then fill in the topic name.

By default, the trigger will handle new incoming messages. If you want it to process ALL the messages in the topic, select “Trim horizon” from the “Starting position” box. This is the desired behavior for us, as one of our requirements was that messages never be lost, so that we can dynamically plug components into our pipeline and start handling previously unknown message types.

In this article we are not taking care of Authentication. However, there is a fine point to note here. We created the MSK cluster with the default Authentication settings, which means that both “Unauthenticated” and “IAM role-based authentication” are enabled. The trigger will default to using “IAM role-based authentication”, for which we previously added the “kafka-cluster” actions to our Lambda execution role’s policy. If these permissions are missing, you will get a “SASL authentication failed” error message from the trigger. We did NOT enable SASL authentication in our MSK cluster, so why are we getting this error? Well, this is because behind the scenes, AWS implements the “IAM role-based authentication” mechanism using SASL.

As we are not tackling authentication here, just go ahead and click the “Add” button to create the new trigger:

Lambda — Creating an MSK Trigger

Deploying the trigger can take a few minutes, and once deployed we can configure and test our router.

Under the “Configuration” section, select “Environment variables” on the left and then click the “Edit” button and add the NODE_ROLE variable with ROUTER as the value.
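The document itself is not reproduced in this text; any document with a “color” field will do, for example:

{ "color": "blue", "message": "hello from the pipeline" }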

If we now add this new document to our DocumentDB, it will be published to the pipeline.intake.inserts topic by the MSK connector as we saw before. The trigger will then pick it up and trigger the Lambda. Our sample code will output the information read from the message to CloudWatch, so if you go to the log group we created above, you will see a new log entry appear containing something like this:

Lambda — Successful Run: CloudWatch Output

In order to finalize the router’s work, we want to demonstrate that it can actually route messages to different topics based on their type. The Lambda code will use the color field as a type for the message and will route the message to a Kafka topic named pipeline.type.<COLOR>. So, create two new topics: pipeline.type.blue and pipeline.type.green.
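These can be created with the same kafka-topics command we used earlier:

bin/kafka-topics.sh --create --topic pipeline.type.blue --partitions 1 --replication-factor 2 --bootstrap-server <BROKER_1>:9092
bin/kafka-topics.sh --create --topic pipeline.type.green --partitions 1 --replication-factor 2 --bootstrap-server <BROKER_1>:9092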

Next, we need to tell our Lambda where MSK is so it can actually publish messages to the various topics. Go back to the environment variables and add a variable called KAFKABROKERS, setting its value to a comma-separated list of the brokers in your MSK cluster. For example:

b-1.medium-msk.i78wus.c3.kafka.eu-west-2.amazonaws.com:9094,b-2.medium-msk.i78wus.c3.kafka.eu-west-2.amazonaws.com:9094,b-3.medium-msk.i78wus.c3.kafka.eu-west-2.amazonaws.com:9094

You may note that we are using port 9094 here as our nodes are connecting to MSK securely using TLS. This will tell our example code to publish the data to the proper queue in MSK in addition to printing to the log.

Now, if we add a new document with the same message as before, we can use the console consumer as we did earlier to listen on the pipeline.type.blue topic, and we will see the message appear under that topic within a few seconds of creation:

Kafka CLI Consumer

Our router picked up the new document, determined that the message type is blue and published the data to the pipeline.type.blue queue as desired.

Lambda Processor

The final piece of the puzzle is the processor node, which is just another Lambda function that uses the same code as before. Go ahead and replicate the process we used for the router Lambda to create a new Lambda with an MSK trigger listening on the pipeline.type.blue topic. This time, set the NODE_ROLE environment variable to WORKER so the worker handler function is used.

Finally, we can test the full E2E mechanism by adding a new document as before, and we will see a new log entry in CloudWatch for the new Lambda function with the content of the document.

If we create another document with green as the color, we will see the new message appearing under the pipeline.type.green topic, but no Lambda function will be activated as we do not have a trigger listening on this topic. The messages will accumulate in this topic until we create a new Lambda function capable of processing “green” messages and add a trigger for it that listens on the pipeline.type.green topic.

Our sample pipeline is now complete!

Conclusion

In this article we demonstrated how to configure various AWS components (VPC, EC2, DocumentDB, S3, IAM, CloudWatch, MSK and Lambda) and tie them together to create an E2E processing pipeline that triggers automatically when new documents are inserted into the DB. For this article, we chose the smallest available instance sizes for the various services, so our EC2 instance and our DocumentDB are running for free while our Lambda functions are only billed when running (i.e., when triggered by a new document published into the proper topic until they finish processing it).

Our pipeline can be dynamically configured to accommodate any processing structure by creating Kafka topics as edges in our graph and Lambda functions as nodes with triggers for the specific Kafka topics. Restructuring can be done while the system is running without losing any data.

We also covered various CLI, shell and GUI tools for working with MongoDB and Kafka as well as remotely connecting to our resources via SSH port forwarding.

Hopefully, this document provides a good starting point for anyone interested in building fully managed cloud-based processing pipelines using AWS services.

Thanks

I wish to thank Julia Valenti for her amazing assistance in reviewing and editing this article ❤️
