Seamless Integration: Building an Event-Driven Architecture with AWS

Filipe Pacheco
5 min read · Mar 8, 2024


Hello Medium Readers,

I’m thrilled to share a significant milestone in my career journey. After five months of dedicated study, I’ve successfully completed my upskilling in DevOps. While there’s always more to learn, I believe it’s crucial at this juncture to strike a balance between broadening my knowledge and deepening my expertise in specific areas.

In my next post, I’ll be diving back into the realm of Machine Learning, armed with new insights and experiences gained from my adventures in AWS and ML, particularly with SageMaker.

But before we embark on that journey, let’s delve into my latest deployment in AWS. Today, I present a compelling use case involving one of AWS’s most versatile and frequently used services: AWS Lambda.

Task of the day

In today’s task, I continue to follow the same methodology as in the last post, making enhancements to the architecture of the HumanGov project. This time, the focus is on going deeper into serverless architectures and event-driven microservices using AWS technologies. The primary objective was to extend HumanGov with a Python-based serverless microservice that is triggered by data modifications in a DynamoDB table and executed by an AWS Lambda function.

The new addition is AWS Lambda, another serverless AWS service that offers a great deal of flexibility and can enhance nearly any task. For this particular task, I used boto3, the Python package that allows interaction with AWS from Python code. You might be wondering why Python: when creating a Lambda function, you must select the runtime language for your code, and Python is the one I opted for.

Services used in this implementation.

The proposed solution architecture remains largely unchanged from before, with the addition of a Lambda function triggered by an event, which is what makes the design event-driven. The concept is straightforward: when an item in DynamoDB is deleted, the Lambda function is triggered to delete the corresponding file from the S3 Bucket. All other services remain the same as those discussed in my previous posts.

Solution Architecture proposed.

Implementation

DynamoDB

My initial step in creating an event-driven microservice was to configure the DynamoDB table to include a streaming configuration. To accomplish this, simply navigate to your desired table, select the “Exports and Streams” section, and click on “Turn On” in the “DynamoDB Stream Details,” as illustrated in the image below.

Activating DynamoDB stream details.

When this option is enabled, the DynamoDB table emits a stream record for every change made to its items. In the image below, you can see the available record view types. I selected the last one, “New and old images,” which lets me see the state of an item both before and after a change.

DynamoDB stream details configuration.
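
For readers who prefer to script this step instead of clicking through the console, the same stream configuration can be applied with boto3. This is only a minimal sketch; the table name below is a placeholder, not the actual HumanGov table name.

```python
import boto3

dynamodb = boto3.client("dynamodb")

# Enable the stream with "new and old images" so each record carries the
# item state both before and after the change (placeholder table name).
dynamodb.update_table(
    TableName="humangov-my-state",
    StreamSpecification={
        "StreamEnabled": True,
        "StreamViewType": "NEW_AND_OLD_IMAGES",
    },
)
```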

Lambda Function

The next step is to create the Lambda Function that will be triggered by changes to the item inside the previous DynamoDB table. I navigate through the console to the Lambda service, and on the creation screen, as shown below, I select the appropriate options. Pay attention to the “Runtime” section, as this is where I select the programming language in which I want to write my code.

Lambda function creation view.
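
The console is the path I followed, but for reference, the equivalent creation call in boto3 looks roughly like the sketch below. The function name, account ID, role ARN, and runtime version are illustrative assumptions, not the values from my deployment.

```python
import boto3

lambda_client = boto3.client("lambda")

# Placeholder values: function name, account ID, role, and runtime
# are illustrative only.
with open("function.zip", "rb") as f:
    lambda_client.create_function(
        FunctionName="humangov-delete-s3-object",
        Runtime="python3.12",  # the "Runtime" choice from the creation screen
        Role="arn:aws:iam::123456789012:role/humangov-lambda-role",
        Handler="lambda_function.lambda_handler",
        Code={"ZipFile": f.read()},
    )
```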

The next step is to configure the Lambda’s code. The code itself is not complicated: the handler inspects each incoming stream record, and whenever the event type is ‘REMOVE’, it uses boto3 to delete the corresponding file from the S3 Bucket (a sketch of such a handler follows the image below). Don’t forget to “Deploy” the changes once you have completed them.

Lambda function code configuration.
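
My actual code isn’t reproduced here, but a minimal handler along these lines illustrates the idea: it checks each stream record for a ‘REMOVE’ event and, if found, deletes the matching object from S3. The bucket name and the attribute that stores the object key are assumptions for the sake of the example.

```python
import boto3

s3 = boto3.client("s3")
BUCKET_NAME = "humangov-state-bucket"  # placeholder bucket name


def lambda_handler(event, context):
    for record in event.get("Records", []):
        # DynamoDB Streams tags deletions with eventName == "REMOVE"
        if record.get("eventName") != "REMOVE":
            continue
        # OldImage is available because the stream view type includes old images;
        # "s3_object_key" is an assumed attribute name for the stored file key.
        old_image = record["dynamodb"].get("OldImage", {})
        object_key = old_image.get("s3_object_key", {}).get("S")
        if object_key:
            s3.delete_object(Bucket=BUCKET_NAME, Key=object_key)
            print(f"Deleted {object_key} from {BUCKET_NAME}")
    return {"statusCode": 200}
```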

After configuring the code, I proceed to the Permissions section, as shown in the image below. Here, I click on the Role name, which opens a new page in IAM. From there, I can add the necessary policies to allow the Lambda function to connect to DynamoDB, S3, and CloudWatch.

Lambda function Permission screen configuration.
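 
Attaching policies through the IAM console works fine; as an alternative, the same role can be granted access with boto3 and AWS managed policies, as in the sketch below. Broad managed policies are convenient for a demo but should be scoped down in production; the role name is a placeholder.

```python
import boto3

iam = boto3.client("iam")
ROLE_NAME = "humangov-lambda-role"  # placeholder: the role shown on the Permissions tab

# Managed policies covering DynamoDB, S3, and CloudWatch Logs access.
# Fine for a demo, but a least-privilege inline policy is better in production.
for policy_arn in [
    "arn:aws:iam::aws:policy/AmazonDynamoDBFullAccess",
    "arn:aws:iam::aws:policy/AmazonS3FullAccess",
    "arn:aws:iam::aws:policy/CloudWatchLogsFullAccess",
]:
    iam.attach_role_policy(RoleName=ROLE_NAME, PolicyArn=policy_arn)
```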

Returning to the Lambda page, I navigated to the Triggers section and clicked to add a new trigger, selecting the option shown in the image below. Since I had previously configured the DynamoDB table to enable stream details, this table appeared as an option to be selected here. If the options are unavailable, revisit that step and double-check the configuration.

Lambda function add trigger configuration view.
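
Behind the console’s “Add trigger” button is an event source mapping between the DynamoDB stream and the function. A scripted equivalent, with a placeholder stream ARN and function name, would look like this sketch.

```python
import boto3

lambda_client = boto3.client("lambda")

# The stream ARN is shown on the table's "Exports and streams" tab
# (placeholder value below), and the function name is illustrative.
lambda_client.create_event_source_mapping(
    EventSourceArn=(
        "arn:aws:dynamodb:us-east-1:123456789012:"
        "table/humangov-my-state/stream/2024-03-08T00:00:00.000"
    ),
    FunctionName="humangov-delete-s3-object",
    StartingPosition="LATEST",
    BatchSize=1,
)
```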

Once your configuration is complete, you will be redirected to the home page of your Lambda. Here, you should see a view like the one below, confirming that all the configurations made so far are correct. The next and final configuration step is to create a CloudWatch Log group to host the logs.

Lambda configuration view after the trigger setup.

CloudWatch

In the console, I navigated to CloudWatch, opened the Log groups section, and selected the option to create a new one. It’s crucial to follow the standard naming convention, /aws/lambda/&lt;function-name&gt;, as shown below. After that, you simply need to click “Create,” and the infrastructure is ready to be tested.

CloudWatch Log group creation.
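
The same log group can also be created programmatically; a minimal sketch with a placeholder function name is shown below. The /aws/lambda/&lt;function-name&gt; pattern is what Lambda expects when writing its logs.

```python
import boto3

logs = boto3.client("logs")

# Lambda writes to a log group named /aws/lambda/<function-name>,
# so the group name must match the function (placeholder name here).
logs.create_log_group(logGroupName="/aws/lambda/humangov-delete-s3-object")
```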

After interacting with the HumanGov application, I revisited the Log group that I had just created and checked for the logs. As you can see, the logs from the Lambda function are registered here, and at the end, you can confirm that the file was successfully deleted.

Proof of deletion in CloudWatch Log group.

Conclusion

In this task, I demonstrated the implementation of an event-driven microservice using AWS services such as DynamoDB, Lambda, and CloudWatch. By configuring DynamoDB to stream changes, creating a Lambda function triggered by these events, and setting up CloudWatch Logs to monitor activity, I showcased the seamless integration of these components.

This approach enables efficient automation, exemplified by the successful deletion of files in an S3 Bucket triggered by data modifications in DynamoDB.

I hope you enjoyed reading about the journey of a Data Scientist taking some adventures into the DevOps realm. If you want to know more in the future, feel free to follow me for more episodes :)


Filipe Pacheco

Senior Data Scientist | AI, ML & LLM Developer | MLOps | Databricks & AWS Practitioner