Advanced Serverless Techniques V: DynamoDB Streams vs. SQS/SNS to Lambda

The SaaS Enthusiast
Feb 27, 2024 · 9 min read


Whether to trigger AWS Lambda functions directly from DynamoDB Streams, or to route events through an intermediary service such as Amazon Simple Queue Service (SQS) or Amazon Simple Notification Service (SNS) before invoking Lambda, depends on several factors: the complexity of your application, your reliability requirements, how long your Lambda functions take to process each record, and how you handle failures. Let’s explore the advantages of each approach and the scenarios where one might be preferred over the other.

Direct DynamoDB Streams to AWS Lambda

Advantages:

  1. Simplicity: Direct integration provides a straightforward approach to process stream records immediately with Lambda without needing to manage additional services like SQS or SNS.
  2. Real-Time Processing: It allows for near real-time processing of changes in your DynamoDB table, which is ideal for applications requiring immediate action upon data changes.
  3. Cost-Effectiveness: Fewer components in your architecture can lead to lower costs, as you don’t incur costs for additional message transmission between services.

Use Cases:

  1. Real-Time Analytics: For applications that need to analyze data changes in real-time, such as calculating metrics or aggregating data shortly after it’s written to a DynamoDB table.
  2. Event-Driven Applications: Where an update in a DynamoDB table triggers immediate downstream actions, such as sending notifications, updating related data in other databases, or interacting with external APIs.
  3. Data Synchronization: Automatically synchronizing changes in a DynamoDB table with another data store, ensuring data consistency across platforms without significant delays.
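
For a sense of what the direct integration looks like in the Serverless Framework, here is a minimal serverless.yml sketch. The table name MyTable, the partition key pk, and the handler processStream are hypothetical placeholders; the table has streams enabled and the Lambda function subscribes to that stream:

functions:
  processTableChanges:
    handler: handler.processStream
    events:
      - stream:
          type: dynamodb
          # Reference the stream of the (hypothetical) table defined below
          arn:
            Fn::GetAtt:
              - MyTable
              - StreamArn
          batchSize: 100
          startingPosition: LATEST

resources:
  Resources:
    MyTable:
      Type: "AWS::DynamoDB::Table"
      Properties:
        TableName: "MyTable"
        BillingMode: PAY_PER_REQUEST
        AttributeDefinitions:
          - AttributeName: pk
            AttributeType: S
        KeySchema:
          - AttributeName: pk
            KeyType: HASH
        StreamSpecification:
          StreamViewType: NEW_AND_OLD_IMAGES

Each invocation receives a batch of stream records containing the new and/or old item images, which the handler can inspect to decide what downstream action to take.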

DynamoDB Streams to SQS/SNS to AWS Lambda

Advantages:

  1. Reliability and Error Handling: SQS offers dead-letter queues (DLQs) to handle message processing failures, allowing for better error management and the ability to reprocess failed messages.
  2. Message Filtering and Routing: SNS allows for message filtering and routing to multiple subscribers, enabling more complex processing workflows and the distribution of specific types of messages to different processing systems.
  3. Scalability and Buffering: SQS can act as a buffer for incoming stream records, which can help manage and scale processing during spikes, ensuring that messages are not lost when Lambda functions are throttled.

Use Cases:

  1. Complex Workflows with Conditional Processing: Applications that require messages to be processed differently based on their content can benefit from SNS’s message filtering capabilities, routing messages to different Lambda functions as needed.
  2. Highly Reliable Data Processing Pipelines: For critical applications where losing data is not an option, using SQS with DLQs ensures that messages are re-queued for processing in case of failures, enhancing the reliability of the data processing pipeline.
  3. Decoupled Microservices Architecture: In a microservices architecture, using SNS or SQS as a mediator allows for decoupling services, enabling independent scaling, updating, and failure isolation of components. This setup is beneficial for applications where Lambda functions process data for multiple downstream services or when there’s a need to integrate with external systems that might require different processing times or have varying reliability requirements.
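
To illustrate the message-filtering advantage from the first use case, the sketch below (with hypothetical topic, handler, and attribute names) subscribes two Lambda functions to the same SNS topic, each with a filter policy so it only receives messages whose recordType message attribute matches:

functions:
  handleOrderEvents:
    handler: handler.orders
    events:
      - sns:
          topicName: item-changes
          filterPolicy:
            recordType:
              - order
  handleCustomerEvents:
    handler: handler.customers
    events:
      - sns:
          topicName: item-changes
          filterPolicy:
            recordType:
              - customer

Publishers set a recordType message attribute on each message, and SNS delivers it only to the matching subscriber, so neither function has to discard irrelevant messages itself.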

Choosing between these two approaches depends largely on your specific application needs, including how critical the data is, the complexity of the processing workflow, and your tolerance for message processing delays. Direct integration is typically simpler and more cost-effective for straightforward, real-time processing needs, while using SQS/SNS offers greater flexibility, reliability, and control over message processing, especially for complex, high-volume, or mission-critical applications.

SQS

Using the Serverless Framework, you can define Amazon Simple Queue Service (SQS) resources and integrate them with AWS Lambda in your serverless.yml configuration file, allowing you to build serverless applications that process messages from SQS queues. SQS offers two types of queues, Standard and FIFO (First-In-First-Out), each serving different use cases depending on your application's requirements.

Types of SQS Queues

  1. Standard Queues: These are the default type of SQS queue. They offer maximum throughput, best-effort ordering, and at-least-once delivery, and they support a nearly unlimited number of transactions per second per API action. Because delivery is at-least-once and ordering is best-effort, a message may occasionally arrive more than once or in a different order from the one in which it was sent.
  2. FIFO Queues: FIFO queues are designed for messaging between applications when the order of operations and events is critical, or when duplicates can’t be tolerated. FIFO queues ensure that messages are processed exactly once and in the exact order in which they are sent (ordering is preserved within each message group).

Defining SQS in serverless.yml

To define an SQS queue in your serverless.yml and set up a Lambda function triggered by this queue, you would follow these steps:

  1. Define the SQS Queue Resource: First, you need to define the SQS queue as a resource in your serverless.yml. This involves specifying the type of queue (Standard or FIFO) and any related properties.
  2. Set Up Lambda Trigger: After defining the queue, you configure your Lambda function to be triggered by events from this queue.

Here’s an example serverless.yml snippet that demonstrates how to set up both a standard and a FIFO queue, and configure a Lambda function to be triggered by the standard queue:

service: sqs-service-example

provider:
  name: aws
  runtime: nodejs18.x
  region: us-east-1

functions:
  processQueueMessages:
    handler: handler.process
    events:
      - sqs:
          arn:
            Fn::GetAtt:
              - MyStandardQueue
              - Arn

resources:
  Resources:
    MyStandardQueue:
      Type: "AWS::SQS::Queue"
      Properties:
        QueueName: "MyStandardQueue"
    MyFIFOQueue:
      Type: "AWS::SQS::Queue"
      Properties:
        QueueName: "MyFIFOQueue.fifo"
        FifoQueue: true
        ContentBasedDeduplication: true

In this example:

  • MyStandardQueue is a standard SQS queue.
  • MyFIFOQueue is a FIFO queue, indicated by the .fifo suffix on its name and FifoQueue: true; ContentBasedDeduplication: true enables automatic message deduplication based on a hash of the message body.
  • The processQueueMessages Lambda function is configured to trigger from messages on MyStandardQueue.

Important Points

  • For FIFO queues, ensure the queue name ends with .fifo and set FifoQueue to true.
  • You can customize the queue properties under Properties to meet your requirements, such as setting message retention periods or visibility timeouts.
  • The Lambda function’s permission to access the queue is implicitly handled by the Serverless Framework when you define the event source as shown above.
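
As an example of the second point, here is a sketch of a customized queue definition; the specific values are illustrative, not recommendations:

resources:
  Resources:
    MyStandardQueue:
      Type: "AWS::SQS::Queue"
      Properties:
        QueueName: "MyStandardQueue"
        VisibilityTimeout: 120            # seconds; keep this longer than the Lambda function's timeout
        MessageRetentionPeriod: 345600    # keep unconsumed messages for 4 days (in seconds)
        ReceiveMessageWaitTimeSeconds: 10 # enable long polling to reduce empty receives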

By adjusting the serverless.yml configuration, you can effectively manage your application's components and interactions, leveraging SQS queues for decoupled, scalable, and reliable message processing.

Expanded serverless.yml with Lambda Function

Below, I expand the serverless.yml example to include a Lambda function that is triggered by an SQS queue. Following that, I'll provide a simple Node.js Lambda function example using the AWS SDK for JavaScript v3, which processes messages from the queue.

This configuration sets up a Lambda function named processQueueMessages, which is triggered by an SQS queue named MyStandardQueue.

service: sqs-service-example

provider:
  name: aws
  runtime: nodejs18.x
  region: us-east-1
  iamRoleStatements:
    - Effect: Allow
      Action:
        - sqs:ReceiveMessage
        - sqs:DeleteMessage
        - sqs:GetQueueAttributes
      Resource:
        - Fn::GetAtt:
            - MyStandardQueue
            - Arn

functions:
  processQueueMessages:
    handler: handler.process
    events:
      - sqs:
          arn:
            Fn::GetAtt:
              - MyStandardQueue
              - Arn

resources:
  Resources:
    MyStandardQueue:
      Type: "AWS::SQS::Queue"
      Properties:
        QueueName: "MyStandardQueue"

Node.js Lambda Function Example

This Node.js example demonstrates how to process messages from an SQS queue using the AWS SDK for JavaScript v3. It assumes you have the AWS SDK v3 installed in your project. If not, you can add it by running npm install @aws-sdk/client-sqs.

handler.js:

const { SQSClient, DeleteMessageCommand } = require("@aws-sdk/client-sqs");

// Initialize the SQS client
const sqsClient = new SQSClient({ region: "us-east-1" });

exports.process = async (event) => {
  console.log("Event: ", JSON.stringify(event, null, 2));

  for (const record of event.Records) {
    const { body } = record;
    console.log("Processing message: ", body);

    // Process the message
    // Add your message processing logic here

    // Optionally, delete the message from the queue if successfully processed
    await deleteMessage(record.receiptHandle);
  }
};

async function deleteMessage(receiptHandle) {
  const deleteParams = {
    QueueUrl: "YourQueueURL", // Replace with your queue URL
    ReceiptHandle: receiptHandle,
  };

  try {
    const data = await sqsClient.send(new DeleteMessageCommand(deleteParams));
    console.log("Message deleted", data);
  } catch (err) {
    console.error("Error", err.stack);
  }
}

Points to Note:

  • The Lambda function process is triggered by messages from the SQS queue specified in serverless.yml.
  • The function logs the event and processes each record individually.
  • When a Lambda function is invoked through an SQS event source mapping, Lambda deletes the messages in the batch automatically once the function returns successfully, so the explicit deletion shown here is optional; it is mainly useful if you want to remove individual messages before the rest of the batch finishes processing. If you keep it, replace "YourQueueURL" with the actual URL of your SQS queue.
  • The AWS SDK for JavaScript v3 uses a modular approach, so you import only the SQS client and the commands you need, which can reduce the deployment package size.

Remember to adjust the queue URL in the Lambda function code and ensure your Lambda function has the necessary permissions to interact with SQS queues, as defined in the serverless.yml IAM role statements.

Dead Letter Queue

Reading and processing messages from a Dead Letter Queue (DLQ) in AWS SQS involves setting up a mechanism to poll messages from the DLQ and then handling them accordingly. This could mean logging the message for audit purposes, alerting administrators, or attempting to reprocess the message after fixing the issues that caused it to end up in the DLQ in the first place.

Setting up a Dead Letter Queue

First, ensure that your primary SQS queue (the source queue) is configured with a DLQ. This setup involves specifying a DLQ and defining the conditions under which messages are moved to the DLQ, such as the maximum number of receives.
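
In CloudFormation terms (and therefore under resources in serverless.yml), this is done with a RedrivePolicy on the source queue. A minimal sketch, reusing the MyStandardQueue name from the earlier examples together with a hypothetical MyDeadLetterQueue:

resources:
  Resources:
    MyDeadLetterQueue:
      Type: "AWS::SQS::Queue"
      Properties:
        QueueName: "MyDeadLetterQueue"
    MyStandardQueue:
      Type: "AWS::SQS::Queue"
      Properties:
        QueueName: "MyStandardQueue"
        RedrivePolicy:
          deadLetterTargetArn:
            Fn::GetAtt:
              - MyDeadLetterQueue
              - Arn
          maxReceiveCount: 5   # move a message to the DLQ after 5 failed receives

With this in place, any message that has been received (but not deleted) five times is moved automatically to MyDeadLetterQueue.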

Reading from the DLQ

To process messages from a DLQ, you can use a similar approach as processing messages from any SQS queue. You can create a Lambda function triggered by the DLQ or poll the DLQ using an application or script. Below is an example of how you might set up a Lambda function to be triggered by messages in a DLQ using the Serverless Framework, followed by example code for processing those messages.

serverless.yml for DLQ Trigger

functions:
  processDLQMessages:
    handler: dlqHandler.process
    events:
      - sqs:
          arn: arn:aws:sqs:us-east-1:123456789012:MyDeadLetterQueue

Example DLQ Handler (dlqHandler.js)

const { SQSClient, DeleteMessageCommand } = require("@aws-sdk/client-sqs");

const sqsClient = new SQSClient({ region: "us-east-1" });

exports.process = async (event) => {
  console.log("DLQ Event: ", JSON.stringify(event, null, 2));

  for (const record of event.Records) {
    const { body } = record;
    console.log("Processing DLQ message: ", body);

    // Here, you might log the message, send an alert, or attempt reprocessing
    // For example, log the message to CloudWatch
    console.error("Failed message: ", body);

    // Optionally, delete the message from the DLQ if you have successfully handled it
    await deleteMessage(record.receiptHandle, "YourDLQURL");
  }
};

async function deleteMessage(receiptHandle, queueUrl) {
  const deleteParams = {
    QueueUrl: queueUrl, // Replace with your DLQ URL
    ReceiptHandle: receiptHandle,
  };

  try {
    const data = await sqsClient.send(new DeleteMessageCommand(deleteParams));
    console.log("DLQ Message deleted", data);
  } catch (err) {
    console.error("Error deleting DLQ message", err.stack);
  }
}

Handling DLQ Messages

When handling messages from a DLQ, consider the following strategies:

  1. Analysis and Logging: Analyze why the message was not processed successfully. Log details or send notifications to administrators for further investigation.
  2. Correction and Reprocessing: If the issue that caused the message to fail is identifiable and fixable, you may want to correct the underlying problem and then reprocess the message. This might involve sending the message back to the original queue or processing it directly.
  3. Alerting: Set up alerts to notify administrators when messages are sent to the DLQ, using Amazon CloudWatch Alarms or another monitoring service.
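
For the alerting strategy, a CloudWatch alarm on the DLQ’s ApproximateNumberOfMessagesVisible metric can be declared alongside the queues in serverless.yml. A sketch, assuming a hypothetical ops-alerts SNS topic that notifies administrators:

resources:
  Resources:
    DLQMessagesAlarm:
      Type: "AWS::CloudWatch::Alarm"
      Properties:
        AlarmDescription: "Messages have landed in the dead letter queue"
        Namespace: "AWS/SQS"
        MetricName: "ApproximateNumberOfMessagesVisible"
        Dimensions:
          - Name: QueueName
            Value: "MyDeadLetterQueue"
        Statistic: Maximum
        Period: 300
        EvaluationPeriods: 1
        Threshold: 0
        ComparisonOperator: GreaterThanThreshold
        TreatMissingData: notBreaching
        AlarmActions:
          # Hypothetical SNS topic ARN; replace with your own alerting topic
          - arn:aws:sns:us-east-1:123456789012:ops-alerts

The alarm fires whenever at least one message is visible in the DLQ, giving administrators a prompt to investigate and, if appropriate, redrive or reprocess the failed messages.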

Processing messages from a DLQ is crucial for maintaining the health and reliability of your application, ensuring that no important messages are lost or ignored.

Empower Your Tech Journey:

Explore a wealth of knowledge designed to elevate your tech projects and understanding. From safeguarding your applications to mastering serverless architecture, discover articles that resonate with your ambition.

New Projects or Consultancy

For new project collaborations or bespoke consultancy services, reach out directly and let’s transform your ideas into reality. Ready to take your project to the next level?

  • Protecting Routes
  • Advanced Serverless Techniques
  • Mastering Serverless Series
