Simple AWS SQS queue producer and consumer

Heshani Samarasekara · Published in Geek Culture · Jun 14, 2021

In this article I will explain how to create an AWS SQS FIFO queue, produce messages to the queue, and create a Lambda function to consume from the queue using the Java SDK.

Usage of AWS SQS FIFO queues

A FIFO queue ensures strictly ordered message delivery and exactly-once message processing (it also provides at-least-once processing). SQS FIFO supports sending, receiving, and deleting messages, and retrying whenever there are send or receive failures. A FIFO queue is created by setting the attribute “FifoQueue=true”. FIFO ordering is applied to messages that share the same MessageGroupId: if all messages have the same MessageGroupId, they are all ordered by arrival, and if the messages carry several different MessageGroupIds, the messages within each group are ordered separately. A FIFO queue also avoids introducing duplicate messages, and we can define the deduplication strategy: if the attribute “ContentBasedDeduplication=true” is set, messages are treated as unique based on their content; alternatively, each message can be given a MessageDeduplicationId, and messages with the same MessageDeduplicationId are considered duplicates. This is a basic high-level description of the AWS SQS FIFO queue; you can find more details on the official page.
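As a minimal sketch of how these two attributes appear in practice, assuming the AWS SDK for Java v1 (the queue URL, group ID, and deduplication ID are placeholders):

```java
import com.amazonaws.services.sqs.AmazonSQS;
import com.amazonaws.services.sqs.AmazonSQSClientBuilder;
import com.amazonaws.services.sqs.model.SendMessageRequest;

public class FifoSemanticsExample {
    public static void main(String[] args) {
        AmazonSQS sqs = AmazonSQSClientBuilder.defaultClient();
        String queueUrl = "https://sqs.us-east-1.amazonaws.com/123456789012/example.fifo";

        // Messages with the same MessageGroupId are delivered strictly in order.
        // An explicit MessageDeduplicationId marks retries of the same message as
        // duplicates (not needed when ContentBasedDeduplication=true on the queue).
        sqs.sendMessage(new SendMessageRequest()
                .withQueueUrl(queueUrl)
                .withMessageBody("first message")
                .withMessageGroupId("group-1")
                .withMessageDeduplicationId("msg-1"));
    }
}
```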

Let’s move to the code.

Producer

Inside the producer, two main things are done: creating the queue if it does not exist, and sending a message to the queue. When creating a queue, we can set several attributes. One important point is that when creating a FIFO queue, the queue name must have the suffix “.fifo”. Another point: if there is a failure in consuming messages from the queue, we can move those failed messages into a separate queue, the dead-letter queue; this redirection of messages is configured in the redrive policy. Below are the attributes we can set when creating the queue (a creation sketch follows the list).

Visibility timeout (VisibilityTimeout): Set the visibility timeout to the maximum time that it takes your application to process and delete a message from the queue. (In a standard queue, if a consumer picks a message and fails to delete it, the message becomes available to other consumers again once the visibility timeout expires. For FIFO: if a message must be received only once, your consumer must delete it within the duration of the visibility timeout.)
Delivery delay (DelaySeconds): Any messages that you send to the queue remain invisible to consumers for the duration of the delay period.
Receive message wait time (ReceiveMessageWaitTimeSeconds): The maximum amount of time that polling waits for messages to become available.
Message retention period (MessageRetentionPeriod): The amount of time that Amazon SQS retains a message that does not get deleted.
Maximum message size (MaximumMessageSize): The maximum message size for this queue.
Maximum number of receives per message (maxReceiveCount): If the ReceiveCount for a message exceeds the maximum receive count for the queue, Amazon SQS moves the message to the associated DLQ. (This one is set in the redrive policy rather than as a direct queue attribute.)
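As a rough sketch, assuming the AWS SDK for Java v1, these attributes are passed as a string map when creating the queue (all values here are illustrative):

```java
import com.amazonaws.services.sqs.AmazonSQS;
import com.amazonaws.services.sqs.AmazonSQSClientBuilder;
import com.amazonaws.services.sqs.model.CreateQueueRequest;

public class CreateFifoQueueExample {
    public static void main(String[] args) {
        AmazonSQS sqs = AmazonSQSClientBuilder.defaultClient();

        CreateQueueRequest request = new CreateQueueRequest("example.fifo")
                .addAttributesEntry("FifoQueue", "true")                   // required for FIFO
                .addAttributesEntry("ContentBasedDeduplication", "true")
                .addAttributesEntry("VisibilityTimeout", "30")             // seconds
                .addAttributesEntry("DelaySeconds", "0")
                .addAttributesEntry("ReceiveMessageWaitTimeSeconds", "20") // long polling
                .addAttributesEntry("MessageRetentionPeriod", "345600")    // 4 days, in seconds
                .addAttributesEntry("MaximumMessageSize", "262144");       // 256 KB

        String queueUrl = sqs.createQueue(request).getQueueUrl();
        System.out.println("Queue URL: " + queueUrl);
    }
}
```

Note that maxReceiveCount does not appear here; it goes into the RedrivePolicy, shown in the dead-letter queue section at the end of this article.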

Maven dependencies for the producer:
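A minimal sketch of the likely pom.xml entry, assuming the AWS SDK for Java v1 (the version number is illustrative):

```xml
<dependency>
    <groupId>com.amazonaws</groupId>
    <artifactId>aws-java-sdk-sqs</artifactId>
    <version>1.12.1</version>
</dependency>
```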
SQS producer class:
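A minimal sketch of such a producer, assuming the AWS SDK for Java v1; the class, queue, and group names are hypothetical:

```java
import com.amazonaws.services.sqs.AmazonSQS;
import com.amazonaws.services.sqs.AmazonSQSClientBuilder;
import com.amazonaws.services.sqs.model.CreateQueueRequest;
import com.amazonaws.services.sqs.model.SendMessageRequest;

public class SqsProducer {
    private static final String QUEUE_NAME = "example.fifo"; // ".fifo" suffix is mandatory

    private final AmazonSQS sqs = AmazonSQSClientBuilder.defaultClient();

    /**
     * Creates the queue if it does not exist and returns its URL.
     * createQueue is idempotent when called again with the same attributes.
     */
    public String createQueueIfNotExists() {
        CreateQueueRequest request = new CreateQueueRequest(QUEUE_NAME)
                .addAttributesEntry("FifoQueue", "true")
                .addAttributesEntry("ContentBasedDeduplication", "true");
        return sqs.createQueue(request).getQueueUrl();
    }

    public void send(String queueUrl, String body) {
        sqs.sendMessage(new SendMessageRequest()
                .withQueueUrl(queueUrl)
                .withMessageBody(body)
                .withMessageGroupId("group-1")); // same group => ordered delivery
    }

    public static void main(String[] args) {
        SqsProducer producer = new SqsProducer();
        String queueUrl = producer.createQueueIfNotExists();
        producer.send(queueUrl, "hello FIFO");
    }
}
```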

Now that we have created a producer, the next step is to write a consumer to consume messages from the queue.

Consumer

The consumer is written as a Lambda function.

Maven dependencies for the consumer:
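A minimal sketch of the likely pom.xml entries, assuming the standard Lambda Java libraries (version numbers are illustrative). The maven-shade-plugin is also declared, since the packaging command below invokes shade:shade:

```xml
<dependencies>
    <dependency>
        <groupId>com.amazonaws</groupId>
        <artifactId>aws-lambda-java-core</artifactId>
        <version>1.2.1</version>
    </dependency>
    <dependency>
        <groupId>com.amazonaws</groupId>
        <artifactId>aws-lambda-java-events</artifactId>
        <version>3.9.0</version>
    </dependency>
</dependencies>

<build>
    <plugins>
        <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-shade-plugin</artifactId>
            <version>3.2.4</version>
        </plugin>
    </plugins>
</build>
```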
Java class for the consumer:
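A minimal sketch of such a handler, assuming the aws-lambda-java-events library; the package and class names are hypothetical:

```java
package com.example.sqs;

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import com.amazonaws.services.lambda.runtime.events.SQSEvent;

public class SqsConsumerHandler implements RequestHandler<SQSEvent, Void> {

    @Override
    public Void handleRequest(SQSEvent event, Context context) {
        // Each invocation may carry a batch of messages.
        for (SQSEvent.SQSMessage message : event.getRecords()) {
            // Logged output ends up in CloudWatch Logs.
            context.getLogger().log("Received message: " + message.getBody());
        }
        // Returning normally tells Lambda the batch was processed;
        // throwing an exception makes the messages visible again for retry.
        return null;
    }
}
```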

After writing the code, we can package it as a jar using the command below. This command packages the jar together with all the relevant libraries (it requires the maven-shade-plugin shown in the pom sketch above).

```
mvn clean package shade:shade
```

1. Create the Lambda function

You need to have an existing role with SQS capabilities, or you need to create a new one.

Required managed policies: AWSLambdaSQSQueueExecutionRole, AWSLambdaBasicExecutionRole

2. Upload the code to the Lambda function

Choose the jar upload type and upload the packaged jar.

3. Test the code

Change the default request handler: Code -> Runtime settings -> Edit.

Paste the fully qualified name of the handler class and save.
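For the handler sketch above (hypothetical package and class names), the value would be:

```
com.example.sqs.SqsConsumerHandler::handleRequest
```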

Select the AWS SQS test event template and then choose Test.

Check the test output for any errors, and look at the log output. The handler logs each received message to CloudWatch, so we can go to CloudWatch and check the logs.

4. Add the SQS queue as a trigger

Now, when you send a message to the source queue, it is consumed by the Lambda function, which creates a log entry in CloudWatch. With these steps we have created a queue producer and consumer using AWS SQS and Lambda.
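The trigger can be added from the Lambda console; as a sketch, the same mapping can also be created with the AWS CLI (function name and queue ARN are placeholders):

```
aws lambda create-event-source-mapping \
    --function-name my-sqs-consumer \
    --batch-size 10 \
    --event-source-arn arn:aws:sqs:us-east-1:123456789012:example.fifo
```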

One important thing to consider here is: what happens if the consumer fails to process a message, and how is that handled? This scenario is covered by the concept of a dead-letter queue. A dead-letter queue is another queue, just like the one we have already created, and it is used to store failing messages after a defined number of retries. For a FIFO source queue, the dead-letter queue must also be a FIFO queue. Creating the DLQ works the same way as creating the source queue. Once we have created the DLQ, we need to attach it to our source queue; that is, we need to define a redrive policy that sends failing messages to the DLQ. The code snippet below shows how we can define the policy that attaches a DLQ to a given source queue.
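A minimal sketch of such a policy attachment, assuming the AWS SDK for Java v1 (the queue names and the retry count are illustrative):

```java
import com.amazonaws.services.sqs.AmazonSQS;
import com.amazonaws.services.sqs.AmazonSQSClientBuilder;
import com.amazonaws.services.sqs.model.GetQueueAttributesRequest;
import com.amazonaws.services.sqs.model.SetQueueAttributesRequest;

public class AttachDlqExample {
    private static final int MAX_RECEIVE_COUNT = 5; // retries before a message moves to the DLQ

    public static void main(String[] args) {
        AmazonSQS sqs = AmazonSQSClientBuilder.defaultClient();

        String sourceQueueUrl = sqs.getQueueUrl("example.fifo").getQueueUrl();
        String dlqUrl = sqs.getQueueUrl("example-dlq.fifo").getQueueUrl();

        // The redrive policy needs the DLQ's ARN, not its URL.
        String dlqArn = sqs.getQueueAttributes(
                new GetQueueAttributesRequest(dlqUrl).withAttributeNames("QueueArn"))
                .getAttributes().get("QueueArn");

        String redrivePolicy = String.format(
                "{\"maxReceiveCount\":\"%d\",\"deadLetterTargetArn\":\"%s\"}",
                MAX_RECEIVE_COUNT, dlqArn);

        // Attach the redrive policy to the source queue.
        sqs.setQueueAttributes(new SetQueueAttributesRequest()
                .withQueueUrl(sourceQueueUrl)
                .addAttributesEntry("RedrivePolicy", redrivePolicy));
    }
}
```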

In this code, the policy has an attribute MAX_RECEIVE_COUNT. When a message fails to be consumed, it is put back into the source queue, and each time the message is received from the source queue its receive count increases. When the receive count exceeds the maximum receive count, the message is moved to the DLQ and kept there until its retention period expires. Later we can inspect the DLQ and analyze the reasons those messages failed.

This article explained how to create an SQS FIFO queue, how to write a Lambda function that consumes messages from the queue, and how to handle failing messages. Hope you learned something new 😊
