How to use Micronaut in AWS Batch

Vladimír Oraný
Stories by Agorapulse
6 min read · Oct 13, 2021

AWS Batch is designed to run independent tasks called jobs. Jobs are usually scheduled, but they can also react to the vast number of events supported by AWS EventBridge, including SNS, SQS, and Kinesis. AWS Batch fills the gap between always-on AWS Elastic Beanstalk servers and AWS Lambda, which can also react to various events but can only run for up to 15 minutes. AWS Batch gives you more control over the computing environment as well as the priority of the jobs. AWS Batch jobs are defined by Docker containers that are run when a particular event occurs.

Creating the AWS Batch-ready Micronaut Application

As AWS Batch executes jobs by running Docker containers, Micronaut's Command Line Application type is best suited for this purpose. You can use the following steps to generate the command-line application using Micronaut Launch.

Since every invocation will be a “cold start”, we should also add GraalVM capabilities to the application.

Micronaut Launch offers a great option to push the application to GitHub with a single click.

You can also download the application package or use the Micronaut CLI to customize the generated application:

mn create-cli-app --build=gradle --jdk=11 --lang=java --test=junit --features=graalvm,github-workflow-graal-docker-registry com.agorapulse.micronaut-aws-batch-demo

The generated application contains a guide on how to set up publishing to Amazon Elastic Container Registry. You will need your access key ID and your secret access key, alongside the name of the repository to push to. Set these values as repository secrets.
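For example, assuming the generated workflow reads credentials from secrets named AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY (check the generated README and workflow file for the exact names), you can set them with the GitHub CLI, which prompts for each value:

gh secret set AWS_ACCESS_KEY_ID
gh secret set AWS_SECRET_ACCESS_KEY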

You also need to create a repository for the project in Amazon ECR:
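If you prefer the command line over the console, the repository can be created with the AWS CLI; naming it after the application is just an assumption here:

aws ecr create-repository --repository-name micronaut-aws-batch-demo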

The next time you push to the repository, the GraalVM-based application image should be pushed to Amazon ECR. You can try it with two simple changes to the generated application.

First, update the command class to show that you can inject any bean and parse command-line arguments:
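The original snippet is shown as an image, so here is a minimal sketch of such a command class. The -t/--timestamp option and the injected Environment bean are assumptions inferred from the log line shown at the end of this guide:

package com.agorapulse;

import io.micronaut.configuration.picocli.PicocliRunner;
import io.micronaut.context.env.Environment;
import jakarta.inject.Inject;
import picocli.CommandLine.Command;
import picocli.CommandLine.Option;

@Command(name = "micronaut-aws-batch-demo", mixinStandardHelpOptions = true)
public class MicronautAwsBatchDemoCommand implements Runnable {

    // any bean can be injected, e.g. the Micronaut environment
    @Inject
    Environment environment;

    // parsed from the command-line arguments passed to the container
    @Option(names = {"-t", "--timestamp"}, description = "Timestamp of the triggering event")
    String timestamp;

    public void run() {
        System.out.println("Event sent at " + timestamp
            + " to the environments " + environment.getActiveNames());
    }

    public static void main(String[] args) {
        PicocliRunner.run(MicronautAwsBatchDemoCommand.class, args);
    }
}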

You will also need to update the test:
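Again as a sketch, assuming the command class above, the generated test can be adjusted along these lines:

package com.agorapulse;

import io.micronaut.configuration.picocli.PicocliRunner;
import io.micronaut.context.ApplicationContext;
import io.micronaut.context.env.Environment;
import org.junit.jupiter.api.Test;

import java.io.ByteArrayOutputStream;
import java.io.PrintStream;

import static org.junit.jupiter.api.Assertions.assertTrue;

public class MicronautAwsBatchDemoCommandTest {

    @Test
    public void testWithCommandLineOption() {
        // capture the standard output to verify what the command prints
        ByteArrayOutputStream baos = new ByteArrayOutputStream();
        System.setOut(new PrintStream(baos));

        try (ApplicationContext ctx = ApplicationContext.run(Environment.CLI, Environment.TEST)) {
            String[] args = new String[] { "-t", "2021-10-12T13:15:34Z" };
            PicocliRunner.run(MicronautAwsBatchDemoCommand.class, ctx, args);

            assertTrue(baos.toString().contains("Event sent at 2021-10-12T13:15:34Z"));
        }
    }
}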

Then update the build.gradle file to fix an issue with libstdc++ by using a distroless base image. Add the following configuration to the end of the file:

dockerfileNative {
    baseImage('gcr.io/distroless/cc-debian10')
}

When you push the changes to GitHub, the container should be built and pushed to Amazon ECR by the GitHub workflow.

You can verify it in the Amazon ECR console:
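Alternatively, a quick check with the AWS CLI (using the assumed repository name from above):

aws ecr describe-images --repository-name micronaut-aws-batch-demo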

Once the container is present in ECR, we can focus on setting up AWS Batch.

Creating the Job Definition in AWS Batch

Setting up AWS Batch requires some work that will not be covered in this guide. Please follow the official guide:

Once you have your Compute environments and Job queue ready, you can proceed to create a new Job definition:

Let's give the Job definition the same name as the application:

Then clear the command arguments and point the job to the image we published in the previous section:

The rest of the settings can remain unchanged:
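If you prefer scripting over the console, roughly the same Job definition could be registered with the AWS CLI; the account ID, region, and resource sizes below are placeholders:

aws batch register-job-definition \
  --job-definition-name micronaut-aws-batch-demo \
  --type container \
  --container-properties '{
    "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/micronaut-aws-batch-demo:latest",
    "vcpus": 1,
    "memory": 256,
    "command": []
  }'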

Now we have our Job definition ready.

Triggering the Job

The job is usually triggered by AWS EventBridge events (the preferred replacement for CloudWatch Events). Let's create a periodic event that will trigger the job by creating a new rule:

Start with the name of the rule and optional description:

Then define the event pattern or schedule. Select Schedule for the periodic trigger. You can also display a sample event.
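For example, a fixed-rate schedule expression (the interval here is only an illustration) looks like this:

rate(15 minutes)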

Next, select Batch job queue as the target, with the ARN of your queue and the ARN of the Job definition created in the previous section. Keep the job name the same as the name of the rule.

For demo purposes, let's also use the payload of the event for the job execution. We can do that by expanding the Configure input section and selecting Input transformer.

The first text area declares a parameter by extracting data from the payload, and the second one uses an object with ContainerOverrides to redefine the job's command parameters. See the AWS documentation on input transformers for more details.
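As an illustration, assuming the -t option from the command class sketch above, and given that EventBridge scheduled events carry the trigger time in the top-level time field, the input path could be:

{"time": "$.time"}

and the input template:

{"ContainerOverrides": {"Command": ["-t", <time>]}}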

The rest of the settings can remain unchanged.

Once the event rule is created and triggered, we can see the execution in the AWS Batch Jobs view.

You can check the details of the execution:

There is a link to the logs in CloudWatch, where you can check whether everything works as expected:

The timestamp has been passed from the scheduled event and the injection happened as well, so the log contains a line similar to this one:

Event sent at 2021-10-12T13:15:34Z to the environments [ec2, cloud, cli]

This is the end of this guide. You have an up-and-running Micronaut application in AWS Batch. You can check the sources of the sample application on GitHub:
