Developing and Testing AWS Applications Locally, Made Easy

Sean Scofield
Ancestry Product & Technology
6 min read · Mar 4, 2021

Overview

The physicist Richard Feynman believed that if he couldn’t adequately explain a profound idea to a group of freshman students, then he hadn’t really mastered the idea himself. In a similar vein, I believe that if software developers can’t easily show an intern how to run their applications locally, then they haven’t really mastered those applications themselves. Ideally, we should strive to be able to run any of our applications locally with a single command. Why? Because doing so makes everyone’s lives easier (and frees up our time to focus on getting more tendies 🐓🍗).

Let’s take the example of an AWS-based application whose sole purpose is to convert images from PNG format to JPEG format. More specifically, this app will continuously poll an Amazon SQS queue whose messages contain the path to a PNG file stored in Amazon S3. Upon receiving a message from the queue, the app will proceed to download that PNG file, convert it, and then upload the resulting JPEG file to a different location in S3.

At Ancestry, we actually have a lot of applications like this, although instead of converting PNG files to JPEG format, they tend to process genomic files through algorithms to generate things like genetic ethnicity estimates. But in the spirit of keeping things simple here, let’s use this image conversion example to think about how we might develop an app that can run both in AWS and locally on one’s computer.

Writing the Code

To start off, let’s write some code to continuously poll an SQS queue and “process” each message it receives (while the diagram above assumes that uploading a file to S3 will automatically trigger an SQS message, let’s assume for simplicity that the user will also manually send an SQS message for us, at least for now). While any relevant programming language should work just fine, we’ll roll with Python for this example:
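The original code isn’t reproduced here, but a minimal sketch of such a poller might look like the following. The message format (a JSON body with `bucket` and `key` fields), the helper names, and the `converted/` output prefix are my own assumptions; the environment variable names match the ones used later in this article. The third-party imports (`boto3`, Pillow) are deferred into the functions that need them so the pure helper stays importable on its own.

```python
import json
import os
import posixpath


def jpeg_key_for(png_key):
    """Map an S3 key like 'incoming/cat.png' to 'converted/cat.jpg'."""
    name, _ = posixpath.splitext(posixpath.basename(png_key))
    return "converted/{}.jpg".format(name)


def convert_png_to_jpeg(s3, bucket, png_key):
    """Download a PNG from S3, convert it to JPEG, and upload the result."""
    from io import BytesIO
    from PIL import Image  # Pillow

    png_buffer = BytesIO()
    s3.download_fileobj(bucket, png_key, png_buffer)
    png_buffer.seek(0)

    jpeg_buffer = BytesIO()
    Image.open(png_buffer).convert("RGB").save(jpeg_buffer, format="JPEG")
    jpeg_buffer.seek(0)

    s3.upload_fileobj(jpeg_buffer, bucket, jpeg_key_for(png_key))


def main():
    import boto3

    # AWS_ENDPOINT_URL is unset when running in real AWS; locally it
    # points at Localstack (http://localhost:4566).
    endpoint_url = os.environ.get("AWS_ENDPOINT_URL")
    queue_url = os.environ["SQS_QUEUE_URL"]
    sqs = boto3.client("sqs", endpoint_url=endpoint_url)
    s3 = boto3.client("s3", endpoint_url=endpoint_url)

    while True:  # poll forever
        response = sqs.receive_message(
            QueueUrl=queue_url, MaxNumberOfMessages=1, WaitTimeSeconds=20
        )
        for message in response.get("Messages", []):
            body = json.loads(message["Body"])
            convert_png_to_jpeg(s3, body["bucket"], body["key"])
            sqs.delete_message(
                QueueUrl=queue_url, ReceiptHandle=message["ReceiptHandle"]
            )
```

Calling `main()` starts the polling loop; note that because we read the endpoint URL from an environment variable, the exact same code runs against real AWS and against a local mock.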

If this Python code were executed from within AWS, and the SQS queue and S3 bucket were resources our AWS credentials had access to, it would work just fine. However, by mocking AWS services such as SQS and S3, we can make it easy to run this code directly on one’s computer as well. One publicly available tool that my team at Ancestry uses to help us do this is called Localstack, which can be run as a Docker container to essentially spin up a miniature version of AWS on one’s computer (accessible via the URL http://localhost:4566). How does one go about leveraging Localstack?

Running Localstack (a “miniature AWS”)

Assuming you have Docker installed, one way to get Localstack up and running is with Docker Compose. While the Localstack source repo actually comes with its own docker-compose file, we’re going to create a custom one that will automatically create our SQS queue and S3 bucket for us whenever it boots up. More specifically, we’ll write a shell script that makes the API calls to create them, and then write a docker-compose file telling Localstack to run that script on start-up:

resources/localstack-setup.sh:

# Create SQS queue
awslocal sqs create-queue --queue-name image_converter
# Create S3 bucket
awslocal s3 mb s3://images

docker-compose.yml:

version: "3.7"

services:
  localstack:
    image: "localstack/localstack:0.12.1"
    ports:
      - "4566:4566"
    environment:
      SERVICES: "s3,sqs"
    volumes:
      - ./resources/localstack-setup.sh:/docker-entrypoint-initaws.d/localstack-setup.sh

(Note the awslocal command used in localstack-setup.sh: it’s essentially a wrapper around AWS’s awscli package that knows to talk to http://localhost:4566 rather than the real AWS endpoints. awslocal comes pre-installed on the Localstack Docker image 🙂.)

Now that we’ve got these files in place, all we need to do to spin up Localstack, our SQS queue, and our S3 bucket is run “docker-compose up” from the command line (note that it will take a few seconds for everything to fully spin up).

At this point, we have everything we need to run and actively test our image converter application locally. We can spin up Localstack whenever we need to, and can watch our Python code execute successfully as long as we set the environment variables the code will be looking for (AWS_ENDPOINT_URL=http://localhost:4566 and SQS_QUEUE_URL=http://localhost:4566/000000000000/image_converter). To truly test it, of course, we would also need to upload a PNG file to our Localstack S3 bucket and send a message to our Localstack SQS queue specifying the location of that PNG file (refer to this project’s README for instructions on how to do that).
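For illustration, that upload-and-send step might look something like the commands below, run against the queue and bucket created by localstack-setup.sh above. The file name, key layout, and message body format here are my own invention — use whatever your application code actually expects (and see the project’s README for the real instructions).

```shell
# Upload a sample PNG to the Localstack S3 bucket
aws --endpoint-url=http://localhost:4566 s3 cp cat.png s3://images/incoming/cat.png

# Send an SQS message pointing at it
aws --endpoint-url=http://localhost:4566 sqs send-message \
  --queue-url http://localhost:4566/000000000000/image_converter \
  --message-body '{"bucket": "images", "key": "incoming/cat.png"}'
```

(These use the regular aws CLI with `--endpoint-url`; from inside the Localstack container you could use awslocal instead and drop that flag.)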

While running our application directly in Python is great, if we ever wish to run it as a Docker container rather than as a simple Python program, we can (and probably should) make it even easier to get everything up and running locally.

One Command* to Rule Them All

If we pretend that this image conversion application is intended to be hosted on a service like AWS Fargate or AWS EC2, it’s reasonable to think we’d want to package it into a Docker image. I won’t bore you with the details of writing a Dockerfile and integrating it into our Docker Compose setup, but will instead refer you back to my GitHub repo if you’re interested in that. Ultimately, with the help of that Dockerfile, we can easily package our application into a new Docker image with a simple docker build command, as well as spin up a container from that image with a docker deploy command. In the spirit of making things as easy as possible for our future selves, let’s put these commands in a simple Makefile:
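The original Makefile isn’t reproduced here, but a sketch along these lines would fit the description. The stack and image names are my own guesses, and the use of `docker stack deploy` is inferred from the swarm note further down — check the repo for the real thing:

```makefile
STACK_NAME = image_converter

build:
	docker build -t image-converter:latest .

deploy:
	docker stack deploy -c docker-compose.yml $(STACK_NAME)

clean:
	docker stack rm $(STACK_NAME)
```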

With this Makefile in place, we can now simply type “make build” on the command line to package our application into a Docker image, “make deploy” to spin up Localstack (with our SQS queue) and our application, and “make clean” when we’re ready to tear everything down!

(*Note that users running “make deploy” for the first time might first need to run “docker swarm init”.)

Conclusion

With this example, we’ve seen one easy way to run an AWS-based application directly on one’s computer. While there are certainly additional improvements we could make here, we were ultimately able to leverage Localstack to spin up a “miniature AWS”, and we’ve set up our image conversion application files so that any other developer working on this project can build the application with a simple “make build” command and run it with a “make deploy” command. That’s easy enough that my grandparents could do it!

It’s worth noting that each individual application will present its own challenges when it comes to making local development possible (it will be harder for some than for others). But, to the degree that it can be, every application should be made as easy as possible to run and develop locally. That includes having a good README (yes, you, linking your resume to GitHub projects that lack good READMEs 😜).

Thanks for reading!! For anyone interested in playing with the image conversion application from this walk-through, feel free to visit its GitHub repo, which has a few tiny improvements to the files presented here.

