LocalStack — The best way to set it up

Mor Dvash
Published in Israeli Tech Radar
6 min read · Jun 11, 2023

Over the past few months, I had the valuable experience of setting up a LocalStack environment from the ground up. During this process, my teammates and I recognized the importance of establishing a streamlined approach. Our shared vision was to simplify the deployment of the local environment for both current team members within our organization and future software engineers who would join our team. We were well aware of the challenges associated with onboarding to a new company or transitioning to a different team internally.

Configuring the environment under such circumstances often proved to be a significant time-consuming task, taking up to a week in the best-case scenario. Thus, our goal was to minimize this complexity and reduce the onboarding time, ensuring a smoother and more efficient experience for everyone involved.

One area of complexity that we aimed to address was the cognitive load imposed on software engineers.

Cognitive load refers to the amount of information and knowledge required to complete a task.

By reducing cognitive load, we intended to let software engineers spend less time and effort learning the information needed to get started. This would not only save time but also lower the risk of bugs or errors introduced by overlooking critical details. By simplifying the setup and onboarding process, we aimed to create an environment where software engineers could focus on their core tasks and responsibilities, ultimately enhancing productivity and reducing the likelihood of mistakes.

In our specific scenario, we encountered several components that required attention during the LocalStack deployment:

  • Creation of DynamoDB tables
  • Setting up SQS queues
  • Configuring EventBridge
  • Creating an S3 bucket

It was crucial to address these aspects because when initially pulling the LocalStack image, the environment remained empty, lacking these essential services. Therefore, as part of our deployment process, we focused on establishing these components to ensure that the LocalStack environment was fully functional and capable of emulating the necessary AWS services. By proactively addressing these elements, we ensured a comprehensive and robust local development and testing environment for our team.

Photo by Alvaro Reyes on Unsplash

We came up with two approaches:

  1. A Makefile that contains all the necessary commands.
  2. A Dockerfile that executes a shell script for each service.

Makefile

One of the initial approaches we adopted was creating a Makefile that included commands for creating DynamoDB tables, setting up SQS queues, configuring EventBridge, and more. While this approach offered the convenience of having all the commands in one place, it had a couple of drawbacks. Developers needed to know the Makefile existed and remember to execute its targets, adding to their cognitive load. Additionally, as our services grew in number, the Makefile became larger and more complex, making it harder to navigate and maintain. Despite these challenges, the benefit of this approach was that developers didn’t need to remember the exact AWS CLI commands to run, since they were all encapsulated within the Makefile.
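For illustration, here is a minimal sketch of what such a Makefile might have looked like. The queue and bucket names are hypothetical, and the table definition mirrors the DynamoDB script shown later in this post:

```makefile
LOCALSTACK_URL ?= http://localhost:4566

.PHONY: all dynamodb sqs s3

# One target per service; `make all` provisions everything.
all: dynamodb sqs s3

dynamodb:
	aws --endpoint-url=$(LOCALSTACK_URL) dynamodb create-table \
		--table-name typed_various_data_table \
		--billing-mode PAY_PER_REQUEST \
		--key-schema AttributeName=session_id,KeyType=HASH AttributeName=_id,KeyType=RANGE \
		--attribute-definitions AttributeName=session_id,AttributeType=S AttributeName=_id,AttributeType=S

sqs:
	aws --endpoint-url=$(LOCALSTACK_URL) sqs create-queue --queue-name example-queue

s3:
	aws --endpoint-url=$(LOCALSTACK_URL) s3 mb s3://example-bucket
```

Every new service means another target and another entry in `all`, which is exactly how a file like this grows harder to navigate over time.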

Dockerfile

Ultimately, we opted for the second approach, which involved creating a Dockerfile that executes a shell script for each service, encapsulating the logic required to set up those services. While this approach may initially seem more complex when adding new services and understanding the overall design, once grasped, it becomes as straightforward as opening a bottle of water. The significant advantage of this approach is that with a single command, we can deploy all our LocalStack services locally without requiring any additional actions from the developer. From the developer’s perspective, it functions as a black box that works like magic, providing a seamless and effortless experience.

Here is a small example of how we structure our code:

- localstack
  - dynamodb
    - dynamodb_create_file_metadata_table.sh
  - eventbridge
    - eventbridge_create_bus.sh
  - s3
    - s3_create_bucket.sh
  - sqs
    - sqs_create_queue.sh
  - utils
    - list-ls-created-resources.sh
  - localstack_setup.Dockerfile
  - localstack_setup.sh

The folder structure exemplifies a well-organized approach for leveraging the LocalStack tool in local development and testing of AWS services. The top-level “localstack” directory serves as the main directory, housing various subfolders dedicated to different AWS services like:

  • DynamoDB
  • EventBridge
  • S3
  • SQS

Each of these subfolders contains service-specific scripts responsible for creating and configuring the corresponding AWS service. For instance, the “dynamodb” folder includes the “dynamodb_create_file_metadata_table.sh” script, while the “eventbridge” folder contains the “eventbridge_create_bus.sh” script. This structure promotes modularity and simplifies the management of resources specific to each service. Additionally, the inclusion of a “utils” folder provides a central location for utility scripts, allowing convenient access to commonly used functionalities across services. This well-defined folder structure enhances organization, clarity, and ease of maintenance within the LocalStack environment.
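As an example of such a utility, here is a hypothetical sketch of what “list-ls-created-resources.sh” could look like (the real script’s contents aren’t shown in this post): it asks each emulated service to list what the setup scripts created, and guards against a missing endpoint or CLI so it fails gracefully:

```shell
#!/usr/bin/env bash

# Hypothetical sketch of utils/list-ls-created-resources.sh:
# ask each emulated service to list the resources the setup
# scripts created, so developers can verify the environment.
list_ls_created_resources() {
  local cmd
  for cmd in "s3 ls" "sqs list-queues" "dynamodb list-tables" "events list-event-buses"; do
    echo "== aws $cmd =="
    # Word splitting on $cmd is intentional here.
    aws --endpoint-url="$LOCALSTACK_URL" $cmd
  done
}

# Only run when an endpoint is configured and the CLI is installed.
if [ -n "${LOCALSTACK_URL:-}" ] && command -v aws >/dev/null; then
  list_ls_created_resources
else
  echo "LOCALSTACK_URL not set (or aws missing); nothing to list."
fi
```

A script like this is handy right after bringing the environment up, as a quick sanity check that every service was actually provisioned.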

Let’s take one of the shell scripts, for instance the “dynamodb_create_file_metadata_table.sh” file.

#!/usr/bin/env bash

table_name='typed_various_data_table'

echo "Creating dynamodb table $table_name, this will take a few seconds ..."

existing_table=$(aws --endpoint-url="$LOCALSTACK_URL" dynamodb list-tables | grep "$table_name" | sed 's/ //g')

if [ -z "$existing_table" ]; then
  aws --endpoint-url="$LOCALSTACK_URL" dynamodb create-table \
    --table-name "$table_name" \
    --billing-mode PAY_PER_REQUEST \
    --key-schema AttributeName=session_id,KeyType=HASH AttributeName=_id,KeyType=RANGE \
    --attribute-definitions AttributeName=session_id,AttributeType=S \
                            AttributeName=_id,AttributeType=S
else
  echo "Table $table_name already exists."
fi

echo "Done creating dynamodb table $table_name"

As evident in the “dynamodb_create_file_metadata_table.sh” script, the approach is straightforward and relies on the AWS CLI to create a table in DynamoDB. The other shell scripts follow the same pattern, containing the AWS CLI commands for creating the required services. This ensures simplicity and consistency across the scripts. By leveraging the AWS CLI, we can easily interact with the AWS services and perform the necessary setup tasks. Each shell script encapsulates the specific commands and configurations required to create the desired service, making it convenient to deploy and manage the required infrastructure for local development and testing.
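The check-then-create idiom above can be factored into a small reusable helper. Here is a minimal sketch — the helper name and the stub commands are hypothetical, used only to make the control flow visible without a running LocalStack:

```shell
#!/usr/bin/env bash

# Hypothetical helper: run a create command only when a check
# command shows the resource is absent -- the same
# list-then-grep pattern the DynamoDB script uses.
create_if_missing() {
  local name="$1" check_cmd="$2" create_cmd="$3"
  if [ -z "$($check_cmd | grep "$name")" ]; then
    $create_cmd
  else
    echo "Resource $name already exists."
  fi
}

# Stub commands standing in for the AWS CLI calls.
check_stub()  { echo "${CREATED:-}"; }
create_stub() { echo "created demo_table"; }

CREATED=""            # first run: nothing exists yet
create_if_missing demo_table check_stub create_stub
CREATED="demo_table"  # second run: the table is already there
create_if_missing demo_table check_stub create_stub
```

Each service script could then shrink to a single `create_if_missing` call, keeping the idempotency logic in one place (the “utils” folder would be a natural home for it).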

In addition to the folder structure, the presence of “localstack_setup.Dockerfile” and “localstack_setup.sh” files further reinforces a streamlined approach to setting up and configuring the LocalStack environment. The “localstack_setup.sh” file is a shell script that contains a sequence of commands responsible for the setup and configuration of LocalStack. Each line executes a script located in a specific folder, where the implementation details of that particular service reside. This design allows for a modular and organized approach, where the specifics of each service’s setup and configuration are encapsulated within their respective folders. By utilizing the “localstack_setup.sh” script, the LocalStack environment can be effortlessly initialized, ensuring a smooth and efficient experience for developers.

#!/usr/bin/env bash

echo "LOCALSTACK_URL=$LOCALSTACK_URL"

s3/s3_create_bucket.sh
sqs/sqs_create_queue.sh
eventbridge/eventbridge_create_bus.sh
dynamodb/dynamodb_create_file_metadata_table.sh

The “localstack_setup.Dockerfile” file outlines the essential instructions for building a Docker image tailored for LocalStack setup. It includes the installation of necessary dependencies and tools, such as Python, curl, and the AWS CLI. By incorporating these components, the Docker image becomes equipped to handle LocalStack requirements effectively. Additionally, the Dockerfile copies the local files into the image, ensuring that all necessary resources are available within the container. Lastly, it marks the “localstack_setup.sh” script as executable; the script itself runs as the container’s command, defined in the docker-compose file below.

FROM python:3.9-buster

RUN apt-get update -y && \
    apt-get install -y curl bash libffi-dev make zip jq
RUN pip install -U pip wheel
RUN pip install awscli awscli-local boto3

ADD / /

RUN ["chmod", "+x", "./localstack_setup.sh"]

Lastly, we leveraged docker-compose to manage the dependencies and network between LocalStack and the LocalStack setup. By utilizing docker-compose, we were able to define and orchestrate the services required for our LocalStack environment in a declarative manner. This approach facilitated the seamless integration of LocalStack with the necessary dependencies, ensuring smooth communication and interaction between the components. docker-compose allowed us to configure the network settings, dependencies, and service definitions in a centralized configuration file, simplifying the management and deployment of our LocalStack environment.

version: "3"
services:

  localstack:
    image: localstack/localstack:latest
    container_name: localstack
    ports:
      - "4566-4582:4566-4582"
      - "8055:8080"
    env_file:
      - .env
    environment:
      - SERVICES=s3,sqs,eventbridge,dynamodb
      - DATA_DIR=/tmp/localstack/data
      - DEBUG=1
      - DOCKER_HOST=unix:///var/run/docker.sock
    user: "${UID}:${GID}"
    volumes:
      - "/var/run/docker.sock:/var/run/docker.sock"
      - ./samples:/tmp/localstack/data
    tmpfs:
      - /tmp/localstack:exec,mode=600
    networks:
      - localstack_network

  localstack_setup:
    build:
      context: ./localstack
      dockerfile: localstack_setup.Dockerfile
    command: ./localstack_setup.sh
    env_file:
      - .env
    depends_on:
      - localstack
    environment:
      - LOCALSTACK_URL=$LOCALSTACK_URL
    volumes:
      - ./samples:/tmp/localstack/data
    networks:
      - localstack_network

networks:
  localstack_network:

Conclusion

By executing a single command, we are able to construct a comprehensive AWS environment that significantly simplifies maintenance and enhances collaboration. This environment is designed to provide a logical hierarchy and streamlined management of various AWS services within a local development setup. With this approach, developers can easily create and configure DynamoDB tables, SQS queues, EventBridge configurations, and other essential AWS components. By consolidating these services into a cohesive local development environment, it becomes easier to manage and collaborate on AWS-related tasks. This streamlined setup promotes efficiency, accelerates development workflows, and fosters smoother collaboration among team members.


I'm a Backend Engineer who loves to learn new things every day and evolve.