Docker & AWS Mocking

Bo Bracquez
3 min read · Oct 30, 2019


As described in a previous post, I was looking for a way to mock AWS services, and I found a candidate: LocalStack!

In this post I will be going through the process of setting up LocalStack with Docker and running some smoke tests. Before starting this article I did not realise that Docker Desktop is not supported on Windows 10 Home, so I had to find a workaround: I tried using the old Docker Toolbox for this setup.

Getting started

Attempt #1

LocalStack already has a docker-compose file in its repository, which should ease the process.

Even though the docker-compose bundled with Docker Toolbox is an older release, and the compose file declares format version 2.1, I was able to run the docker-compose file.

$ docker-compose --version
docker-compose version 1.24.1, build 4667896b
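For reference, a minimal compose file along these lines is enough to bring LocalStack up. This is a sketch, not LocalStack's exact file: the `SERVICES` list, the port ranges, and the dashboard mapping (mapped here to 8055, the port used later in this post, assuming the dashboard listens on 8080 inside the container) may differ in your version of the repository.

```
version: '2.1'
services:
  localstack:
    image: localstack/localstack
    ports:
      - "4567-4597:4567-4597"   # per-service endpoints (S3 ends up on 4572)
      - "8055:8080"             # web dashboard, exposed on 8055 on the host
    environment:
      - SERVICES=s3             # only start the services you need
      - DEBUG=1
```

Running `docker-compose up` in the folder containing this file should pull the image and start the container.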

Now I was wondering if it actually worked… I crossed my fingers and went to http://localhost:8055, but nothing… What could it be…?

Attempt #2

It looked like some services were hanging in the Docker container, preventing everything from loading correctly. I tried switching services on and off, but nothing seemed to work. Maybe it is indeed related to an outdated Docker (Toolbox) setup…?

Attempt #3

And yes! Docker Toolbox does not actually run on ‘localhost’ because it runs inside a Linux VM. So what is the IP of this VM…? 192.168.99.100!

$ docker-machine ip default
192.168.99.100

After navigating to http://192.168.99.100:8055/#!/infra we can see that the dashboard is up and running!

Note: if you are using plain Docker rather than Docker Toolbox, you can simply use localhost.

Creating an S3 bucket

So how do we create S3 buckets and all the fancy-pants magic? It’s simple: the AWS CLI. I downloaded and installed it, but how do you use it with LocalStack? Let me introduce you to the --endpoint parameter.

Before we can start using the AWS CLI we need to configure our credentials. Now, things started to become confusing for me… Why do I need credentials and where do I get them? It turns out you can fill in dummy values and LocalStack accepts them, yay.
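One quick way to do this is via environment variables instead of `aws configure`. The values below are arbitrary placeholders (LocalStack does not validate them); the region is an assumption, any region string works:

```shell
# Dummy credentials -- LocalStack accepts any values here
export AWS_ACCESS_KEY_ID=test
export AWS_SECRET_ACCESS_KEY=test
export AWS_DEFAULT_REGION=us-east-1
```

With these set, every `aws` invocation in the same shell session will pick them up automatically.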

So how do we use the --endpoint parameter? We simply set it to the URL and port listed on LocalStack’s GitHub page; in our case we need to use 192.168.99.100 instead of localhost. Let’s say we want to list all the S3 buckets…

$ aws --endpoint=http://192.168.99.100:4572 s3api list-buckets
{
    "Buckets": [],
    "Owner": {
        "DisplayName": "webfile",
        "ID": "bcaf1ffd86f41161ca5fb16fd081034f"
    }
}

This seems correct; we have not created any yet. Let’s create a bucket ‘my-bucket’ and check that it was added correctly. (You can also use the dashboard to view (but not create/edit) certain instances, but I find the CLI easier to use.)

$ aws --endpoint=http://192.168.99.100:4572 s3api create-bucket --bucket my-bucket
$ aws --endpoint=http://192.168.99.100:4572 s3api list-buckets
{
    "Buckets": [
        {
            "Name": "my-bucket",
            "CreationDate": "2006-02-03T16:45:09.000Z"
        }
    ],
    "Owner": {
        "DisplayName": "webfile",
        "ID": "bcaf1ffd86f41161ca5fb16fd081034f"
    }
}

Uploading a file to S3

So, we are able to create buckets now, but it would be rather dull if we could only create buckets with no content/files in them. Imagine we want to upload a cat picture to the S3 bucket we previously created.

Source: https://pixabay.com/photos/cat-young-animal-curious-wildcat-2083492/

Locally I have created a folder named ‘my-bucket’ containing the above image, named ‘cat.jpg’. I use s3 sync to sync the local folder (.) to my-bucket (s3://my-bucket), and then s3 ls to list all the files in my-bucket (s3://my-bucket).

$ aws --endpoint=http://192.168.99.100:4572 s3 sync . s3://my-bucket
upload: .\cat.jpg to s3://my-bucket/cat.jpg
$ aws --endpoint=http://192.168.99.100:4572 s3 ls s3://my-bucket
2019-10-21 11:54:52 540954 cat.jpg

It works! We are able to upload files to our mocked S3 service running in Docker. Pretty impressive if I may say so — it took some tinkering and figuring things out, but it works!
