Reading Environment Variables from S3 in a Docker container

Aidan Hallett
3 min read · Apr 10, 2018

Whilst there are a number of different ways to manage environment variables for your production environments (like using EC2 Parameter Store, storing environment variables as a file on the server (not recommended!), or using an encrypted S3 object), I wanted to write a simple blog on how to read S3 environment variables with Docker containers. It is based on Matthew McClean's tutorial, How to Manage Secrets for Amazon EC2 Container Service–Based Applications by Using Amazon S3 and Docker. Unlike Matthew's piece, though, I won't be using CloudFormation templates and won't be looking at any specific implementation.

EDIT: Since writing this article AWS has released AWS Secrets Manager, another method of storing secrets for apps.

1. Creating an S3 bucket and restricting access

Our first task is to create a new bucket and ensure that we use encryption here. In this blog, we'll be using AWS server-side encryption. Create an object at /develop/ms1/envs by uploading a text file, and ensure that encryption is enabled.
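For example, you could create the bucket, enable default encryption and upload the file with the AWS CLI (a minimal sketch; my-app-secrets is a hypothetical bucket name and envs is your local text file):

aws s3api create-bucket --bucket my-app-secrets --region us-east-1
aws s3api put-bucket-encryption --bucket my-app-secrets \
    --server-side-encryption-configuration '{"Rules":[{"ApplyServerSideEncryptionByDefault":{"SSEAlgorithm":"AES256"}}]}'
aws s3 cp envs s3://my-app-secrets/develop/ms1/envs --sse AES256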

2. Creating an IAM role & user with appropriate access

The container will need permissions to access S3. We will create an IAM policy that grants access to only the specific file for that environment and microservice. The host machine will then be able to provide the given task with the required credentials to access S3.

Navigate to IAM and select Roles in the left-hand menu. Click Create a Policy and select S3 as the service. We only want the policy to include access to a specific action and a specific bucket. Select the GetObject action in the Read access level section. Then select the resource that you want to enable access to, which should include a bucket name and a file or file hierarchy. For example, the ARN should be in this format:

arn:aws:s3:::<bucketname>/develop/ms1/envs
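For reference, the finished policy document should look something like this (a minimal sketch; substitute your own bucket name for <bucketname>):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::<bucketname>/develop/ms1/envs"
    }
  ]
}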

3. You should then create a separate environment file and IAM policy for each environment/microservice combination.

4. Assign the policy to the relevant role of the EC2 host. If you are using ECS to manage your Docker containers, ensure that the policy is added to the appropriate ECS service role. Likewise, if you are managing them yourself on EC2 or with another solution, attach the policy to the role that the EC2 server has attached.
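If you prefer the CLI, attaching the policy to a role looks something like this (hypothetical role and policy names; <account-id> is a placeholder):

aws iam attach-role-policy \
    --role-name ecsInstanceRole \
    --policy-arn arn:aws:iam::<account-id>:policy/develop-ms1-env-read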

5. Creating a Dockerfile. For my Dockerfile, I created an image that contained the AWS CLI and was based on Node 8.9.3. The script below then sets a working directory, exposes port 80 and installs the node dependencies of my project.

Dockerfile
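The original Dockerfile isn't reproduced here, but a minimal sketch matching the description above (Node 8.9.3 base image, AWS CLI installed, working directory, port 80, node dependencies, startup script) might look like:

FROM node:8.9.3

# Install the AWS CLI so startup.sh can pull the env file from S3
RUN apt-get update && apt-get install -y awscli && rm -rf /var/lib/apt/lists/*

# Set a working directory and install the node dependencies
WORKDIR /usr/src/app
COPY package.json .
RUN npm install
COPY . .

EXPOSE 80
CMD ["./startup.sh"]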

Let's focus on the startup.sh script of this Dockerfile. In this case, the startup script retrieves the environment variables from S3; once retrieved, all the variables are exported so the node process can access them. The script itself uses two environment variables passed into the Docker container: ENV (environment) and ms (microservice). Remember, it's important to grant each Docker instance only the required access to S3 (e.g. the develop Docker instance won't have access to the staging environment variables).

startup.sh
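A minimal sketch of such a script, assuming the bucket name is baked in as a placeholder (<bucketname>) and that ENV and ms are supplied to the container:

#!/bin/bash
# Fetch the environment file for this environment/microservice from S3.
# <bucketname> is a placeholder for your own bucket.
aws s3 cp "s3://<bucketname>/$ENV/$ms/envs" /tmp/envs

# Export every variable in the file so the node process can access them.
set -a
source /tmp/envs
set +a

# Start the app (the entry point here is an assumption).
node server.js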

The environment file is in the form:

env file
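The embedded example isn't shown here; a plain KEY=value file works with the export approach above (hypothetical values):

DB_HOST=mydb.example.com
DB_USER=admin
API_KEY=abc123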

Once you have created the startup script in your web app directory, run the following to make it executable:

chmod +x startup.sh

The startup script and Dockerfile should be committed to your repo.

Now when your Docker image starts, it will execute the startup script, pull the environment variables from S3 and start the app, which has access to those environment variables.
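For example, starting the develop instance of the ms1 microservice might look like this (my-app is a hypothetical image name):

docker run -e ENV=develop -e ms=ms1 -p 80:80 my-app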

It’s also important to remember to restrict access to these environment variables with your IAM users if required!

Please feel free to add comments on ways to improve this blog or questions on anything I’ve missed!
