
Dynamic App Configuration: Passing secrets and sensitive values safely to applications at runtime using Docker and AWS

Abdelkarim El Moussaoui · Published in NEW IT Engineering · Apr 1, 2020

In the previous post, “Dynamic App Configuration: Inject configuration at runtime using Spring Boot and Docker”, we saw how flexible scanning and retrieving environment variables becomes with Spring Boot, and how helpful Docker is in providing those configuration parameters at runtime. However, passing sensitive data (e.g. credentials, API keys, etc.) directly into the Docker environment might make the whole system vulnerable to attacks. In this article, we will discover one of the safest methods to feed sensitive parameters into an application at runtime without revealing their values to third parties.

Docker environment variables

It’s good practice to separate an application from its configuration, in order to keep the business logic free of configuration parameters. Of course, this means that we have to provide these configs at deployment or runtime. In this regard, Docker’s environment variable interface offers a very helpful way to accomplish this and can be used in several ways:

Using the Docker --env / -e flag

When we create a Docker container, we can pass the needed variables as key-value pairs directly into the container creation command, either using the --env flag or its shorthand -e:

$ docker run -e VARIABLE1=value1 <image>
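For example, we can quickly check that the variable actually reaches the container’s environment; a minimal sketch using the public alpine image purely for illustration:

$ docker run --rm -e VARIABLE1=value1 alpine env | grep VARIABLE1
VARIABLE1=value1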

Using --env-file

Passing individual key-value pairs is only adequate as long as the number of parameters stays manageable; as soon as it grows, this quickly becomes cumbersome. Alternatively, we can use a text file that holds all our parameters in the same key-value format and pass it to the container creation command:

Example 1: Creating key-value pairs and storing them in the file env-variables.txt

$ echo VARIABLE1=value1 >> env-variables.txt

$ echo VARIABLE2=value2 >> env-variables.txt

Example 2: Passing the created file to the command

$ docker run --env-file env-variables.txt <image>
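As a quick check (again using alpine purely for illustration), both variables from the file end up in the container’s environment:

$ docker run --rm --env-file env-variables.txt alpine env | grep VARIABLE
VARIABLE1=value1
VARIABLE2=value2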

Case of sensitive values

More often than not, one or more of the parameters to be passed will be a database password, an API key or a token for an external service. In this situation, special caution is required: passing those sensitive values directly into the command (either as key-value pairs or as a file) is probably the least secure way, because it carries a high risk of those values leaking unexpectedly. At this point, it’s very important to understand that any user with access to the Docker runtime can read the sensitive values from the container using the inspect command as follows:

$ docker inspect 7b2b014a3571

Output:

// ...
"Config": {
    // ...
    "Env": [
        "VARIABLE1=value1",
        "VARIABLE2=value2",
        // ...
    ]
}

As we can see, all passed parameters are printed out including the sensitive values in plain text.
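It gets even easier: inspect’s format flag extracts exactly the environment block in one line (output trimmed, container ID as above):

$ docker inspect --format '{{json .Config.Env}}' 7b2b014a3571
["VARIABLE1=value1","VARIABLE2=value2", ...]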

In most cases it’s unlikely that unauthorised people gain access to the Docker runtime, barring hacking or leaked credentials (which is another topic in itself!). But what about working with multiple teams on a shared environment? For such situations, where security is a big concern, there are multiple solutions to avoid leaking secrets, such as Docker Secrets. However, we will go to the root of the problem and try to restrict or even avoid access to the Docker runtime altogether using AWS Fargate. Moreover, we will discover a very nice way to keep sensitive values secure using the AWS Systems Manager Parameter Store.

AWS Systems Manager Parameter Store (SSM)

The SSM Parameter Store provides a fully managed, hierarchical platform for storing and managing configuration data and secrets. Passwords, EC2 instance IDs, database connection strings, API keys or even user credentials can be stored as parameter values, either in plaintext or encrypted. A stored value can then be retrieved by referencing the unique name (or Amazon ARN) assigned during creation. Values stored in SSM can be consumed by various other services such as Amazon EC2, AWS Lambda, Amazon Elastic Container Service, etc.

Instead of putting our environment variables directly into the container creation command or saving them in a file, we will use the SSM Parameter Store to securely store our values and keep them away from unauthorised access:

Parameters created on SSM Parameter Store.
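The same parameters can of course also be created from the AWS CLI instead of the console. A minimal sketch, assuming a hypothetical parameter named /myapp/db-password:

$ aws ssm put-parameter --name "/myapp/db-password" --type SecureString --value "s3cr3t"

$ aws ssm get-parameter --name "/myapp/db-password" --with-decryption --query "Parameter.Value" --output text

SecureString parameters are encrypted with AWS KMS, so the value is stored encrypted at rest.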

AWS Fargate

AWS Fargate is a fully managed, serverless compute platform for running containers that works with both Amazon Elastic Container Service (ECS) and Amazon Elastic Kubernetes Service (EKS). Fargate lets you concentrate on developing your applications and services while it takes over everything concerning container and server management. It takes the pay-per-usage model further and lets you pay for resources per application. Furthermore, it improves security through application isolation: your application is deployed within a cluster that you define and specify beforehand, yet fully serverless!

What makes the difference?

Fargate Usage Benefits. Source: https://aws.amazon.com/fargate/?nc1=h_ls

Nevertheless, you still have to specify the containers running your application (i.e. the image URL, the required memory and compute resources, environment variables, etc.). Although you cannot directly access the underlying servers, Fargate allows you to get useful insights (such as application logs) through its Amazon CloudWatch integration.

After creating a Fargate cluster, we have to specify a Task Definition:

Creation of a Task Definition.

A Task Definition is required to run Docker containers in Fargate. Some of the parameters you can specify in a Task Definition include (a minimal sketch follows the list):

  • The Docker image
  • CPU and memory usage per task or per container
  • The launch type (EC2 or Fargate)
  • The logging configuration
  • The command to be run on container start
  • The IAM role to be used
  • Container environment variables
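A minimal task definition covering several of these parameters might look like the following sketch; the family, container name, image URL and log group are assumptions for illustration only:

{
  "family": "myapp",
  "requiresCompatibilities": ["FARGATE"],
  "networkMode": "awsvpc",
  "cpu": "256",
  "memory": "512",
  "executionRoleArn": "arn:aws:iam::AWS-Account-ID:role/ecsTaskExecutionRole",
  "containerDefinitions": [
    {
      "name": "myapp",
      "image": "my-registry/myapp:latest",
      "logConfiguration": {
        "logDriver": "awslogs",
        "options": {
          "awslogs-group": "/ecs/myapp",
          "awslogs-region": "AWS-Region",
          "awslogs-stream-prefix": "myapp"
        }
      }
    }
  ]
}

Stored as task-definition.json, it could then be registered via the CLI:

$ aws ecs register-task-definition --cli-input-json file://task-definition.json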

These container environment variables are what we are going to use as the “bridge” between our securely stored parameters in the SSM Parameter Store and the Docker runtime on Fargate.

Assigning Environment Variables for a Docker Container.

The interesting part here is the “Environment variables” section. Here we can reference our sensitive data via the ARN of the parameters stored in the SSM Parameter Store, without revealing their real values. To achieve this, we have to select “ValueFrom” instead of “Value”: the Task Definition will then resolve the parameter’s value from its AWS ARN and pass it to the Docker container at runtime, instead of passing the ARN string itself!

N.B.: the ARN (Amazon Resource Name) of an SSM parameter is structured as follows: arn:aws:ssm:AWS-Region:AWS-Account-ID:parameter/Parameter-Name

With the following information:

AWS-Region: the AWS region where the parameter was created

AWS-Account-ID: the account ID

Parameter-Name: the parameter name assigned at creation time
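In the task definition JSON, the “ValueFrom” option maps to the secrets field of a container definition. A sketch, reusing the hypothetical /myapp/db-password parameter from above:

"containerDefinitions": [
  {
    "name": "myapp",
    "image": "my-registry/myapp:latest",
    "secrets": [
      {
        "name": "DB_PASSWORD",
        "valueFrom": "arn:aws:ssm:AWS-Region:AWS-Account-ID:parameter/myapp/db-password"
      }
    ]
  }
]

The task execution role must be allowed to read the parameter (ssm:GetParameters, plus kms:Decrypt if a customer managed KMS key is used); the application itself only ever sees the resolved value as a regular environment variable named DB_PASSWORD.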

Conclusion

Spring Boot and Docker provide an easy way to inject application configuration at runtime. Nevertheless, more often than not we have to pass some sensitive data to the application (e.g. secrets, passwords, keys, etc.). In this post, we have discovered how to combine the power of Docker with the AWS SSM Parameter Store and Fargate to securely store sensitive data, so that it can be passed safely to an application through Docker’s environment variable interface, indirectly via a Fargate Task Definition.
