Working with AWS and Sentry through a CLI by running it in a Docker container

Alexey Samoshkin
7 min read · Aug 23, 2021


Docker official logo image

Cloud providers like AWS or GCP typically provide an HTTP API that lets you interact with their services programmatically. This is not limited to cloud providers: Sentry, the popular error tracking service I was working with recently, comes with an HTTP API as well.

Such an API is useful for all kinds of integrations and automation. For example, within your CI/CD pipeline, you might want to notify Sentry about an upcoming release version and upload source map files for the minified and optimized JS bundles produced by Webpack.
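
To give a rough idea of what the raw HTTP interaction looks like, here is a sketch of creating a release through Sentry’s Web API with curl. The organization slug, project slug, auth token and version are placeholders, and the exact endpoint and payload should be double-checked against Sentry’s API reference.

# Sketch only: announce a new release to Sentry over plain HTTP
curl https://sentry.io/api/0/organizations/YOUR_ORG/releases/ \
  -H "Authorization: Bearer YOUR_AUTH_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"version": "my-app@1.2.3", "projects": ["YOUR_PROJECT"]}'

Crafting requests like this by hand quickly gets tedious, which is exactly where dedicated CLI tools come in.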

Services like Sentry or AWS go even further and provide CLI tools that abstract developers away from the nuances of HTTP interactions: how to craft the request properly, which URL to send it to, and how to parse and read the response.

All you need to do as a developer is to install a particular tool on your local machine and authenticate yourself by acquiring some kind of access token.

The question is: what’s the point of installing binaries on my local machine and littering a couple of directories, if I can just pull a Docker image and run the same commands by spawning them inside a Docker container?
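
In other words, the whole post boils down to a single docker run template (the paths and image name below are placeholders rather than a specific tool):

# Run a CLI tool from its image in a throwaway container:
# mount the config it expects, plus the current directory for file exchange
docker run --rm -it \
  -v <host-config-dir>:<container-config-dir> \
  -v $(pwd):<container-workdir> \
  <image> <subcommand> [args...]

The rest of the article fills in this template for the AWS CLI and the Sentry CLI.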

Working with S3 through AWS CLI by running it in a Docker container

First, let’s pull the official amazon/aws-cli Docker image.

$ docker pull amazon/aws-cli
Using default tag: latest
latest: Pulling from amazon/aws-cli
d36ac8072fa2: Already exists
c189463b6c08: Pull complete
cd55a4a954a6: Pull complete
afcddbfdb53e: Pull complete
16858f14c6a1: Pull complete
Digest: sha256:488f27c19e1acb098a3eef27818102ebe510ec44f454fbfde9b7887a448b763d
Status: Downloaded newer image for amazon/aws-cli:latest
docker.io/amazon/aws-cli:latest

One-time configuration

You need to configure a few settings, including the access key ID and the secret access key associated with a particular IAM user.

Obtain access key ID and secret from the IAM dashboard

Configuration settings are stored under the ~/.aws path in your $HOME directory. There are two files: config and credentials.

$ ls -1 ~/.aws
config
credentials

For persistence, we will keep those files on the local machine and pass them into the container through a volume mount: ~/.aws => /root/.aws. Optionally, it’s useful to add an extra mount point to share files between the local file system and the Docker container: $(pwd) => /aws

$ docker run --rm -it -v ~/.aws:/root/.aws -v $(pwd):/aws amazon/aws-cli --help
usage: aws [-h] [--profile PROFILE] [--debug]
optional arguments:
-h, --help show this help message and exit
--profile PROFILE
--debug

When you work on different projects (e.g. home, work), each with its own IAM users and access credentials, AWS provides the concept of a profile to manage and use multiple identities at the same time. It’s a very convenient feature. For this blog post, I will use pet_project as the profile name.

Since we haven’t run through the initial configuration yet, let’s do it right away. Use the aws configure subcommand, which will interactively prompt you with several questions.

$ docker run --rm -it -v ~/.aws:/root/.aws -v $(pwd):/aws amazon/aws-cli configure --profile pet_project
AWS Access Key ID [None]: AKIAIH627HJ6TNNEGPIQ
AWS Secret Access Key [None]: .....
Default region name [None]: us-east-2
Default output format [None]: json

After the command has run to completion, the Docker container is stopped and is automatically removed. That is, each command invocation spawns a new short-lived container.

Let’s explore the contents of the ~/.aws/config file, which is in the INI format. Notice how settings from different profiles are kept under different sections. The same holds for the ~/.aws/credentials file, which stores access key IDs and secrets. This is how the AWS CLI allows working with multiple identities simultaneously.

[profile work_local_env]
region = eu-central-1
output = json
[profile pet_project]
region = us-east-2
output = json
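
For reference, the ~/.aws/credentials file follows the same INI layout, except that its section names drop the profile prefix. The key values below are obviously placeholders.

[work_local_env]
aws_access_key_id = ....................
aws_secret_access_key = ....................
[pet_project]
aws_access_key_id = ....................
aws_secret_access_key = ....................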

Now, let’s explore available S3 buckets. The profile is specified through a command-line argument.

$ docker run --rm -it -v ~/.aws:/root/.aws -v $(pwd):/aws amazon/aws-cli --profile pet_project s3 ls
2021-08-23 16:35:34 pet-project-full-resolution-images

An alternative and more convenient way of selecting the profile would be exporting the AWS_PROFILE environment variable.

$ export AWS_PROFILE=pet_project
$ docker run --rm -it -e AWS_PROFILE -v ~/.aws:/root/.aws -v $(pwd):/aws amazon/aws-cli s3 ls s3://pet-project-full-resolution-images
2021-08-23 17:01:37    1931986 2020-11-05_17-33-02.png
2021-08-23 17:01:38    2117010 2020-11-06_10-57-17.png
2021-08-23 17:01:38     242779 2020-11-12_12-09-37.png
2021-08-23 17:01:38    2450599 2020-11-14_12-31-54.png
2021-08-23 17:01:38    2779765 2020-11-25_18-50-09.png
2021-08-23 17:01:38     103076 2020-11-25_19-10-15.png

Manage environment variables on a per-directory basis using the direnv tool

How and when you configure the environment is up to you. You might set variables in your SHELL configuration files (e.g. ~/.bash_profile or ~/.zshenv). Personally, I prefer a tool called direnv. It allows me to maintain a different set of environment variables per directory. It requires creating a special .envrc file, which is automatically loaded by the tool when I cd into the directory.

$ pwd
/Volumes/DATA/projects/personal/blog_on_medium
$ ls -1 ./using_cli_through_docker/.*
./using_cli_through_docker/.envrc
$ cat ./using_cli_through_docker/.envrc
export AWS_PROFILE=pet_project

Now, when I cd into the directory for the first time, direnv will refuse to source the file, since I haven’t approved it yet. This is a security measure since an automatically loaded .envrc file is a great place for code injection attacks.

$ cd using_cli_through_docker
direnv: error .envrc is blocked. Run `direnv allow` to approve its content.

To approve the file, run the direnv allow command. Note that any future changes to the file will require you to re-approve it once again for security purposes.

$ direnv allow
direnv: loading .envrc
# check if the AWS_PROFILE is set
$ echo $AWS_PROFILE
pet_project

Usually, I have a dedicated .envrc file for each project I’m working on. It’s git-ignored and contains various local-only settings and secrets that are not supposed to be shared with the team. When I jump between projects, direnv automatically unloads the old file and loads the new one. It works behind the scenes, so sometimes I even forget that I have such a great tool.
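
As a hypothetical illustration of that setup (the variable names other than AWS_PROFILE are made up for the example), a project root could contain:

# .gitignore — keep the local-only environment out of version control
.envrc

# .envrc — per-project environment, loaded by direnv on cd
export AWS_PROFILE=pet_project
export MY_LOCAL_ONLY_SECRET=...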

Create SHELL alias for ease of use

So far so good. But it’s still tough to type that long docker command while trying not to forget all the details: how to correctly mount the volumes, which environment variables to pass. A better solution is to extract the command into a SHELL alias. Since I’m using zsh, I’ll declare the alias in the ~/.zshrc file.

# Alias to work with AWS CLI through the docker image
alias aws='docker run --rm -it -e AWS_PROFILE -v ~/.aws:/root/.aws -v $(pwd):/aws amazon/aws-cli'

Now the usage of the AWS CLI looks like this, which is indistinguishable from the regular installation approach.

$ aws --version
aws-cli/2.2.31 Python/3.8.8 Linux/4.19.76-linuxkit docker/x86_64.amzn.2 prompt/off
$ aws s3 ls s3://pet-project-full-resolution-images
2021-08-23 17:01:37    1931986 2020-11-05_17-33-02.png
2021-08-23 17:01:38    2117010 2020-11-06_10-57-17.png
2021-08-23 17:01:38     242779 2020-11-12_12-09-37.png
2021-08-23 17:01:38    2450599 2020-11-14_12-31-54.png
2021-08-23 17:01:38    2779765 2020-11-25_18-50-09.png
2021-08-23 17:01:38     103076 2020-11-25_19-10-15.png

Using the Sentry CLI by running it in a Docker container

The same approach can be applied to any CLI tool, provided that a corresponding Docker image exists.

Let’s take a look at the Sentry CLI. On my current project, I’m going to use the Sentry CLI within a CI/CD pipeline to notify Sentry about the new release version and to upload the JS source map files of a React app for better stack traces.
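
As a rough sketch, such a pipeline step could look like the commands below. The release version and build path are placeholders; consult the Sentry CLI docs for the exact commands and flags supported by your version.

# Sketch only: announce a release and upload source maps from a CI job
sentry-cli releases new "my-app@1.2.3"
sentry-cli releases files "my-app@1.2.3" upload-sourcemaps ./build/static/js
sentry-cli releases finalize "my-app@1.2.3"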

Fortunately, the Sentry team distributes their CLI tool as a docker image.

$ docker pull getsentry/sentry-cli
Using default tag: latest
latest: Pulling from getsentry/sentry-cli
339de151aab4: Pull complete
2e65fd2d37d5: Pull complete
4518c0069cf2: Pull complete
6543a499611e: Pull complete
2ade6052dc88: Pull complete
Digest: sha256:31b8d58091557b8266a143cd8b1ce2e683cd734f65ba145251abc31c71d4e304
Status: Downloaded newer image for getsentry/sentry-cli:latest
docker.io/getsentry/sentry-cli:latest

Configuration is kept in the ~/.sentryclirc file. Read through the configuration and authentication guide. If you’re running a self-hosted Sentry, specify its URL. Other non-secret settings include your organization name and project name. The project name is optional, since you can specify it later via a command-line argument.

[defaults]
url=${YOUR_SENTRY_URL}
org=${YOUR_SENTRY_ORGANIZATION}
project=${YOUR_SENTRY_PROJECT_NAME}
[auth]
token=${YOUR_SECRET_ACCESS_TOKEN}
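
In a CI/CD environment you might prefer not to write a config file at all. The Sentry CLI can also pick up these settings from environment variables; I’m assuming the standard SENTRY_URL, SENTRY_ORG, SENTRY_PROJECT and SENTRY_AUTH_TOKEN names here, so double-check them against the docs. A sketch of that variant:

# Sketch only: pass Sentry settings into the container as environment variables
docker run --rm \
  -e SENTRY_URL -e SENTRY_ORG -e SENTRY_PROJECT -e SENTRY_AUTH_TOKEN \
  getsentry/sentry-cli info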

The secret part is the auth token, which you can generate from Sentry Web UI at the “Settings -> Account -> API -> Auth tokens” page.

Obtain an access token from Sentry Web UI

Unlike the AWS CLI, the .sentryclirc file has no concept of profiles for managing multiple identities. So I chose to keep the .sentryclirc file inside the project directory rather than in the $HOME directory. Thus, each project directory gets its own .sentryclirc file.

Here is a very simple command that lists all configured projects.

$ docker run --rm -v $(pwd)/sentry/.sentryclirc:/root/.sentryclirc getsentry/sentry-cli projects list
+----+----------------------+------+----------------------+
| ID | Slug                 | Team | Name                 |
+----+----------------------+------+----------------------+
| 4  | app_project_1        | dev  | project_1            |
| 2  | app_project_2        | dev  | project_2            |
| 3  | app_project_3        | dev  | project_3            |
+----+----------------------+------+----------------------+

As long as I stick with the same convention of keeping the .sentryclirc file at the ./sentry/.sentryclirc path in each project I work on, I can create a reusable SHELL alias as well.

# Alias to work with Sentry CLI through the docker image
alias sentry-cli='docker run --rm -v $(pwd)/sentry/.sentryclirc:/root/.sentryclirc getsentry/sentry-cli'

Then, the usage is no different from a regular Sentry CLI installation.

$ sentry-cli -V
sentry-cli 1.68.0
$ which sentry-cli
sentry-cli: aliased to docker run --rm -v $(pwd)/sentry/.sentryclirc:/root/.sentryclirc getsentry/sentry-cli

Conclusion

Docker is an amazing tool that proves to be useful even in unexpected scenarios like these.
