Deploy containerized applications using AWS Copilot CLI
How can we deploy containerized apps easily?
Required knowledge
This blog entry assumes you have some basic knowledge of how Amazon ECS works. It also requires a basic understanding of Docker and of how web applications work in general.
Context
At Sudo Labs we needed a simple application for managing hospitals in one of our projects. The idea was to develop this app as fast as possible while keeping the same stack our customer was using.
After some quick research we decided to use Django, a web framework for developing apps in Python. We wanted to deploy this application on AWS ECS to stay aligned with the rest of the project architecture.
App Architecture
This architecture is described in the docs. It's basically a web application deployed in an ECS cluster using Fargate. There is one addition to this architecture: we store the data in RDS, which will be explained later in the post.
Why Copilot
We wanted a tool that could create environments programmatically (like Terraform) and also support deploying app changes in an easy way. We also needed a way to execute tasks (like migrations) in the different environments.
We found a tool that could help us build, deploy and run different tasks, and we wanted to try it: Copilot CLI (https://aws.github.io/copilot-cli/).
Copilot configuration
After following the instructions for installing Copilot CLI (https://aws.github.io/copilot-cli/docs/getting-started/install/) we created a test environment with the command copilot app init --domain staging.test.example.com. It is very likely you will want to set the --domain flag in this step, because it is the only time you will be able to set a custom domain.
This created the infrastructure we needed and also a manifest file that represents it.
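As a reference, the full bootstrap could look like the following. This is a sketch: it assumes a Homebrew install on macOS, a service named app (the name we deploy later), and a Dockerfile at the repo root; adjust the names to your project.

```shell
# Install Copilot (macOS via Homebrew; see the install docs for other platforms)
brew install aws/tap/copilot-cli

# Create the application; --domain can only be set at this point
copilot app init --domain staging.test.example.com

# Create the test environment
copilot env init --name test

# Create the service from the local Dockerfile
copilot init --name app --type "Load Balanced Web Service" --dockerfile ./Dockerfile
```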
Environment Variables
There are some env variables the app needs, which we will have to set somehow. The Copilot manifest file allows us to add variables to specific environments (test in this case). Here is how the configuration was set:
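A sketch of the relevant part of the service manifest is shown below; the variable names here are illustrative, not the exact ones from our project, but the environments/variables structure is the one Copilot expects.

```yaml
# copilot/app/manifest.yml (excerpt; variable names are illustrative)
name: app
type: Load Balanced Web Service

http:
  # The custom domain alias for this service
  alias: staging.test.example.com

environments:
  test:
    variables:
      DJANGO_SETTINGS_MODULE: config.settings.test
      DJANGO_ALLOWED_HOSTS: staging.test.example.com
```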
Secrets
There are also some other variables that contain sensitive information, like AWS credentials, API keys, etc. Copilot supports adding them via AWS Systems Manager Parameter Store (SSM). For creating a secret we used the copilot secret init command. The command asks for the key and the value of the variable and then outputs instructions on how to include it in the manifest file.
You can see them in the Parameter Store section of the AWS console; they always start with /copilot. Some of them are created when you init the application.
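After running copilot secret init for each secret, the manifest wiring it suggests looks roughly like this. This is a sketch using the two secrets we need later for migrations; the SSM paths follow Copilot's /copilot/&lt;app&gt;/&lt;env&gt;/secrets/&lt;name&gt; convention.

```yaml
# copilot/app/manifest.yml (excerpt)
secrets:
  # Each entry maps an env variable in the container to an SSM parameter
  DATABASE_URL: /copilot/${COPILOT_APPLICATION_NAME}/${COPILOT_ENVIRONMENT_NAME}/secrets/DATABASE_URL
  DJANGO_SECRET_KEY: /copilot/${COPILOT_APPLICATION_NAME}/${COPILOT_ENVIRONMENT_NAME}/secrets/DJANGO_SECRET_KEY
```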
Deploy
For doing a deploy we use the copilot deploy command. Because for now we only have the test environment, we run copilot svc deploy --name app --env test.
This command basically rebuilds the Docker image, pushes it to the ECR repository and updates the running service with that image.
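The image being rebuilt is the one defined by the project's Dockerfile. Ours is not shown in this post, but a minimal Dockerfile for a Django app could look like this (a sketch; the config.wsgi module path and gunicorn server are assumptions, adjust them to your project layout):

```dockerfile
# Minimal image for a Django app served by gunicorn (illustrative)
FROM python:3.9-slim

WORKDIR /app

# Install dependencies first to take advantage of layer caching
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .

EXPOSE 8000
CMD ["gunicorn", "config.wsgi:application", "--bind", "0.0.0.0:8000"]
```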
When the command finishes it will print the URL of the app running in the test environment. Because we set an alias, the URL is https://staging.test.example.com:
✔ Deployed hospital-manager-app, you can access it at https://staging.test.example.com.
Tasks
Sometimes we need not only to deploy the application but also to run some tasks. One example would be running migrations. The way we can do this is using the copilot task run command. This will build the same image, but instead of deploying the app it will run the command you specify.
One of the "problems" with this command is that it doesn't use the env / secret variables specified in the manifest file, so we need to include them manually. Here is the command we ran for executing migrations:
As you can see, the command is python manage.py migrate; we also specify the test environment with the --env argument. We include only the secrets we need, which in this case are the database URL and the Django secret key.
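The full invocation could look like the following sketch. The application name hospital-manager and the SSM paths are assumptions based on our setup; use the paths copilot secret init created for you.

```shell
# Run migrations as a one-off Fargate task in the test environment
copilot task run \
  --app hospital-manager \
  --env test \
  --command "python manage.py migrate" \
  --secrets DATABASE_URL=/copilot/hospital-manager/test/secrets/DATABASE_URL,DJANGO_SECRET_KEY=/copilot/hospital-manager/test/secrets/DJANGO_SECRET_KEY \
  --follow
```

The --follow flag streams the task logs to the terminal, so you can see the migration output directly.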
One tip for running it easily is to put it in a Makefile and run it using the make command. More info about that here: https://linoxide.com/linux-make-command-examples. Then we can run it just by executing make migrate-test.
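A sketch of the Makefile target (the hospital-manager app name and the secret flags are assumptions; they mirror the copilot task run command above):

```makefile
# Makefile (sketch): wrap the long copilot invocation in a short target
migrate-test:
	copilot task run \
		--app hospital-manager \
		--env test \
		--command "python manage.py migrate" \
		--follow
```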
Other tasks we needed in this particular project were running the assets pipeline and creating an admin user; we won't include examples in this blog, but they were implemented in a similar way.
Database
Copilot supports adding a database as part of the architecture; at the time of writing this post it only supports Aurora Serverless. We decided to create the DB manually and attach it to the generated VPC, because we didn't need a big DB for our test environment and the smallest RDS instance costs about half as much as the smallest Aurora Serverless configuration (about $15 per month vs $30; references here and here).
Do you like this implementation? Do you have any comments or suggestions? Please leave us a comment, we would love to hear from you.
Interested in getting a serverless environment set up to help scale your business? Reach out to us at Sudo Labs to see if we can help.