Deploying a Multi-Container Web Application — AWS Elastic Beanstalk

Kartik Mittal
Analytics Vidhya
May 17, 2020

Though there are a bunch of resources on building and deploying containerized web apps on AWS Elastic Beanstalk, there is still a gap when it comes to an end-to-end workflow for an application based on a microservices architecture — from local development setup to finally building and deploying on AWS.

https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/images/aeb-multicontainer-docker-example.png

Prerequisites —

  1. Some familiarity with Docker and web applications is assumed.
  2. A working application that can be containerized — you can refer to my sample implementation to follow along with this read. (Branch names are self-explanatory; use the branch closest to your starting point.)
  3. A basic understanding of CI/CD workflows, to get a high-level picture of what we are trying to achieve here.
  4. A basic idea of how AWS Elastic Beanstalk works with EC2 instances and other resources under the hood. This webinar is great for getting some clarity.

What’s covered in this read —

  1. Setting up AWS Elastic Beanstalk application and environment
  2. Creating an ECS task definition (Dockerrun.aws.json), used by Elastic Beanstalk, from a docker-compose file
  3. Configuring and running the application locally for verification against the Elastic Beanstalk environment and ECS task definition, with the help of awsebcli
  4. Creating a build and deployment pipeline using Travis

[NOTE: Although the ideal approach for running a database is to use a managed resource like RDS or DynamoDB and link it to the application, that involves VPCs and security groups, which are out of scope for this read. The sample application itself is for demo purposes only.]

Creating a new Elastic Beanstalk Environment

This section just follows the steps from the official AWS documentation.

  • AWS Management Console — you need to sign up and create an AWS account; this can be done easily, and once done you can log in and access the console.
  • Once you are here, under “Find Services” search for “Elastic Beanstalk”.
  • Here we have two tabs, “Applications” and “Environments”. We start by creating a new application; later we will create a new environment in which to run that application.
  • After selecting “Create Application”, give your application a name. The important bit here is to select the platform as “Docker” and the platform branch as “Multi-container Docker..”
  • By default we start with a pre-configured sample application, so we can leave that checked and create the application.
  • This should create an environment for your application by default.
  • You can verify that you have entries under both the “Environments” and “Applications” tabs.
  • Next we use the IAM (Identity and Access Management) service to create access keys and set permissions for the environment. The keys will be used to connect to the environment from the CLI, and also by Travis for deployment.
  • Here, under the “Users” tab, you need to create a new user using “Add User”.
  • Make sure you select “Programmatic access”.
  • Next we set up permissions. We will give this user full access by attaching an existing policy — search for the policy named “AWSElasticBeanstalkFullAccess”.
  • Next → Next → Create user. Once the user is created, a screen with credentials — the access key and secret access key — will come up; make sure you download and save the keys. (You can retrieve or create new access keys for the user later as well.)

Local Setup for Elastic Beanstalk

  • Installing awsebcli — there are instructions in the official documentation, but somehow they didn’t work for me. There was also an issue with version 3.18, so I had to downgrade and install version 3.17.1. Strangely, it only worked with Python 3.6, hence six is specified explicitly.
$sudo -H pip3 install awsebcli==3.17.1 --upgrade --ignore-installed six
  • verify with —
$eb --version 
EB CLI 3.17.1 (Python 3.6.8)
  • Create the CLI config with the access keys — create a ~/.aws/config file (it should be there once awsebcli is installed) and set the contents to something as follows —
$cat ~/.aws/config
[default]
aws_access_key_id=your_key
aws_secret_access_key=your_secret_key
region=us-east-2
  • The region can be found on the AWS console, use the region for the application that was created.
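A quick way to sanity-check that the config file parses and contains the keys the CLI expects is a few lines with the standard library. This is a minimal sketch — the values below are placeholders mirroring the sample file, not real credentials:

```python
import configparser

# Parse an AWS-CLI-style config and confirm the keys awsebcli expects.
# The sample content below mirrors the ~/.aws/config shown above.
sample = """\
[default]
aws_access_key_id=your_key
aws_secret_access_key=your_secret_key
region=us-east-2
"""

config = configparser.ConfigParser()
config.read_string(sample)

required = ("aws_access_key_id", "aws_secret_access_key", "region")
missing = [k for k in required if k not in config["default"]]
# An empty `missing` list means the file looks complete.
```

To check your actual file, swap `read_string(sample)` for `config.read(os.path.expanduser("~/.aws/config"))`.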
  • Creating the ECS task definition file — there is a very handy utility, container-transform, that helps transform a docker-compose file into a Dockerrun.aws.json definition file. Once you install it, you can create a definition file which we can test with eb local run. The following command shows the kind of definition file that will be created and lists the content that needs to be added.
$cat docker-compose.yml | container-transform  -v
{
  "AWSEBDockerrunVersion": 2,
  "containerDefinitions": [
    {
      "name": "mongo",
      "image": "km2411/flask-mongo-aws:latest",
      "essential": true,
      "memory": 200,
      "cpu": 1,
      "portMappings": [
        {
          "hostPort": 27017,
          "containerPort": 27017
        }
      ]
    },
    {
      "name": "nginx",
      "image": "km2411/flask-nginx-aws:latest",
      "environment": [],
      "essential": true,
      "memory": 200,
      "mountPoints": [],
      "portMappings": [
        {
          "hostPort": 80,
          "containerPort": 80,
          "protocol": "tcp"
        }
      ],
      "links": [
        "service-1"
      ]
    },
    {
      "name": "service-2",
      "image": "km2411/flask-service2-aws:latest",
      "essential": true,
      "memory": 200,
      "cpu": 1,
      "links": [
        "mongo"
      ],
      "environment": [
        {
          "name": "SERVICE2",
          "value": "service-2"
        },
        {
          "name": "SERVICE2_PORT",
          "value": "9090"
        }
      ]
    },
    {
      "name": "service-1",
      "image": "km2411/flask-service1-aws:latest",
      "essential": true,
      "memory": 200,
      "cpu": 1,
      "portMappings": [
        {
          "hostPort": 8080,
          "containerPort": 8080
        }
      ],
      "links": [
        "service-2"
      ],
      "environment": [
        {
          "name": "MONGO_HOST",
          "value": "mongo"
        },
        {
          "name": "MONGO_PORT",
          "value": "27017"
        }
      ]
    }
  ]
}

[NOTE: This can include other configurations as well — for example, no volumes have been created for the DB here, but you can add them as your workflow requires.]
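The core of what container-transform does can be sketched in a few lines of Python. This is a deliberately simplified version to show the shape of the mapping — it handles only a handful of compose keys (the `mem_limit` fallback and the `environment`-as-dict assumption are mine; the real tool covers far more of the compose specification):

```python
# Minimal sketch of a docker-compose service -> ECS containerDefinition
# mapping, in the spirit of container-transform. Simplified; handles only
# a few keys and assumes environment is given as a mapping.
def to_container_definition(name, service, default_memory=200):
    definition = {
        "name": name,
        "image": service.get("image", name),
        "essential": True,
        "memory": service.get("mem_limit", default_memory),
    }
    # "8080:8080" -> {"hostPort": 8080, "containerPort": 8080}
    if "ports" in service:
        definition["portMappings"] = [
            {"hostPort": int(h), "containerPort": int(c)}
            for h, c in (p.split(":") for p in service["ports"])
        ]
    if "links" in service:
        definition["links"] = list(service["links"])
    if "environment" in service:
        definition["environment"] = [
            {"name": k, "value": str(v)}
            for k, v in service["environment"].items()
        ]
    return definition

# One service from the sample compose file, as parsed YAML would give it:
compose_services = {
    "service-1": {
        "image": "km2411/flask-service1-aws:latest",
        "ports": ["8080:8080"],
        "links": ["service-2"],
        "environment": {"MONGO_HOST": "mongo", "MONGO_PORT": 27017},
    }
}
dockerrun = {
    "AWSEBDockerrunVersion": 2,
    "containerDefinitions": [
        to_container_definition(n, s) for n, s in compose_services.items()
    ],
}
```

Running this over the `service-1` entry produces a definition matching the corresponding block in the Dockerrun.aws.json above.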

  • Once the Dockerrun.aws.json file is ready, in the project directory we run
$eb init
  • From here we select the region applicable to our application, then select the application we created earlier from the console, and respond “No” for CodeCommit.
  • The project directory would look something like —
  • Now, one important thing to note here is the fully qualified name of the docker images specified in the Dockerrun.aws.json file,
“image”: “km2411/flask-mongo-aws:latest”
  • Since we need a way for AWS to pull the images before running the containers, we can either use AWS ECR or push to Docker Hub and specify those repositories.
  • When testing locally, we can build the images, tag them appropriately to match the definition file, and then push them to Docker Hub.
#first build the images locally
$docker-compose build
#tag the images created appropriately
$docker image tag service-1 km2411/flask-service1-aws
#push the image to docker hub
$docker login
$docker push km2411/flask-service1-aws
  • This only needs to be done once, manually, when testing our ECS task definition file locally to make sure it will work on the server.
  • Once all the images for the defined containers have been pushed, we can test using —
$eb local run
  • This spins up all the containers and forms the links, and we can access our application locally.
  • Once this is verified, we can create our build → test → deploy pipeline using Travis.
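Before handing the definition file to eb local run, it can help to lint it for the mistakes that bite most often — undefined link targets and missing required fields. A minimal sketch (the checks here are my own, not an official validator; ECS requires `memory` or `memoryReservation` per container):

```python
import json

# Lint an ECS task definition for common mistakes: every container needs
# name/image plus memory (or memoryReservation), and links must point at
# containers that are actually defined in the same file.
def lint_dockerrun(doc):
    errors = []
    containers = doc.get("containerDefinitions", [])
    names = {c.get("name") for c in containers}
    for c in containers:
        label = c.get("name", "<unnamed>")
        for field in ("name", "image"):
            if field not in c:
                errors.append(f"{label}: missing '{field}'")
        if "memory" not in c and "memoryReservation" not in c:
            errors.append(f"{label}: needs 'memory' or 'memoryReservation'")
        for link in c.get("links", []):
            if link not in names:
                errors.append(f"{label}: links to undefined container '{link}'")
    return errors

# Example: nginx links to "service-1", which is not defined -> one error.
doc = json.loads("""
{
  "AWSEBDockerrunVersion": 2,
  "containerDefinitions": [
    {"name": "mongo", "image": "km2411/flask-mongo-aws:latest", "memory": 200},
    {"name": "nginx", "image": "km2411/flask-nginx-aws:latest",
     "memory": 200, "links": ["service-1"]}
  ]
}
""")
errors = lint_dockerrun(doc)
```

For the real file, replace the embedded string with `json.load(open("Dockerrun.aws.json"))` and fail the build if `errors` is non-empty.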

Travis →AWS : Build and Deployment

  • First we need to give Travis permission to access our repositories. This is a fairly simple task — log in to Travis with GitHub and allow access.
  • Once Travis is linked to GitHub, it knows about our repositories, and we can activate a repository so that pushing any change automatically triggers a build.
  • Inside the settings for the respective repository, select whichever workflow you need to follow, but activate only one.
  • Next we define a .travis.yml configuration file, something as follows —
language: python
services:
  - docker
before_script: pip install docker-compose
script:
  # since there's a dependency on mongo, it will build that image also
  # although we have not added any test scenarios
  - docker-compose run service-2 sh -c "pytest"
after_success:
  - if [[ "$TRAVIS_BRANCH" == "aws_deployment" ]]; then
      docker login -u $DOCKER_USERNAME -p $DOCKER_PASSWORD ;
      docker-compose build ;
      docker tag service-1 $DOCKER_USERNAME/flask-service1-aws ;
      docker tag service-2 $DOCKER_USERNAME/flask-service2-aws ;
      docker tag mongo $DOCKER_USERNAME/flask-mongo-aws ;
      docker tag nginx $DOCKER_USERNAME/flask-nginx-aws ;
      docker push $DOCKER_USERNAME/flask-service1-aws ;
      docker push $DOCKER_USERNAME/flask-service2-aws ;
      docker push $DOCKER_USERNAME/flask-mongo-aws ;
      docker push $DOCKER_USERNAME/flask-nginx-aws ;
    fi
deploy:
  provider: elasticbeanstalk
  region: us-east-2
  app: flask-app
  env: FlaskApp-env-1
  bucket_name: elasticbeanstalk-us-east-2-126562325494
  bucket_path: flask-app
  access_key_id: $AWS_ACCESS_KEY
  secret_access_key: $AWS_SECRET_KEY
  on:
    branch: aws_deployment
  • Now, here we are simply running tests on our changes, and on success, pushing the images and handing the code to AWS for deployment.
  • The important things to note here are the environment variables —
$DOCKER_USERNAME
$DOCKER_PASSWORD
$AWS_ACCESS_KEY
$AWS_SECRET_KEY
  • Also note the info about our AWS application and environment in the “deploy” section — make sure the details match what you have in your console.
  • For the bucket info, search for “S3” under Services on the AWS Console. All applications on the free tier get some storage, and here you can see the bucket that was created for Elastic Beanstalk. The bucket path will be the same as the application name.
  • Next, for our project’s repository, we add the above environment variables in Travis — under “Settings” for that repo, in the “Environment Variables” section, specify the details and add each one, something as follows —
  • Once all this is done, we can push our branch to the repository and Travis will take it from there to build, test and deploy to AWS.
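Because the deploy step depends on the four environment variables listed above, a missing or misspelled one can fail in confusing ways. A small pre-flight check run in the test stage can save a debugging round-trip — a minimal sketch (the variable names match the ones used in the .travis.yml; everything else is illustrative):

```python
import os

# The secrets the Travis build relies on; fail fast if any is unset.
REQUIRED_VARS = (
    "DOCKER_USERNAME",
    "DOCKER_PASSWORD",
    "AWS_ACCESS_KEY",
    "AWS_SECRET_KEY",
)

def missing_vars(env=os.environ):
    """Return the names of required variables that are unset or empty."""
    return [name for name in REQUIRED_VARS if not env.get(name)]

# Illustrative partially populated environment (not real credentials):
fake_env = {"DOCKER_USERNAME": "km2411", "AWS_ACCESS_KEY": "some-key"}
```

In CI you would call `missing_vars()` with no argument and raise if the result is non-empty.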

Hope this was helpful :)
