How we deploy our branch

Cristian Covali
Published in Crunchyroll
4 min read · Feb 11, 2022

Writing code is fun. You can build great things, challenge your brain with complex problems, and of course, everything works perfectly “on your local machine.” But sometimes you need to set up your project on a colleague’s machine, and things don’t go as smoothly as they “should.”
You run into a situation where they don’t have Docker, don’t have the correct version of dependency 1 or dependency 2, or their machine doesn’t support one thing or another, and you waste time digging and debugging when all you really want is to share the feature you developed. Here at Crunchyroll, one of our code review requirements is that you perform a short QA on the code you are reviewing. That means checking out the branch, installing the dependencies, building the project, and starting it, just to verify that the color/width/height of a button is correct.

Deploy your code instead of setting it up

Knowing what the problem is is the first step toward solving it. We want to share the functionality/feature we develop with someone else (QA/PM/DEV) with minimal effort. So instead of setting up the project on their machine, we decided to deploy our code and make it accessible through something as simple as a browser.

Choosing the right tools to deploy your code

There are many tools for writing your infrastructure as code: AWS CloudFormation, Red Hat Ansible, Chef, Puppet, SaltStack, HashiCorp Terraform, etc. We decided to use Terraform for this task, mainly because it is open-source, community-driven, and has a very elegant syntax.
Also, since we work very closely with AWS, we decided to use ECS as the container engine for our branch environments and ECR as the container registry. It’s the era of containers anyway. Building and pushing the container image can be done with any CI, so the smartest choice was to simply use what we already have for unit tests, linters, and other code quality checks.
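As a rough sketch of what a per-branch Terraform module could contain (the resource names, the `branch` and `subnet_ids` variables, and the image URL are hypothetical placeholders, not our actual configuration):

```hcl
variable "branch" {
  type        = string
  description = "Branch name used to namespace this environment"
}

variable "subnet_ids" {
  type        = list(string)
  description = "Subnets the branch service runs in"
}

# A shared cluster hosting all branch environments.
resource "aws_ecs_cluster" "branch_envs" {
  name = "branch-environments"
}

# Each branch gets its own task definition pointing at its own image tag.
resource "aws_ecs_task_definition" "app" {
  family                   = "app-${var.branch}"
  requires_compatibilities = ["FARGATE"]
  network_mode             = "awsvpc"
  cpu                      = 256
  memory                   = 512
  container_definitions = jsonencode([{
    name         = "app"
    image        = "ACCOUNT.dkr.ecr.REGION.amazonaws.com/app:${var.branch}"
    portMappings = [{ containerPort = 80 }]
  }])
}

resource "aws_ecs_service" "branch_env" {
  name            = "app-${var.branch}"
  cluster         = aws_ecs_cluster.branch_envs.id
  task_definition = aws_ecs_task_definition.app.arn
  desired_count   = 1
  launch_type     = "FARGATE"

  network_configuration {
    subnets = var.subnet_ids
  }
}
```

Namespacing every resource with the branch name is what lets many environments coexist in the same account without colliding.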

Let's put things together

Since we know which tools we want to use, we need to orchestrate them. First, we need a build job in our project’s CI pipeline. Project configuration can be handled in different ways: files, volumes, environment variables. The simplest option when working with containers is environment variables, which works perfectly when you don’t have many parameters.
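As an illustration, injecting configuration through environment variables at container start looks like this (the image name and variables are hypothetical):

```shell
# Configuration is injected at run time, so the same image
# can be reused for every branch environment.
docker run \
  -e API_BASE_URL="https://api.example.com" \
  -e BRANCH_NAME="feature/new-button" \
  my-app:latest
```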

build_and_push_image:
  steps:
    - aws-ecr/build-and-push-image:
        account-url: AWS_ECR_ACCOUNT_URL_ENV_VAR_NAME
        aws-access-key-id: ACCESS_KEY_ID_ENV_VAR_NAME
        aws-secret-access-key: SECRET_ACCESS_KEY_ENV_VAR_NAME
        dockerfile: myDockerfile
        region: AWS_REGION_ENV_VAR_NAME
        tag: 'latest,myECRRepoTag'

One extra job we have in our flow synchronizes our static files. After that, we register our image in ECR, and finally we apply our Terraform files to deploy the environment.

sync_static_assets:
  steps:
    - checkout
    - docker_image_load
    - docker_create:
        img_name: app
    - run:
        name: Extract build files
        command: docker cp app:/app/build build
    - aws-cli/setup:
        aws-access-key-id: ACCESS_KEY_ID_ENV_VAR_NAME
        aws-secret-access-key: SECRET_ACCESS_KEY_ENV_VAR_NAME
        aws-region: AWS_REGION_ENV_VAR_NAME
    - aws_s3_sync:
        from: build
        to: s3://AWS_BUCKET_NAME/$BRANCH

We need to ensure that each environment doesn’t conflict with the others. We can use Terraform workspaces for that, or look at tools like Terragrunt, which offers more flexibility when working with multi-environment Terraform modules.
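With plain Terraform workspaces, per-branch isolation might look like this (a sketch; the `$BRANCH` variable and the `branch` Terraform variable are assumptions):

```shell
# Select the workspace named after the branch, creating it on first deploy,
# so each branch gets its own Terraform state and its own resources.
terraform workspace select "$BRANCH" || terraform workspace new "$BRANCH"
terraform apply -auto-approve -var "branch=$BRANCH"
```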

terraform_apply:
  steps:
    - checkout
    - aws-cli/setup:
        aws-access-key-id: ACCESS_KEY_ID_ENV_VAR_NAME
        aws-secret-access-key: SECRET_ACCESS_KEY_ENV_VAR_NAME
        aws-region: AWS_REGION_ENV_VAR_NAME
    - run:
        name: Deploy infrastructure into AWS
        command: terragrunt apply -auto-approve

In the end, our workflow is going to look like this:

deploy:
  - build_and_push_image
  - sync_static_assets
  - terraform_apply

Make sure to clean up things

Now that we have our environment up and running, we can easily share it with anyone who needs to look at the feature we implemented. But we’re not finished yet. Once our feature is tested, approved, and merged, we no longer need the environment we deployed, so we need to orchestrate an undeploy pipeline. The easiest way to achieve this is via our VCS webhooks: once a pull request is merged, a webhook triggers a pipeline that undeploys the environment and cleans up the static files.
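One way to wire this up (a sketch, not our exact setup: the project slug, the `CIRCLE_TOKEN` variable, and the `undeploy` pipeline parameter are assumptions) is for the merge webhook handler to call the CircleCI v2 API and trigger an undeploy pipeline for the branch:

```shell
# Trigger an "undeploy" pipeline run for the merged branch
# via the CircleCI v2 pipeline API.
curl -X POST "https://circleci.com/api/v2/project/gh/my-org/my-repo/pipeline" \
  -H "Circle-Token: $CIRCLE_TOKEN" \
  -H "Content-Type: application/json" \
  -d "{\"branch\": \"$BRANCH\", \"parameters\": {\"undeploy\": true}}"
```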

undeploy:
  - terraform_destroy
  - remove_static_files
  - unregister_ecr_image
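The cleanup steps above roughly correspond to commands like these (the bucket and repository names are hypothetical):

```shell
# Destroy the branch infrastructure managed by Terragrunt.
terragrunt destroy -auto-approve

# Remove the branch's static files from S3.
aws s3 rm "s3://AWS_BUCKET_NAME/$BRANCH" --recursive

# Delete the branch's image tag from ECR.
aws ecr batch-delete-image \
  --repository-name my-app \
  --image-ids imageTag="$BRANCH"
```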

Future plans

Having an environment deployed for each of our branches allows us to implement features like:

- running automation tests on commit level;
- performing lighthouse automated audits;
- writing custom performance audits.
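For example, an automated Lighthouse audit against a branch environment could be as simple as the following (the per-branch URL pattern is hypothetical):

```shell
# Run a headless Lighthouse audit against the branch environment
# and save the report as JSON for later comparison.
npx lighthouse "https://$BRANCH.example.com" \
  --chrome-flags="--headless" \
  --output=json \
  --output-path="lighthouse-$BRANCH.json"
```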

This opens the path toward a healthier and safer development cycle, and we are actively thinking about new areas of improvement. Thank you for reading, and I hope this experience helps you deploy your own code.
