AWS CodeStar with multiple deployment environments

Uri Brecher
6 min read · Mar 19, 2019

AWS CodeStar is a great project management service. It lets you start new projects from a large selection of templates and gets you up and running quickly with some nice boilerplate code. You also get a full CI/CD pipeline, built on the AWS CodePipeline service, that automates building, testing and deploying.

CodePipeline is handy and rather easy to use, and it lets you integrate external CI/CD tools such as Jenkins if you prefer to work with them and don’t mind a hybrid solution. The vanilla pipeline that comes with the CodeStar boilerplate code is a basic setup consisting of three stages:

  • source — pull from the revision control branch (triggered automatically by the version control system each time a commit is pushed to that branch).
  • build — compile the code and run the automated tests.
  • deploy — deploy the stack using AWS CloudFormation.

CodeStar supports the Serverless Application Model (SAM) out of the box for serverless projects. SAM simplifies the process of defining your serverless cloud resources.

I found one big gap in this ecosystem: the whole project has a single deployment environment. That was fine while my work was in the development phase, but once I went into production something was missing. I could of course run all my code through the automated unit tests, but working with a single deployment tagged as “production” means you cannot run end-to-end tests BEFORE deployment, so I looked for a way to create a second deployment environment and streamline the development process.

My first attempt was to add another stage to the CI/CD pipeline. I thought it would be nice to have a “deploy to test” stage before the original deploy stage that deploys into production.

single pipeline with two deployment environments

It was quite straightforward and worked well, until a sneaky bug found its way into the production environment even though the code had been thoroughly tested with end-to-end tests and some manual tests! When that happened I had to narrow down the buggy code (in the testing environment); sometimes it was a quick code fix, but occasionally it meant a separate development branch with integration points.

Sometimes it would be a quick “revert last commit”. In that case you have to revert the code to a previous commit, deploy to test (to make sure the problem is solved there), and then deploy to production. When the bug is obvious, going through the whole process of deploying to test first and only then to production felt a bit tedious and redundant, though it does enforce a “best practice” process.

At some point this simple CI/CD process felt a bit constraining and so I decided to go a slightly different route and provision a separate CI/CD pipeline for each environment instance.

one pipeline for each deployment environment

Since CodeStar automatically creates one CI/CD pipeline, and the whole pipeline definition is stored in a special CloudFormation template that I do not have full control over, I decided to leave it be. I focused on creating a second pipeline that looks very similar to the first, except for two main differences:

  • the second pipeline pulls code from a different branch (an integration branch).
  • the deployment stage uses a different set of environment variables. These environment variables control the deployment behaviour; I will explain that in a few paragraphs.

So now we have two pipelines, each tied to a different branch. When I want to try something in the test environment I push my changes to the integration branch. When I want to deploy to production I push to the production branch. Usually I stick with the original process: all changes are first pushed to the integ branch, then I merge from integ into the prod branch and finally push the prod branch (to trigger a deployment).

Since each environment now follows a different branch, the environments are allowed to diverge in all the ways a version control system allows. I can decide to cherry-pick merges into the prod branch and defer unfinished features to later integrations. Developers usually start a development branch from a commit on the integ branch, and they are expected to test their code before merging back into integ so that branch stays as stable as possible. The integ branch is mainly used for end-to-end testing, beta testing and other manual tests.

Working with one pipeline per environment also scales better. If I want a third environment, or even a dedicated environment for each developer, I can add one quite easily and the operational costs are rather low:

  • Serverless resources are billed per usage
  • AWS codePipeline costs $1 per active pipeline per month (March 2019).

Using multiple environments with the first approach (adding a deployment stage) makes the pipeline cumbersome and ties together processes that are usually orthogonal.

Controlling the deployment with environment variables

First, why would we even want to do that? If you look closely at the diagram you will see a box labelled “other cloud resources”. The whole point of microservices is that they are micro, meaning each one is just one part of a bigger whole. In a real-world example a small system would consist of a few microservices and a large one would have a few dozen.

Having multiple deployment environments for each microservice can take the dependency management challenge to a whole new level of complexity. One way to tackle this challenge is to assume complete isolation between deployment environments: a test system is completely isolated from the production environment, and so on.

I adopted a namespace mechanism where the endpoints of all microservices are prefixed with the environment name. Here is an example:

example.com/test/service1
example.com/test/service2
example.com/test/service3
example.com/prod/service1
example.com/prod/service2
...
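Because the test endpoints are addressable on their own, you can point end-to-end tests at the test environment before anything is promoted to prod. Here is a minimal sketch in Python, assuming a hypothetical health-check route on service1 (the route and response format are illustrative, not part of the original project):

import json
import urllib.request

BASE_URL = "https://example.com/test"  # target the test environment only

def test_service1_health():
    # Call a hypothetical health-check route on the test deployment.
    with urllib.request.urlopen(BASE_URL + "/service1/health") as resp:
        assert resp.status == 200
        body = json.loads(resp.read())
        assert body.get("status") == "ok"

if __name__ == "__main__":
    test_service1_health()
    print("end-to-end smoke test passed")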

Inside the backend’s codebase I have a single text file that contains all service endpoints, but with a place-holder instead of the environment name:

example.com/<deploy-env>/service1
example.com/<deploy-env>/service2
example.com/<deploy-env>/service3
...

My code then reads this text file and replaces the deploy-env place-holder with the specific deployment environment the code is running in. The concrete environment name is read from an environment variable. With SAM you can pass CodePipeline deployment parameter values down to the Lambda functions as environment variables with the following template.yml snippet:

Parameters:
  projectId:
    Type: String
    Description: CodeStar projectId used to associate new resources to team members
  AWSEnvironment:
    Type: String
    Description: can be either test or prod

Globals:
  Function:
    Environment:
      Variables:
        AWS_ENV: !Ref AWSEnvironment

In this YAML snippet you can see that I added an ‘AWSEnvironment’ parameter, and then under Globals/Function/Environment/Variables I add an environment variable that references the template parameter. I put it in the Globals section to make sure the same environment variable is shared among all of the microservice’s functions.
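To make this concrete, here is a minimal sketch of the substitution logic in Python, assuming a Python Lambda runtime and a hypothetical endpoints.txt file packaged with the function code (both are my assumptions, not part of the CodeStar boilerplate):

import os

# Deployment environment injected by SAM via Globals/Function/Environment/Variables,
# e.g. "test" or "prod".
AWS_ENV = os.environ.get("AWS_ENV", "test")

def load_endpoints(path="endpoints.txt"):
    """Return the service endpoints for the environment this code runs in."""
    with open(path) as f:
        templates = [line.strip() for line in f if line.strip()]
    # Replace the <deploy-env> place-holder with the concrete environment name.
    return [t.replace("<deploy-env>", AWS_ENV) for t in templates]

# With AWS_ENV == "test" this yields, for example:
#   example.com/test/service1
#   example.com/test/service2
#   ...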

All that is left is to go to the right place in CodePipeline and manually set the AWSEnvironment parameter for each pipeline. Here is a short walkthrough (a scripted alternative is sketched after the list):

  1. Open your pipeline in CodePipeline and press the Edit button.
  2. Go to the deploy stage and edit that stage.
  3. Edit the createChangeset action (the one with the small pencil-and-paper icon).
  4. Inside the createChangeset action, open the Advanced section.
  5. “Parameter overrides” is where I put the AWSEnvironment value; the field takes a JSON object, so for the test pipeline it is {"AWSEnvironment": "test"} and for production {"AWSEnvironment": "prod"}. In the screenshot you can see the test environment.
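If you would rather script this than click through the console, the same override can be set with boto3. This is a rough sketch under a couple of assumptions: the pipeline name is whatever CodeStar generated for your project, the deploy action is the CloudFormation change-set action described above, and the helper name is mine:

import json
import boto3

codepipeline = boto3.client("codepipeline")

def set_environment_override(pipeline_name, environment):
    """Set the AWSEnvironment parameter override on the CloudFormation change-set action."""
    pipeline = codepipeline.get_pipeline(name=pipeline_name)["pipeline"]
    for stage in pipeline["stages"]:
        for action in stage["actions"]:
            cfg = action.get("configuration", {})
            # Find the CloudFormation action that creates the change set.
            if (action["actionTypeId"].get("provider") == "CloudFormation"
                    and cfg.get("ActionMode") == "CHANGE_SET_REPLACE"):
                overrides = json.loads(cfg.get("ParameterOverrides", "{}"))
                overrides["AWSEnvironment"] = environment
                cfg["ParameterOverrides"] = json.dumps(overrides)
                action["configuration"] = cfg
    codepipeline.update_pipeline(pipeline=pipeline)

# Hypothetical usage: point the test pipeline at the test environment.
# set_environment_override("my-project-test-pipeline", "test")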

Summary

CodeStar is a great service for getting up and running quickly with serverless and microservices, but it lacks native support for multiple deployment environments, a gap you can easily bridge using some of the suggestions in this article.

Thank you for reading all the way through, I hope you enjoyed it. I appreciate any form of feedback.
