A Comprehensive Guide to Running GitLab on AWS

A series of posts that will examine why we chose GitLab at Alchemy and teach you how to automate your own installation on AWS using Terraform, Packer, and Ansible

Alois Barreras
6 min read · Apr 18, 2018

Today, we are starting a series of blog posts on why we chose GitLab as our primary CI/CD tool and how you can run your own installation. If you haven’t used it before, GitLab is an open source git repository management tool that includes many nice features such as issue tracking, CI/CD, wikis, etc.

At Alchemy, we use GitLab as our primary tool to manage and orchestrate application deployments that power some of the world’s largest brands. In this intro post, we’ll discuss what Alchemy is and why GitLab suits our needs. In the rest of the series, we will cover the following topics:

  1. Architecture Overview of GitLab on AWS
  2. Automating running GitLab on AWS with Terraform, Packer, and Ansible
  3. More to come soon

Let’s get started.

What is Alchemy?

I joined Alchemy earlier this year to help create a modern cloud and data infrastructure at Procter and Gamble (P&G). Our company’s goal is to build the tools that will help deploy and monitor software for the world’s largest consumer goods company. There are several projects in this initiative, spanning from a data streaming pipeline to centralized logging with ELK to an internal PaaS, among others. During the early days, as we were scribbling on whiteboards planning what we were going to build, everything was great. We knew what problems we were solving, we had a clear vision of how to get there, and we were excited to start coding and deploying software. However, we knew that as we grew and more teams and systems were added to the stack, things could easily get out of hand.

There are dozens upon dozens of departments and teams within P&G, each with its own technology stack, business requirements, and forward-looking goals. There are projects spanning AI, big-data skunkworks, consumer-facing applications, internal tooling and testing, and more. Therefore, we knew we needed a rock-solid process and stable infrastructure flexible enough to satisfy these diverse requirements, because our job was to unify widely varied workflows across the organization.

However, large IT organizations do not always like change. We knew whatever we built had to be easy to use and look great so we wouldn’t have to go through the headache of begging other teams to migrate to our platform. We wanted to build something better than what anyone had so they would come willingly.

Finding the right solution

At my previous companies, I had mainly used CircleCI, and I have nothing but great things to say about the product. It’s simple to use, looks great, and does everything I want. However, at Alchemy we work for P&G and any external tool we use typically goes through a very long process where legal and business teams talk to each other, and at the end of it we may not even get approval. Therefore, CircleCI was not an option we wanted to pursue because we did not want to risk investing a lot of time into a tool we might not get approved to use.

So what to use, then? The other members of the platform team at Alchemy and I have a philosophy that any new tool or piece of software we adopt should be:

  • Open source
  • Backed by an active community
  • Easy to run and manage
  • User friendly

In addition to sticking to our philosophy, we had other requirements for our CI/CD pipeline. First, the builders need to be able to run on Windows, because many applications at P&G are written in traditional ASP.NET (not .NET Core) and can only be built on Windows machines. Second, builders need to be able to dynamically scale up and down on Kubernetes; there’s a huge cost savings in running builds in ephemeral Docker containers rather than maintaining dedicated builder VMs. Finally, we needed to be able to easily deploy applications to Kubernetes. We are moving as many applications as we can to Kubernetes, so this was a must.
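As an aside on that second requirement: GitLab Runner ships a Kubernetes executor that launches each build job in its own ephemeral pod. A minimal runner `config.toml` might look like the sketch below — the URL, token, namespace, and resource values are illustrative placeholders, not our actual configuration:

```toml
concurrent = 10

[[runners]]
  name = "k8s-runner"
  url = "https://gitlab.example.com/"   # placeholder GitLab instance
  token = "RUNNER_TOKEN"                # placeholder registration token
  executor = "kubernetes"
  [runners.kubernetes]
    namespace = "gitlab-runners"        # pods are created and destroyed here
    cpu_request = "500m"
    memory_request = "512Mi"
```

With this, build capacity scales with the cluster instead of with a fleet of always-on builder VMs.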

After evaluating some of the major players in the space that satisfied our requirements (Jenkins, Concourse, GoCD, etc.), we were still unsatisfied. We were ready to throw in the towel and just go with GoCD (it’s still a pretty good product), but then we saw that GitLab has an integrated CI/CD feature in its platform. It hadn’t crossed our minds before because it is not technically a standalone CI/CD product, but it checks all our boxes, with the added bonus that the runners are written in Go (we love Go at Alchemy). So we created a GitLab account and made a hello-world repo that we would try to Dockerize and deploy to AWS using GitLab’s CI/CD pipeline.

Long story short: we loved it. The UI is sexy, it’s super user friendly, and it just works. You can even define your build pipelines in YAML, which we much prefer to writing Groovy in Jenkins ¯\_(ツ)_/¯. But a hello-world application is not a very rigorous test of features. Alchemy has a Scala monorepo, so we decided to see how easy it would be to orchestrate deploying a couple of packages from it. Here’s what the final pipeline looks like for deploying a couple of serverless applications and promoting them through our different environments.

You can see here that the CI/CD pipeline starts by building the applications in the monorepo. It then branches out into testing the individual applications, and branches further into the different environments (dev, stage); these jobs run based on the branch rules you specify in your .gitlab-ci.yml file. At Alchemy, anything on develop goes to our dev environment, and anything merged to master is promoted to a staging environment. My favorite part is the final step of the pipeline, where you can specify a when: manual option that requires a human to log into GitLab and click the “play” button to trigger the deployment to production. This gives us accountability: at least two engineers need to sign off on the deployment and post approval in the issue/feature/release discussion before someone clicks the button.
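A stripped-down .gitlab-ci.yml implementing this flow might look like the following. This is a sketch, not our actual pipeline: the job names, the sbt commands, and the deploy.sh script are placeholders standing in for whatever your build and deploy steps are.

```yaml
stages:
  - build
  - test
  - deploy

build:
  stage: build
  script:
    - sbt compile          # placeholder build command

test:
  stage: test
  script:
    - sbt test             # placeholder test command

deploy_dev:
  stage: deploy
  environment: dev
  script:
    - ./deploy.sh dev      # placeholder deploy script
  only:
    - develop              # anything on develop goes to dev

deploy_stage:
  stage: deploy
  environment: stage
  script:
    - ./deploy.sh stage
  only:
    - master               # anything merged to master goes to staging

deploy_prod:
  stage: deploy
  environment: production
  script:
    - ./deploy.sh prod
  when: manual             # a human must click "play" in the GitLab UI
  only:
    - master
```

The when: manual line on the production job is what puts a human gate in front of the final promotion.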

Granted, I’m sure you could configure some other CI/CD tools to achieve similar functionality. I’ve seen plenty of plugins for Jenkins that can make the UI look nicer, add manual actions, etc. However, what we love about GitLab is that it has these features baked in. Alchemy engineers don’t want to spend time finding the right plugins, figuring out what is compatible with what, and coaxing everything to play nicely together. We want it to work right out of the box so we can start shipping code.

Furthermore, GitLab manages source control, issue tracking, CI/CD, and more all in one product. I’ve worked on teams that track issues in Trello, host code in GitHub, run tests with Jenkins, plan features in Aha!, and on and on. As an engineer, having half a dozen tools and logins to learn how to use and remember is just extra mental baggage I don’t want to have. These are all great tools and lots of amazing teams use them, but for our needs, GitLab provides a seamless experience that achieves everything we need with one tool. In my mind, there was a clear winner. Alchemy was now using GitLab.

Sounds great, but how?

If our situation sounds familiar and you want to learn how to run your own installation of GitLab, head over to Part 1 of our series, Architecture Overview of GitLab on AWS, to see a high-level overview of what you’re going to build.

Alchemy is always hiring great engineers! If you are excited about building great software with some of the world’s largest brands, email alois@alchemy.codes.


Alois Barreras

I don’t have many original ideas of my own, but I do a pretty good job of recognizing and using the ideas of others in innovative ways.