Learn about & Set Up CI/CD using BitBucket Pipelines

Mayowa O. Ojo
5 min read · Jan 18, 2024


Hello dear reader, welcome to another interesting piece in the cloud and DevOps world!

Now,

Question: What is a DevOps engineer without CI/CD?

Answer: Just a person making merges and manual deployments. That’s a CI/Could-Do-better Engineer!

A DevOps engineer should know their way around pipelines, and your interest in this article tells me that you want to.

CI/CD is very important in DevOps because it automates integrating code changes, running tests, and deploying software applications. Continuous Integration brings a collaborative, automated approach to the software development process: when several people contribute to a shared repository, the pipelines, as configured, detect integration issues and inconsistencies across environments early and give contributors immediate feedback. Continuous Deployment, in turn, automatically delivers validated code changes to test and production environments.

Basically, Continuous Integration (CI) focuses on code integration and automated testing (build/test), while Continuous Deployment (CD) extends CI by focusing on the actual delivery of the code (release/deploy).
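To make that split concrete, here is a minimal sketch of a bitbucket-pipelines.yml; the step names and the echo placeholder are illustrative, not from a real project. The first step is the CI half, the second is the CD half:

image: node:14

pipelines:
  default:
    # CI: build and test on every push
    - step:
        name: build and test
        script:
          - npm install
          - npm test
    # CD: release the validated build to an environment
    - step:
        name: deploy
        deployment: test
        script:
          - echo "Deploying to the test environment"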

For this article, let us take a look at one way of creating CI/CD pipelines using Bitbucket.

To set up pipelines for your code on Bitbucket, the first thing you have to do is enable Pipelines on the repository you intend to work on, so that any pipeline file you push there runs automatically.

On the left side of your repo, click Repository settings. Then, in the same tab, scroll down to the Pipelines menu and click Settings. There you will see the option to enable Pipelines; toggle it, and voila, you just enabled your pipelines.

(Screenshot: Enabling pipelines)

Next up, you have to specify the different environments to which you will be deploying. Most organizations have three or more deployment environments, for example:

(a). test environment: This is where developers test their changes first. Feature branches are usually deployed to test environments.

(b). staging environment: This is a pre-production environment for testing changes before deploying to production. I like to think of it as where people like code testers, product managers, security teams, business stakeholders, and the like, review software behavior.

(c). production environment: This is the live environment where the application is accessible to the end-users or consumers. It is critical to ensure that the software or application is available, stable, and performs very well.

However, custom environments are sometimes created depending on the needs of an application, so deployment environments are not limited to just those three. You can set up as many deployment environments as are needed. If you have a very simple setup, you can use just test and production; if you have a more complicated setup, you can use as many test environments as you need.

Below is the complete bitbucket-pipelines.yml for this article:

# Validate your yml file here: https://bitbucket-pipelines.prod.public.atl-paas.net/validator
image: node:14

options:
  docker: true

definitions:
  services:
    docker:
      memory: 2048

pipelines:
  default:
    - step:
        services:
          - docker
        script:
          - npm install
          - npm test
  branches:
    master:
      - step:
          name: Sonar
          image: aneitayang/aws-cli:1.0
          script:
            - make sonar
      - step:
          name: build image
          script:
            - npm install
            - npm test
      - step:
          name: deploy to dev
          deployment: test
          script:
            - export ENV=dev
            - make dev
            - echo "Deploying to Dev environment"
      - step:
          name: deploy to staging
          deployment: staging
          trigger: manual
          script:
            - export ENV=staging
            - make staging
            - echo "Deploying to Staging environment"
      - step:
          name: deploy to production
          deployment: production
          trigger: manual
          script:
            - export ENV=live
            - make deploy
            - echo "Deploying to Production environment"
    feature/*:
      - step:
          name: Sonar
          image: aneitayang/aws-cli:1.0
          script:
            - make sonar
      - step:
          script:
            - make push
            - export ENV=dev
            - make dev

Now, to analyze each part of this pipeline file:

(1). The image and Options

image: node:14

options:
  docker: true

For the image above, I am using the Node.js 14 Docker image, which is suitable for a Node.js application. Docker is also enabled with (options: docker: true), which makes the Docker service available to every step of the pipeline so that steps can build and run containers.
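To see what that enables in practice, here is a small sketch of a step that builds and smoke-tests a container image (my-app is a hypothetical image name); without Docker enabled, the docker commands would fail:

pipelines:
  default:
    - step:
        script:
          # docker commands work here because of options: docker: true
          - docker build -t my-app:latest .
          - docker run --rm my-app:latest npm test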

(2). Service Definition

definitions:
  services:
    docker:
      memory: 2048

I defined my Docker service with 2048 MB of memory (memory: 2048) to handle potential resource requirements.
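If a build needs even more headroom, Bitbucket also lets you double a step's resource allowance with size: 2x, at the cost of double build minutes. A minimal sketch, assuming a memory-hungry Docker build (my-app is again a hypothetical image name):

pipelines:
  default:
    - step:
        size: 2x            # doubles the memory available to this step
        services:
          - docker
        script:
          - docker build -t my-app .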
(3). Pipelines Structure

pipelines:
  default:
    - step:
        services:
          - docker
        script:
          - npm install
          - npm test
  branches:
    master:
      - step:
          name: Sonar
          image: aneitayang/aws-cli:1.0
          script:
            - make sonar
      - step:
          name: build image
          script:
            - npm install
            - npm test
      - step:
          name: deploy to dev
          deployment: test
          script:
            - export ENV=dev
            - make dev
      - step:
          name: deploy to staging
          deployment: staging
          trigger: manual
          script:
            - export ENV=staging
            - make staging
      - step:
          name: deploy to production
          deployment: production
          trigger: manual
          script:
            - export ENV=live
            - make deploy

The default pipeline runs on pushes to any branch that does not have its own definition; since master and feature/* branches are configured explicitly, pushes to them run their specific pipelines instead.
For the master branch, there are multiple steps representing different stages:
• Sonar Step: uses a custom Docker image (aneitayang/aws-cli:1.0) and executes the make sonar command for SonarQube analysis.
• Build Image Step: installs dependencies (npm install) and runs tests (npm test).
• Deploy Steps: the make dev, make staging, and make deploy commands assume there is a Makefile holding the deployment logic; the environment variables exported in each step (export ENV=dev, export ENV=staging, export ENV=live) are read by it to deploy to development, staging, and production respectively (see the Makefile sketch after this list).
• The deployments to staging and production are set to trigger: manual, providing control over when these deployments occur. This means that you have to go to your remote repo's Pipelines page to activate deployments to these environments specifically.
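Since the pipeline leans on that Makefile, here is a purely hypothetical sketch of what it could look like. Every recipe is a placeholder for your real commands; ENV is the variable exported in the pipeline steps, and BITBUCKET_COMMIT is a variable Bitbucket sets automatically on every run:

# Hypothetical Makefile; replace the recipes with your real commands.
# Note: make recipes must be indented with a tab character.
IMAGE ?= my-app                  # hypothetical image name

sonar:
	sonar-scanner                # assumes a SonarQube scanner is installed

push:
	docker build -t $(IMAGE):$(BITBUCKET_COMMIT) .
	docker push $(IMAGE):$(BITBUCKET_COMMIT)

dev staging:
	./scripts/deploy.sh $(ENV)   # hypothetical deploy script, reads ENV

deploy:
	./scripts/deploy.sh $(ENV)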

Moving on,

(4). Feature Branches:

feature/*:
  - step:
      name: Sonar
      image: aneitayang/aws-cli:1.0
      script:
        - make sonar
  - step:
      script:
        - make push
        - export ENV=dev
        - make dev

• There’s a specific configuration for feature branches, signified by “feature/*”. Recall that the asterisk (*) is a wildcard symbol; it is used here as a glob pattern to match branches whose names start with “feature/”. This pattern allows you to define a set of pipeline steps that will be executed specifically for branches following that naming convention, e.g. feature/add-new-code, feature/change-name, etc.
• A SonarQube analysis is performed.
• Then the image is built and pushed (make push) and deployed to the development environment (make dev).
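As an aside, if other branch families should ever share these steps, Bitbucket's glob patterns also accept a comma-separated list in braces. A small sketch (bugfix/* is just an illustrative second prefix):

pipelines:
  branches:
    '{feature/*,bugfix/*}':
      - step:
          script:
            - make sonar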

Accessing Credentials for Different Environments.

It is important to note that credentials and environment variables may vary across environments, so you might have to configure your variables and fetch their values from your secrets manager. In this case, I decided to use a Makefile to extract the values of my variables into my deployments (I will be writing about Makefiles soon, stay tuned). You can also refer to my article “Managing ENV Variables on Hashicorp Vault” to understand how you can pass values from your secrets manager to your deployment environments.
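If you prefer to keep secrets in Bitbucket itself, you can also define secured variables per deployment environment (under Repository settings > Deployments) and reference them in the step bound to that environment. A sketch, where DEPLOY_TOKEN is a hypothetical variable name:

pipelines:
  branches:
    master:
      - step:
          name: build and test
          script:
            - npm install
            - npm test
      - step:
          name: deploy to production
          deployment: production
          trigger: manual
          script:
            - export ENV=live
            # DEPLOY_TOKEN is injected from the production deployment variables
            - make deploy DEPLOY_TOKEN=$DEPLOY_TOKEN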

Thank you for reading, I had a wonderful time writing this.

Kindly clap and follow me for more updates and articles, thank you again, dear reader!

The Cloud Fairy 🌩️🧚‍♀️👩‍💻
