Level Up Your Development Workflow with Continuous Delivery

Karen White
BigCommerce Developer Blog
Feb 20, 2019 · 12 min read

September 13, 2019 — We’ve now published CD Pipelines Part 2: How to Add Performance Testing and Utility Scripts to Your Deployment. Once you’ve mastered the basics of continuous integration, head over to learn how to add performance testing with Google’s Lighthouse tool and run a cleanup script to make room for your theme deployment.

Software teams today move quickly, automating testing and deployment to get changes into production as they happen. Instead of bundling changes into periodic major releases, teams deploy and test small changes automatically, shortening delivery time and getting software into users' hands right away. That's essential for incorporating user feedback into your next development cycle.

A continuous delivery pipeline is the workflow that automates moving code changes to production, and it’s part of a larger set of principles about how software teams manage updates. Based on Agile software development and DevOps, continuous integration, continuous delivery, and continuous deployment all represent practices that help organizations achieve lightweight and flexible development.

If you’re curious about how continuous delivery could help your team move quickly and deliver new features with less risk, keep reading. We’ll go through the core principles of continuous delivery and discuss how to set up a Bitbucket pipeline to automate theme releases using Stencil CLI. Finally, we’ll examine how BigCommerce Partner agency The Zaneray Group puts continuous delivery into practice to manage deployment at scale.

Continuous Delivery Key Concepts

Continuous delivery introduces automation into the process of deploying code changes from version control to your hosting platform, usually with the added step of testing. You can build and push to your hosting platform when merging a feature branch or after individual commits, but to maintain the advantages of continuous delivery, it’s best to build often and push changes to staging on a regular basis.

You may have come across two related terms that sound similar but represent different practices on the delivery spectrum: continuous integration and continuous deployment. We'll compare all three to characterize the differences.

Continuous Integration vs Continuous Delivery vs Continuous Deployment

Continuous integration is the practice of frequently merging changes from local development into your code base. The goal is to prevent untested changes from building up; ideally, when changes are merged, the software should be built to make sure that everything works as expected.

Continuous delivery takes that a step further by automating the process of deployment to either a staging or production environment, usually incorporating automated testing. With continuous delivery, you may still manually run some processes within your pipeline.

Continuous deployment automates deployment to production, fully end to end. Where continuous delivery might automate some processes but take a more manual approach to production deployments, continuous deployment fully automates the entire process.

Pipeline Stages

A typical continuous delivery pipeline manages code changes as they move from a version control system to the production server. Let’s outline the key touchpoints along the delivery pipeline:

  1. Version Control.
    Many teams manage their code bases through version control software based on Git, which is an open source distributed version control system. There are a number of code management tools out there based on Git workflows, like GitHub, Bitbucket, and GitLab.
    There are two features that have made Git the dominant version control system: the ability to clone, or pull down, a local working copy of a repository, and the ability to create branches within a code base. Branches allow you to isolate and test your changes in a separate area of your repository, leaving the master branch as the record of truth for your code base. When you're satisfied with the changes on a feature branch, you merge it back into master (see the Git sketch after this list).
    Version control systems also allow you to see a complete history of all of the changes that have been made to a repository. This makes it easy to run diff checks and, if necessary, roll back changes that have caused problems.
  2. Staging.
    A staging environment mirrors the environment that your code will run in when it’s fully live, but it’s isolated from the live environment so you can run tests. In the context of building a website, your staging environment would look like a copy of the live site; if you were building an application, your staging environment would be a place where you can push a test build of your application to your host server.
  3. Production.
    The production environment is the live version of your website or app. Crucially, it's where your users actually interact with your code — so it's important to QA any changes before they reach this point.
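
To make that branch-and-merge workflow concrete, here's a minimal sketch in plain Git commands (the repository URL, branch name, and commit message are placeholders):

# Clone a local working copy and branch off of master
git clone https://github.com/your-org/your-repo.git
cd your-repo
git checkout -b feature/new-header

# Commit changes in isolation on the feature branch
git add .
git commit -m "Add new header"

# Compare the feature branch against master, then merge it back in
git diff master
git checkout master
git merge feature/new-header

# If a merged change causes problems, roll it back without rewriting history
git revert <commit-sha>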

Pipeline Automation

Continuous delivery pipelines automate deployments by executing a YAML file that tells the pipeline what environment to run the build in and which steps to take to deploy the application. In this blog post, we’ll focus on CD pipeline automation in Bitbucket, but the principles carry over to pipeline setups in other code management systems.

Bitbucket pipelines run builds within Docker containers, which are defined by Docker images. As described in the Docker documentation:

“An image is an executable package that includes everything needed to run an application — the code, a runtime, libraries, environment variables, and configuration files.”

At runtime, a Docker image generates an instance of a container: a lightweight, isolated environment in which to run an application. The advantage of containerization is that it gives you a portable, consistent environment for running applications, similar to a VM but with far less overhead.
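
To see that image-to-container relationship in action, you can start a throwaway container directly from one of the official Node images (assuming Docker is installed locally):

# Pull the node:8 image if it isn't cached, start a container from it,
# print the Node version, and remove the container on exit
docker run --rm node:8 node -v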

Let’s look at a simple example of a Bitbucket build file (bitbucket-pipelines.yml) which uses the default Bitbucket Docker image configured to run Node.js:

image: node:7.9.0
pipelines:
  default:
    - step:
        script:
          - node -v

At the top of the file, we specify the Docker image (and with it, the version of Node.js) the build will run in. The build script accepts a sequence of bash commands to execute at runtime. In this case, we're simply running a command to check the Node version.

Continuous Delivery with Stencil CLI

Let’s review what we’ve learned so far and put the pieces together by setting up a simple continuous delivery pipeline in Bitbucket. The repository will contain a Stencil theme, and we’ll create a pipeline script that uses Stencil CLI to automatically build and push a copy of the theme to a sandbox store when changes are merged to the master branch.

Note: Bitbucket’s free plan allows you to create private repositories and includes 50 minutes of pipeline build time per month.

  1. Create a new repository in Bitbucket and upload your theme files to it. If you’re working with a custom theme, connect your existing local repository to Bitbucket. If you want to start with a new copy of Cornerstone, you can choose the Import option when setting up your repository and clone Cornerstone directly from GitHub:
    https://github.com/bigcommerce/cornerstone
  2. Create a .stencil file in your Bitbucket repo and fill in values for normalStoreUrl, port, clientId, and accessToken:
{
  "normalStoreUrl": "The fully qualified URL for your staging store",
  "port": 3000,
  "clientId": "Your Client ID",
  "accessToken": "Your API token",
  "customLayouts": {
    "brand": {},
    "category": {},
    "page": {},
    "product": {}
  }
}

For details on generating API tokens, see Getting API Credentials. Once you create the .stencil file, it’s important to keep your Bitbucket repo set to private. Otherwise, your API tokens would be exposed.
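
As a hedged alternative (not part of the setup above), you could avoid committing credentials entirely by storing them as secured repository variables in Bitbucket (under Repository settings > Pipelines > Variables) and generating the .stencil file during the build. The variable names below are ones you'd define yourself:

# Hypothetical pipeline script step: write .stencil at build time from
# secured pipeline variables so tokens never live in the repository
echo "{
  \"normalStoreUrl\": \"$STENCIL_STORE_URL\",
  \"port\": 3000,
  \"clientId\": \"$STENCIL_CLIENT_ID\",
  \"accessToken\": \"$STENCIL_TOKEN\"
}" > .stencil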

3. In your repository settings, navigate to the Pipelines tab. Choose JavaScript as your language template. This step selects a default Docker image preloaded with common JavaScript tooling like npm.

4. Configure your pipeline build file in the editor. Try the following to create a simple build file that installs npm dependencies, installs Stencil CLI as a global package, and then executes the stencil push command. The -a flag on stencil push automatically applies the Light variation of the theme and activates it on the storefront without additional command prompts.

# This is a sample build configuration for JavaScript.
# Check our guides at https://confluence.atlassian.com/x/14UWN for more examples.
# Only use spaces to indent your .yml configuration.
# -----
# You can specify a custom docker image from Docker Hub as your build environment.
image: node:6.9.4
pipelines:
  default:
    - step:
        caches:
          - node
        script: # Modify the commands below to build your repository.
          - npm install
          - npm install -g @bigcommerce/stencil-cli
          - stencil push -a Light

5. Commit a change to the master branch to run the pipeline and upload a new theme to your staging store.
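
One detail worth knowing: steps under the default section run for pushes to any branch. If you want deployments to happen only when changes land on master, Bitbucket Pipelines also supports a branches section. Here's a sketch of the same build scoped to master:

image: node:6.9.4
pipelines:
  branches:
    master:
      - step:
          caches:
            - node
          script:
            - npm install
            - npm install -g @bigcommerce/stencil-cli
            - stencil push -a Light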

Using a Custom Docker Image

Our pipeline works! But now, let's see if we can make it more efficient. Every time we run the pipeline, a new container is created. If we define multiple steps in the bitbucket-pipelines.yml file (you can define up to 10), each of those steps creates a new container from the Docker image at runtime. That means every time we run the pipeline, we reinstall Stencil CLI, which slows down the build.

To streamline, we can create our own custom Docker image and install Stencil CLI at the image level. That way, every container generated by the image will have Stencil CLI globally installed already.

An image is defined by a Dockerfile. To create a custom image, we’ll create a Dockerfile that uses one of the official Node Docker images as a parent and additionally installs the Stencil CLI npm package. Then, we’ll push the image to Docker Hub as a public repository and require it in the Bitbucket build file. Images used in Bitbucket pipelines can be hosted in public or private registries, on Docker Hub or on another container registry. To review all of the options for using custom images as build environments, see Atlassian’s documentation.

  1. Install Docker and create a Docker ID and password. Start the Docker desktop application and sign in with your ID and password.
  2. Create a new folder on your local machine and navigate into the folder from your terminal.
  3. Create a new file in your text editor and save it as Dockerfile (no extension). Paste in the following contents:
# Use an official Node runtime as a parent image
FROM node:8-jessie
# Install additional package (Stencil CLI)
RUN npm -g config set user root
RUN npm install -g @bigcommerce/stencil-cli

We add RUN npm -g config set user root to avoid permissions issues when installing a global package and allow npm to install binaries owned by the root user. (Shoutout to Aleksandr Guidrevitch for his helpful troubleshooting article!)

4. Build the Docker image by running this command from the directory that contains your Dockerfile:
docker build -t yourdockerusername/imagename .

The image name can be anything you'd like.
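
Before Bitbucket can pull your custom image, publish it to Docker Hub. The tag must match the one you used in the build command above:

# Authenticate with your Docker ID, then push the tagged image to Docker Hub
docker login
docker push yourdockerusername/imagename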

5. In your Bitbucket repository, update bitbucket-pipelines.yml to require the custom Docker image instead of Bitbucket’s default, and remove the Stencil CLI installation step from the build script:

# This is a sample build configuration for JavaScript.
# Check our guides at https://confluence.atlassian.com/x/14UWN for more examples.
# Only use spaces to indent your .yml configuration.
# -----
# You can specify a custom docker image from Docker Hub as your build environment.
image: yourdockerusername/imagename
pipelines:
  default:
    - step:
        caches:
          - node
        script: # Modify the commands below to build your repository.
          - npm install
          - stencil push -a Light

Managing Dependencies in Dockerfiles

Whether to move a dependency out of the build script and into the image depends on a few factors:

  • Whether the dependency is needed by other processes running outside of the container.
  • Whether the dependency updates frequently.
  • Whether the dependency is public or private. (You wouldn't want to bake a private dependency or tool into a publicly hosted Docker image.)

Another consideration is image size. For instance, the npm install command pulls in a lot of libraries and can inflate the image. To really keep the Docker image slim and optimize the build, we could move the Node modules into their own image, which can be imported into the main Docker image. This is one strategy for managing npm packages; another is to use volumes. Volumes are a way to store files so that they persist across containers. Instead of creating the files needed by a container at runtime, you can store the data outside the writable container layer, and that data is available even as new containers are created and stopped. For more recommendations on writing efficient Dockerfiles, see Best practices.
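
The idea of moving Node modules into their own image maps onto Docker's multi-stage builds (available in Docker 17.05 and later). Here's a rough, generic sketch; the stage names and paths are illustrative:

# Stage 1: install dependencies in a full-size Node image
FROM node:8-jessie AS build
WORKDIR /app
COPY package.json .
RUN npm install

# Stage 2: start from a slimmer base and copy in only the installed
# modules, leaving the npm cache and build tooling behind
FROM node:8-slim
WORKDIR /app
COPY --from=build /app/node_modules ./node_modules
COPY . .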

The Zaneray Group: Continuous Delivery at Scale

So far, we’ve built a simple continuous delivery pipeline to manage code changes for a single repository and a single staging environment, but putting continuous delivery into practice for Enterprise businesses introduces additional complexity. To find out how The Zaneray Group manages a pipeline workflow at scale, I sat down with Dean Hamilton, Lead Software Engineer, to discuss how his agency executes continuous delivery for Skullcandy.

For Zaneray, the main advantages of continuous delivery were staging environments that were always up to date with the latest features and the ability to automate deployment, leaving team members free to focus on more important tasks.

“Our guys could be focused on writing code and testing their latest changes, versus worrying about whether it got deployed to the sandbox so the customer can look at it. It just was there.”

Zaneray uses Bitbucket to manage the continuous delivery pipeline for Skullcandy, a brand with multiple regional storefronts catering to markets across North America and Europe. Each regional production storefront is twinned to a staging storefront, and Dean spoke to how he’s addressed the challenge of keeping the data between the two environments in sync.

“One of the advantages we had with this particular project with Skullcandy is that we were working with the Jasper PIM, which took care of some of the data concerns. The latest and greatest product and content-related data changes were flowing out to the sandbox instances automatically. We had a single source for data, and it also gave us more control coordinating the release of data from the PIM to production at the same time as the code releases that supported it.”

Dean describes the overall architecture of Zaneray’s pipeline setup:

“We have a master branch that we use for deployments to production, and we have a main development branch, that we call staging. Anytime anything is checked into that development branch, that’s where continuous integration happens, where we deploy automatically to all our sandbox instances.”

Production deployments, on the other hand, are handled manually and deliberately. “Anytime we do a production deployment, we merge staging in with the master. And then we can go into Bitbucket and run the pipeline. There are various pipelines: you can choose to deploy to all the BigCommerce instances, or an operational subset, like the European stores.”

When the pipeline runs, it automates cleanup and tagging as part of the process. “It tags the branch with a timestamp so that we know exactly when this was associated with a production deployment, and the deployment actually names the theme. So when you go into the theme management area of the BigCommerce admin, the active theme would be named accordingly.”

Before pushing a new theme to the store, the pipeline also runs a script that checks the total number of themes currently uploaded to the store and, if the total is at the max of 20 themes, deletes the oldest. “That was actually really easy to do, since there is an npm package out there we can just install, and then I wrote a script; it was only a couple hours to get that to happen,” Dean says.
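
As a rough sketch of that cleanup idea (this is not Zaneray's actual script; check the current BigCommerce Themes API reference for exact endpoints and response fields, and note that STORE_HASH, ACCESS_TOKEN, and THEME_UUID are placeholders):

# List the themes currently uploaded to the store
curl -s "https://api.bigcommerce.com/stores/$STORE_HASH/v3/themes" \
  -H "X-Auth-Token: $ACCESS_TOKEN" \
  -H "Accept: application/json"

# If the store is at its 20-theme limit, delete an old, inactive theme by uuid
curl -X DELETE "https://api.bigcommerce.com/stores/$STORE_HASH/v3/themes/$THEME_UUID" \
  -H "X-Auth-Token: $ACCESS_TOKEN"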

One of the challenges Dean faced when configuring the pipeline was that at the time, there was no -a flag on the stencil push command to automatically activate a theme version. Running stencil push would always present the user with a prompt for input before activating the theme. Dean recounts, “We came back and asked if there was a chance that we could have some kind of command line switch for that, and quite honestly, based on experience with other vendors, we were not very hopeful that anything would happen. But then Nikita Puzanenko, who’s a Senior Product Support Engineer at BigCommerce, banged that out in an afternoon and we had a zip file for a special Stencil CLI version — that kind of made all the difference in the world.” Now, Nikita’s updates have been merged into the core version of Stencil CLI, making it possible to run the stencil push command as part of an automated pipeline.

Any last tips for other developers setting up their own pipelines?

“First, I think you need to figure out your branching strategy. And then, you need to decide what’s going to cause a deployment to your sandbox instances and make sure that that’s going to be relatively stable. You don’t want to have a situation where you’re deploying code overzealously and introducing bugs to your staging environment. Once you have that, you can start thinking about fallback strategy with tagging of release deployments. You need to have a strategy to manage your API keys so that you can do your deployments for each instance. That needs to be a part of your repository so you can use the appropriate keys and make that part of your deployment. After that, you need to make sure you have room to do your deployment. That’s where the script to remove the oldest theme comes in, and then really, it’s just a simple push at that point.”

Summary

Automating your development workflow with a continuous delivery pipeline allows you to accelerate software delivery cycles and manage code updates efficiently. In this post, we’ve seen how to set up a simple Bitbucket pipeline to automatically push theme updates to a staging environment using Stencil CLI and heard from the Zaneray Group, who described how their team uses continuous delivery in their development workflow.

Huge thanks to Dean Hamilton at The Zaneray Group for sharing his experience and expertise! Be sure to check out Dean’s work at www.zaneray.com.

Let us know what you think about this post! Comment below, or send us your questions by tweeting @BigCommerceDevs.
