Shipments at the dock

How High Seas has leveraged Docker to simplify deployments

Brandon Abbott
Hello High Seas

--

We are loosely tied to this new deployment pipeline — it will surely continue to evolve.

An emerging process

Over the course of the past year here at High Seas, we have built a lot of software. We have developed, maintained, and tested a multitude of websites and web applications — and in some cases, we have breathed new life into software built with old technology. Through all of these construction efforts, new processes have emerged, making the team more efficient than before. As part of this growth, we have now started to evaluate our DevOps pipeline — that is, we seek to answer the following question:

“How can we efficiently ship product increments so that they are quickly available to our clients and quality assurance team?”

In this article, we will walk through how to use Node.js, Docker, BitBucket Pipelines, and Azure to set up an efficient deployment pipeline. As a disclaimer, many of these code samples should be considered a work in progress and should not necessarily be used in production. This article assumes that you have at least some experience working with Node, Express, and Azure and that you’re familiar with continuous integration concepts.

1 — Node.js Express server

This Node server is about as straightforward as they come. In this case, we’re starting it on port 3000, or whichever port has been configured through environment variables. For the moment, all requests just serve an index.html file.

As an aside, this server can be started from our package.json file with a simple yarn start.
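The relevant part of package.json might look like the following (a sketch — the dependency version is an assumption):

```json
{
  "scripts": {
    "start": "node server.js"
  },
  "dependencies": {
    "express": "^4.16.0"
  }
}
```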

2 — Node + Azure without Docker

Before we get into Docker, I want to take a moment to consider how one might publish this Node server to Azure without Docker. Getting this code shipped might look something like this:

  • A developer pushes code up to a repository such as BitBucket or GitHub
  • A continuous integration server is automatically triggered to run a build — bundling the code and running tests
  • If everything checks out, an artifact is assembled as a .zip — since we’re working with Azure and a Node server (i.e. not a static single-page app), this file is quite large, as it must contain the node_modules directory
  • Depending on which branch (or tag) you’re working with, the zipped artifact is then published to Azure

The final step of this process comes with some fairly serious pros and cons.

A good reason to do zip publishing is that this process evaluates all of the files on the remote server against the files contained within the .zip and then synchronizes only what is necessary — extraneous files are deleted, new and updated files are created, and existing files are ignored. Moreover, the size of a .zip file is smaller than the original set of files, so this process is at least better than trying to use FTP! </shivers>

However, there are a few reasons why Zip Deployment with Azure (at least with Node.js) is a bad idea.

The first reason is that with Azure, the Express server is wrapped by IIS. With this sort of configuration you must also publish a web.config file (like this one) to have Express do the routing rather than Azure. In rare edge cases we saw issues and errors pop up with redirects and routing. That is, all of our routing and redirects worked fine locally; however, once wrapped by IIS and that pesky web.config file, we started to see 500 errors. Debugging this configuration proved to be impossible, as we found the web.config used for Azure and Node was not compatible with a locally running instance of IIS.
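For reference, a typical iisnode web.config for this kind of setup looks roughly like the following. This is a sketch, not the exact file we used, and the server.js entry point is an assumption:

```xml
<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <system.webServer>
    <!-- Hand all traffic for server.js to the iisnode module -->
    <handlers>
      <add name="iisnode" path="server.js" verb="*" modules="iisnode" />
    </handlers>
    <!-- Rewrite every incoming URL to server.js so Express does the routing -->
    <rewrite>
      <rules>
        <rule name="NodeApp">
          <match url="/*" />
          <action type="Rewrite" url="server.js" />
        </rule>
      </rules>
    </rewrite>
  </system.webServer>
</configuration>
```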

The other reasons not to do zip deployment with Azure and a Node/Express site lie in the compilation of node_modules. That is, once you do an npm install (we prefer yarn here at High Seas), the generated node_modules folder is quite large. In fact, we found it became so large that an initial publish had to be done manually (via FTP), and occasionally publishes during regular operations would simply fail. To make matters worse, you must also consider the machine on which you’re assembling node_modules, and ensure that the version of Node you’re using lines up with the Azure App Service on which the artifact will run.

An alternative solution we considered was Azure’s Continuous Deployment, which uses a kudu.cmd file to build node_modules remotely on the App Service as part of application start-up. This also proved to be problematic, as the .cmd file was brittle and hard to work with. Why bother trying to become an expert at ugly-looking PowerShell commands when there are so many GUI CI/CD services, each with a suite of wonderful integrations?

3 — Building and managing Docker images

What’s the easy way to deploy a Node.js Express server to Azure?

Let’s dive right in to Docker.

The first thing you’ll want to realize about Docker is that it is, first and foremost, a tool for creating artifacts.
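The Dockerfile in question was originally embedded here; a sketch of it, reconstructed from the commands this section mentions, might look like this:

```dockerfile
# Line 1: build on top of an image that already has Node 9 installed
FROM node:9

# Copy the application source into the image
WORKDIR /app
COPY . /app

# Install dependencies, just as we would locally
RUN yarn install

# Document the port the Express server listens on, and how the app starts
EXPOSE 3000
CMD ["npm", "start"]
```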

The script specified in this step is called a Dockerfile and is used when we want to take a snapshot of the app. These snapshots in Docker are called images. I like to think of an image as a bundled-up, ready-to-go snapshot of the application that contains everything you need to get things running. Looking at the Dockerfile, you can see it runs the same commands we might have run to start the Node server ourselves, such as yarn install and npm start.

To offer a concrete example, cd into a directory with a Dockerfile and run the following command to build an image (note that Docker requires image names to be lowercase):

docker build -t my-cool-node-app .

This will create an image called my-cool-node-app. To see a list of all of the images you’ve created, simply type docker images. When working with images, you’ll want to pay attention to the image’s ID, which will be a string of numbers and letters such as 55137b4b017d. To remove an image you created, you can do so easily with docker rmi 551 — you don’t have to type the whole image ID out in full, just enough to be unique among other image IDs. The rmi stands for remove image.

An important thing to note here is that images can be created from (or on top of) other images. Take a look at line 1 of the Dockerfile: this application is built using an image that comes with node:9 already installed.

4 — Running and managing Docker images

Once I started to run docker images, I felt like the world was my oyster!

Alright, so now we have a snapshot, or rather an image, of our Node/Express application.

Certainly we could get this application running with a simple npm start; however, it is good practice to be able to run software locally in the same way that it will be run on remote environments.

Instead, let’s go with…

docker run -p 3000:3000 my-cool-node-app

This will run the image. Feel free to navigate to localhost:3000 to see your application! Before I explain the 3000:3000 part, let’s run another few commands in another window.

docker ps

This will show us a list of everything that’s currently running — it’s very much like the Unix command ps.

Unfortunately, Docker doesn’t simply just run the image you specify. It puts it into what’s known as a container. To see a list of all containers (including stopped ones), try the following command:

docker container ls --all

Finally, to stop the running container you can just ctrl+c it, or, to do it cleanly, perform a docker stop 41f (where 41f is the first few characters of the container ID).

As promised, the 3000:3000 part has to do with ports both inside and outside of the container. Since images run inside a container, the container is able to map ports exposed by the image to ports exposed outside of the container: the left side of -p is the host port and the right side is the container port. Here’s a docker cheat sheet with a bunch of commands that explains the ports a bit better.
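For example, to expose the app on a different host port (the image name is the example from earlier):

```shell
# -p <host-port>:<container-port>
# Requests to localhost:8080 on the host are forwarded to port 3000
# inside the container, where the Express server is listening.
docker run -p 8080:3000 my-cool-node-app
```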

5 — Publishing Docker images

Now that we can build and run Docker images, let’s talk about publishing them. The first step is to get yourself an account on hub.docker.com. Next, back in the terminal, authenticate yourself with docker login. You can pull images to your computer with docker pull [owner-name]/[image-name] and you can push with docker push [owner-name]/[image-name].
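Note that before pushing, a locally built image needs to be tagged with your Docker Hub namespace. A typical sequence might look like this (the hellohighseas namespace is our account; the image name is the example from earlier):

```shell
# Authenticate against Docker Hub
docker login

# Tag the local image with the owner's namespace, then push it
docker tag my-cool-node-app hellohighseas/my-cool-node-app
docker push hellohighseas/my-cool-node-app
```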

It really is just that simple!

6 — Getting BitBucket Pipelines to do it for you

At this point, you should feel comfortable building, running, and publishing images to either your own account or to your organization’s Docker Hub account — ours is hellohighseas. The next step in this process is to get BitBucket Pipelines to do the publishing automatically.
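The pipeline configuration was originally embedded here. It lives in a bitbucket-pipelines.yml file at the root of the repository; a sketch of what ours looks like follows (the $DOCKER_HUB_* variables are assumptions — the real credentials would be stored as secured repository variables in BitBucket):

```yaml
# bitbucket-pipelines.yml (sketch)
pipelines:
  branches:
    qa:
      - step:
          services:
            - docker
          script:
            # Build the image, tagging it for the qa environment
            - docker build -t hellohighseas/my-node-app:qa .
            # Authenticate and publish to Docker Hub
            - docker login -u $DOCKER_HUB_USERNAME -p $DOCKER_HUB_PASSWORD
            - docker push hellohighseas/my-node-app:qa
```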

Notice that in this pipeline configuration, we’re doing the exact same steps as earlier! Most notably, we start out with docker build, authenticate with docker login, and then docker push.

In this case, any commit made on the qa branch is set up to go to hellohighseas/my-node-app:qa. The :qa part here is just an image tag to help disambiguate various builds of the same image. We could also set up a :pre-prod image tag for git tags, but I have omitted that part here for brevity.

7 — Connecting Azure and Docker

  • Start to create an Azure App Service
  • When you’re creating it, you’ll need to create an App Service Plan that is running Linux
  • Specify that you want the App Service Plan to run a Docker container

The instance suffixed by QA gets mapped to the Docker image tagged qa.

Next up, navigate to your new Azure instance and go to Container Settings. Reveal the webhook URL and copy it to your clipboard.

With the webhook URL at the ready, head back over to hub.docker.com and add a new webhook.

Conclusion

There is still quite a lot more to learn about Docker, such as how to properly make use of docker-compose, how to run tests, or how to look at the logs. It is my hope that this article helps someone get their feet wet with Docker. I also hope it finds anyone out there who ran into the same problems we did with Node + Azure. As High Seas sails into new waters, we hope to offer more insight to the community at large. If you have any questions, suggestions, or feedback, please do leave them in the comments. This process is still evolving, and we are a group of people who embrace change!

If you are in need of development, design, branding, marketing, strategy, or just want to chat, please reach out via our website: High Seas Consulting.
