Deploying Node.js with Spinnaker

Dennis Mårtensson
Greta.io

February 12, 2016

Despite being a small team here at Greta, we want and need to deliver a globally distributed system with very high performance. In this post we thought we’d share how we do this, and still spend the majority of our time talking to developers about their ideas of how we could make Greta the best distribution tool they could have wished for.

I’ll start with an overview of the systems we have deployed in order to give our users the fastest data distribution network possible.

There are two user facing systems that are interesting to look at.

  1. A normal REST API that is used to collect data and return metadata about users, so that we can connect them to the best peers for fast delivery.
  2. Our own signaling system that is built to set up WebRTC connections as fast as possible, basically sending offers and answers between peers. Before we decided to build our own signaling solution we tried services like PubNub and Pusher, but realised that they added too much latency for our particular use case. If you feel like trying our signaling API, have a look [here](https://greta.io/documentation/signaling).

Both the REST API and the signaling system need to be globally distributed and highly available, and we also need to be very confident in their performance. For the same reasons we are also planning to use multiple cloud providers in the future.

Right now we’re running on GCE in three different regions: US-CENTRAL1, ASIA-EAST1 and EUROPE-WEST1.

We’ve recently started using Spinnaker to make sure that we have consistent deployments in all the regions. By using Spinnaker we can also be sure that we’re able to deploy to multiple clouds with minimum effort.

To get started, follow the Spinnaker Hello Deployment guide. We made some changes to fit our needs, described below.

As our API server is built in Node.js we needed to find a good way to build a deb package from the Node.js source. We did some research and decided on using pkgr.

To set this up we added three config files to our Node.js project: .pkgr.yml, postinstall.sh and Procfile.
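As a quick sketch, scaffolding these files in an existing project could look like the following (the my-api directory name is just a placeholder):

```shell
# Create placeholder versions of the three files pkgr expects.
# "my-api" stands in for your actual project directory.
mkdir -p /tmp/my-api && cd /tmp/my-api
touch .pkgr.yml Procfile postinstall.sh
chmod +x postinstall.sh   # the hook script must be executable
ls -A
```

The contents of each file are covered in the following sections.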

Let’s start by adding a .pkgr.yml file that will be used by pkgr when it builds the deb package in the build stage in the Spinnaker pipeline. Here is the minimum file we use:

```
targets:
  ubuntu-14.04:
    runner: upstart-1.5
    dependencies:
      - nodejs
      - npm
    default_dependencies: false
    after_install: "postinstall.sh"
```

Since we decided to use Spinnaker’s trusty (Ubuntu 14.04) base image, we will tell pkgr to target ubuntu-14.04.

As we are using Node.js we will also tell pkgr that we have dependencies on nodejs and npm.

pkgr has a set of default_dependencies that we don’t need, so we’ll set them to false.

We also want to define a script to run after the install is done, so we’ll add after_install with the path to the script.

Let’s continue with postinstall.sh. To make the Node.js application run after the install, we will add the following to the postinstall.sh file:

```
sudo PACKAGE_NAME scale web=1
```

When the package is installed, pkgr provides a PACKAGE_NAME executable that lets us run commands against the application.
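Putting it together, the whole postinstall.sh is essentially that one line. Here is a sketch of the full file; the shebang and set -e are additions we would suggest, and PACKAGE_NAME is a placeholder for the name later passed to pkgr:

```shell
#!/bin/sh
# postinstall.sh: runs after the deb is installed (wired up via the
# after_install key in .pkgr.yml). PACKAGE_NAME is a placeholder for
# the name given to `pkgr package --name=...`.
set -e

# Register one `web` process (defined in the Procfile) with the init
# system and start it as a daemon.
sudo PACKAGE_NAME scale web=1
```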

The scale command will register the process with the init system and thereby run it as a daemonized process. However, this only works if we have defined a Procfile with a command to run for web. We’ll do that, but first let’s have a look at Procfiles.

Procfile comes from Heroku and is a mechanism to define what commands your application will run. If you’d like to know more, have a look at the Heroku documentation.

Since this is a Node.js application we have already declared how to start it in package.json, so the Procfile will just contain:

```
web: npm start
```
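For reference, npm start simply runs whatever the scripts.start entry in package.json says. A minimal example might look like this (the my-api name and server.js entry point are assumptions, not necessarily what our project uses):

```json
{
  "name": "my-api",
  "version": "1.0.0",
  "scripts": {
    "start": "node server.js"
  }
}
```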

Those are the changes we’ve done to our Node.js project!

We have put together a GitHub repository that you can use to try this out: https://github.com/gretaio/spinnaker_nodejs_example

Now that we have a Node.js project set up to be built into a deb package, we can move on to setting up Jenkins.

Start by setting up Jenkins and a deb repo by following the guide over at http://spinnaker.io/documentation/hello-spinnaker.html.

When you are done you’ll need to install pkgr on the Jenkins server so it’s available to the build jobs:

```
sudo apt-get update
sudo apt-get install -y build-essential ruby1.9.1-full rubygems1.9.1
sudo gem install pkgr
```

That should be enough in most cases, but if it fails, have a look at the log and install the missing package.

That is all that needs to be changed on the server, but we’ll also have to adjust the Jenkins build job config. Again, follow http://spinnaker.io/documentation/hello-spinnaker.html to set up the job, but pay attention when you get to the Build Job config part: the ./gradlew packDeb command needs to be replaced.

This is where we will use pkgr to create the deb package. Replace ./gradlew packDeb with

```
rm PACKAGE_NAME* || true
sudo pkgr package --name="PACKAGE_NAME" --architecture="amd64" --force-os="ubuntu-14.04" --auto --clean --verbose .
```
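One detail worth noting: rm PACKAGE_NAME* removes any .deb left over from a previous build, and since rm exits non-zero when there is nothing to remove (which would fail the whole Jenkins build step), the || true guard swallows that error. A quick illustration with a path that does not exist:

```shell
# rm fails because nothing matches the glob, but `|| true` forces the
# overall exit status to 0, so the build step keeps going.
rm /tmp/no-such-package_*.deb 2>/dev/null || true
echo "exit status: $?"   # prints: exit status: 0
```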

This should give you a Jenkins build config that uses pkgr, instead of Gradle, to build a deb package of your Node.js application. It tells pkgr to build for the amd64 architecture and forces the build to target ubuntu-14.04 (trusty). You can of course change these to whatever fits your deployment.
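If you want to sanity-check the artifact Jenkins produced, dpkg can inspect it. The filename below is illustrative; pkgr derives the real one from the --name flag and your application’s version:

```shell
# Show the package metadata (name, version, dependencies).
dpkg --info my-api_1.0.0_amd64.deb

# List the files the package would install.
dpkg --contents my-api_1.0.0_amd64.deb
```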

If you then continue to follow http://spinnaker.io/documentation/hello-spinnaker.html you will be deploying your Node.js application using Spinnaker in no time!

We will continue to explore Spinnaker and evolve our system along the way, and we’ll keep sharing what we learn on this blog. If you have any questions or want to discuss, feel free to join our public Slack channel.

Originally published at blog.greta.io on February 12, 2016.

Greta.io is dedicated to helping users increase site performance via an innovative approach to content delivery. We use machine learning to make content routing decisions in the client based on real-time network data. For every piece of content, Greta automatically decides if the content should be delivered over multi-CDN, peer-to-peer, client cache or an overlay network. As a result, customers experience shorter content load times, higher content quality and server offload during peak traffic.
