Google Cloud Functions, Serverless framework and environment variables.

Yet another article about Serverless

Yes, I know it looks quite trendy these days to talk about Serverless architecture and the framework of the same name. Instead of spending time explaining what they are, I invite you to read the following articles describing serverless architecture (https://martinfowler.com/articles/serverless.html) and the Serverless framework (https://serverless.com/framework).

Our use case

Our company works with the Google Cloud platform. We are an international business built around a bunch of branded websites. Each brand operates in one or more countries.

All brands use the same core source code but with different features. All of these features can be activated/deactivated whenever we want. Some features are for all websites. Others are more specific to one website.

Technically, it means that you are rebuilding all your brands as soon as you bring a new feature into your core bundle. Even if your code is composed of multiple apps, such as what Frint (a framework that we use and maintain at Travix) proposes, whenever you merge something to your master branch you have to be sure that your core bundle has all the necessary features available at any time after deployment.

Each build runs in a Docker image in Google Cloud Container Engine.

If you draw what I have just explained, it looks more or less like this:

The overview

Now if you think about how this is maintained in terms of CI/CD, it gets quite tedious and expensive to perform multiple builds throughout the day. In the traditional approach, the applications are the ancestors of everything else: when one of the applications is built, it triggers the build of the core website, and once the core website is built, all the related builds for the sites are triggered.

Another issue you can have with such an approach is the speed of your websites. If you make everything available at any time, it means you are loading lots of unused scripts. And that’s bad.

So how do you build the right website when required, without having to rebuild the whole thing?

An alternative approach

As the title suggests, we will explore how we could solve this problem using Google Cloud Functions. And because we want to be the best hipsters ever, let’s also bring in the Serverless framework for managing our functions, together with Google Cloud Pubsub.

Google Cloud Functions is Google Cloud’s answer to the Serverless architecture. The Serverless framework was briefly introduced at the beginning of the article. In our case, we will see a basic usage of the framework. However, the framework provides a complete set of commands such as rollback, delete and others which can be quite useful when you integrate a new service into your infrastructure.

Pubsub stands for Publish/Subscribe, a messaging pattern. All the messages are published to “topics”, which in turn distribute them to the “subscribers”. Google Cloud Pubsub is one solution you can use for triggering your functions. In our case it is well suited because we need to be able to queue the requests, and we cannot always trigger the API directly (there may be ongoing builds).

The Pubsub API lets you decide how long to wait for a message to be acknowledged before it is re-sent. This is called the acknowledgment deadline (ackDeadline). To manage your queue while a build is ongoing, you can return an error so that the message is redelivered later. Here we will use the default value, 80 seconds.

The aim is to be able to build ‘site A’ when ‘application A’ is built instead of rebuilding everything because only ‘site A’ uses ‘application A’ at this time.

We need to be able to keep track of the build status of ‘application A’. The application is built in a CI, so when it’s complete, it triggers the ‘core website’ build.

Instead of triggering the ‘core website’ directly, we send a Pubsub message if the build was successful.

The code blocks below assume you’re using a *nix OS and have Node.js 6.11+ with NPM available in your path.

We will also assume that your CI has an API available for triggering builds and this API manages the authentication using HTTP basic authentication.

First step, install the “gcloud” binary. Here is a guide on how to do it: https://cloud.google.com/sdk/downloads

Create a service account from the Google Cloud console. It must have access to the following resources: Google Cloud Functions, Google Cloud Deployment Manager, Google Cloud Storage, Stackdriver Logging and Google Cloud KMS. Here is also a guide for setting up the credentials: https://serverless.com/framework/docs/providers/google/guide/credentials

Now let’s create our directory project and run the following commands in a terminal:

$> mkdir build-trigger
$> cd build-trigger
$> npm init
$> npm install --save serverless
$> $(npm bin)/serverless create --template google-nodejs --path success-handler

Here we installed the Serverless NPM module in our project “build-trigger”, then created the “success-handler” function using the template for Google Cloud Functions.

Let’s move into the directory and install everything:

$> cd success-handler
$> npm install
$> npm install --save gcloud-kms-helper

Here we installed the dependencies of the function, as well as gcloud-kms-helper, a small NPM package I released for encrypting/decrypting secrets using Google Cloud KMS. Go to the Google Cloud Console, select “IAM & Admin > Encryption keys”, create a new keyring, then create a new key in this keyring.

Now go into your favourite IDE and open the “success-handler” directory.

Edit the “serverless.yaml” as follows:
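A minimal sketch of what it can look like, assuming the serverless-google-cloudfunctions plugin; the topic name “success-topic”, the credentials path and the env.js variable names are mine, so adapt them to your project:

```yaml
# serverless.yml
service: success-handler

provider:
  name: google
  runtime: nodejs
  # resolved from env.js through the Serverless variables system
  project: ${file(./env.js):gcloudProject}
  # path to the JSON key of the service account created earlier
  credentials: ~/.gcloud/keyfile.json

plugins:
  - serverless-google-cloudfunctions

package:
  include:
    - index.js
    - env.json
    - package.json
  exclude:
    - node_modules/**
    - .git/**

functions:
  successHandler:
    handler: successHandler
    events:
      - event:
          eventType: providers/cloud.pubsub/eventTypes/topic.publish
          resource: projects/${file(./env.js):gcloudProject}/topics/success-topic
```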

Here we are using the variables capability of the Serverless framework. Awesome!

Your “env.js” file should look like this:

In this file, we are defining a function which returns an object. The returned object has all the environment variables you use in both your script and your configuration file.

When Serverless deploys your functions it copies the files specified in the “package” property into a zip file which is uploaded in the Google Cloud Storage Bucket. If the “include” property is not defined then everything not defined in “exclude” is uploaded.

Having an environment management solution also allows you to deploy your functions to other clusters, following your workflow.

Let’s now create a small script named “env-generator” in “success-handler/bin” which will generate the environment for us before the upload:

Now open your “package.json” file and add the following script:

{
  // your package.json
  "scripts": {
    "postinstall": "./bin/env-generator"
  }
}

Each time you run the command “npm install”, it will automatically generate the “env.json” file for you.

Let’s implement our function. Open the “index.js” file and perform the following modifications:

Our function expects a message with the following format:

{
  "targets": [
    "site-a"
  ]
}

If the message has the correct format then we decrypt our key and trigger our build using the CI API.

Now let’s deploy our function:

$> rm -rf node_modules && npm i
$> $(npm bin)/serverless deploy

First we reinstall the NPM modules to make sure that we have a clean state and that our environment config and the secrets for communicating with the CI API are properly generated.

The framework takes care of creating and uploading the zip file to the Google Cloud servers. It will also create the Pubsub topic if required.

The next step is to synchronise the system managing your website settings with your Pubsub topic. This way your build reacts both to builds of your applications and to any modification of your settings.

Let’s assume that you are building your website using a bash script. Let’s create a file named “pubsub.sh”:
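A minimal sketch, assuming the gcloud SDK is installed and authenticated; the topic name “success-topic” is an assumption and must match the topic your function subscribes to, and the GCLOUD_BIN override only exists so the function can be dry-run:

```shell
#!/usr/bin/env bash
# pubsub.sh
# Helper sourced by the main build script.

TOPIC_NAME="${TOPIC_NAME:-success-topic}"

# Publish a JSON payload to the topic, e.g.:
#   publish_event "{ \"targets\": [ \"site-a\" ] }"
publish_event() {
  local payload="$1"
  "${GCLOUD_BIN:-gcloud}" pubsub topics publish "${TOPIC_NAME}" --message "${payload}"
}
```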

Include this bash script in your main build script:

. pubsub.sh

When your build is done and successful, just call the function “publish_event”:

publish_event "{ \"targets\": [ \"site-a\" ] }"

It will publish a message to your topic which in turn will trigger your function.

I hope this article will help you build an even more exciting infrastructure using the Serverless framework. I plan to update the module gcloud-kms-helper so that it can also handle the keyring and key creation automatically for you.

The entire template you can use to get started can be found here: https://github.com/jackTheRipper/gcf-template-env

Enjoy!