SAP on Google Cloud: HANA HDI containers and CI/CD pipelines, pt. 2

Lucía Subatin
Google Cloud - Community
6 min read · Jul 13, 2020

We built an application with managed containers using Node.js and Golang for the frontend and backend modules and HDI containers on SAP HANA. We defined a CI/CD pipeline to keep them consistent.

Front end, backend and HANA Access Layer micro-services connecting to SAP HANA and calling the Translate API

In this blog post, we’ll explain what the CI/CD pipeline looks like and how it incorporated the concept of HDI containers. This follows part 1.

The application

As you see in the boxes inside of boxes above, our application has three micro-services:

  • A frontend, a web application written in Node.js
  • A backend application, written in Golang, that also talks to the translate API
  • A database access layer responsible for talking to SAP HANA using the Node.js client for SAP HANA
Google Cloud shell explorer with the code for the micro-services

You can create this app yourself, hopefully for free and without swiping a credit card. We published this Qwiklab that will spin up a HANA Express machine on Google Cloud for you while you complete the lab. The initial free Qwiklab credits should be enough to run this lab. I’d recommend you clone the app into your own GitHub so you can use it afterwards.

The pipeline

CI/CD pipeline with Git, container registry, cloud build and Google Cloud Run

I think the pipeline is easier to understand in action. So let’s say we are a group of developers working on the different microservices that make up our application.

Here’s how the tooling overlords would govern our day (or how we govern them, we’ll see…)

Shared Git Repository

We chose Google Cloud Source as a private git repository. This repo and its master branch were “born” with the application.

Google Cloud Source repositories showing git branches and a history that shows everyone struggles with yaml

You know what else was born with the first deployment? An HDI container!

Creating the HDI container

Just like we have a branch that acts as the main branch, we’ll have an HDI container that acts as the reference for everyone and is loaded with test data.

Here’s an example of how this first container was created using the hana-cli.

Here’s the asciinema if you want to follow along or copy and paste.
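For reference, a sketch of those commands, mirroring the flags used later for the per-developer container (the name RUN for the main container is the one referenced in the pipeline section below):

# connect to the SAP HANA instance
hana-cli connect -s
# create the reference HDI container, here named RUN
hana-cli createContainer -c RUN -e -s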

The default-env.json file now contains credentials to this container. The Node.js library hdi-deploy will look for a file with this name or the environment variable VCAP_SERVICES to connect to SAP HANA and deploy our new tables and other artifacts into the HDI container.

We will use the contents in this file to create an environment variable called VCAP_SERVICES later.

default-env.json file with connection details for the HDI container (user, password, host, port)
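Trimmed down, the file looks something like this (the values are placeholders; the real file carries more fields, such as certificates and the HDI users):

{
  "VCAP_SERVICES": {
    "hana": [
      {
        "name": "RUN",
        "credentials": {
          "host": "hxehost",
          "port": "39015",
          "user": "RUN_1_RT",
          "password": "********",
          "schema": "RUN"
        }
      }
    ]
  }
}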

Starting a change

For the sake of simplicity, let’s imagine we need to add a column to the existing table in the HDI container.

I’ll make sure I pull all the changes from the main branch. This will bring the latest artifacts that have been deployed into the main HDI container too.

git fetch and git pull commands
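In shell terms, something like this (remote and branch names assumed):

git checkout master
git fetch origin
git pull origin master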

Finally, I’ll create my own branch:

git checkout -b unicorn_branch

Just like I have created my own branch, I will create my own version of an HDI container using the deployment files that I just pulled from the main branch.

This time, I will append a marker (_LS in the example below) to the name of my container, so that it does not conflict with the master one (Web IDE does this automagically).

hana-cli connect -s
hana-cli createContainer -c RUN_LS -e -s

Now I have the new credentials for this container in a new .json file. I will go ahead and make a change to the table I pulled:

Using the new credentials in the default-env.json file, the ones for my RUN_LS container, I will deploy this change. The default-env.json file should be in the /db folder, from where we run these commands:

npm install
npm start

This calls the hdi-deploy module so it can connect to the HDI container and create a schema with my tables.

npm start and npm install
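For context, the start script of an HDI db module typically points at the deployer in its package.json, roughly like this (a sketch, not the exact file from the repository; the version is illustrative):

{
  "name": "deploy",
  "dependencies": {
    "@sap/hdi-deploy": "^3.11.0"
  },
  "scripts": {
    "start": "node node_modules/@sap/hdi-deploy/deploy.js"
  }
}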

Since our credentials are not meant to leave our local environment, let’s make sure the .gitignore file includes these default-env*.json files.
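For example:

# keep local HDI credentials out of the repository
default-env*.json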

And now let’s commit+push before each coffee break:
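Something along these lines (the commit message and paths are illustrative; the branch is the one created earlier):

git add db/
git commit -m "Add column to table"
git push origin unicorn_branch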

We should now see the change in my Git branch in Google Cloud Source:

Merging a change

We have finished making some changes and doing some quick tests on our own container. We have committed those changes (multiple times) into our own branch. Now it’s time to submit them into the main branch.

Just in case the other developers working on this are also making changes, I’m pulling the main branch again before merging.

git checkout master, git pull, git merge unicorn_branch, git push

Asciinema here.
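Spelled out, the sequence from the caption above is:

git checkout master
git pull
git merge unicorn_branch
git push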

And after we have merged the changes, the main branch is now showing them… as expected:

HANA table in main / master git branch with the change merged from local HDI container

Automated deployment and testing

One of the key ingredients of our “deploy often, fail fast” recipe is automation. When do we want it? As soon as we push into the main branch. Yes, like we just did.

We want to make sure everything that used to work still works, but we need the changes deployed into the main container first.

We also want to make sure that our new tables and artifacts are tested when someone else deploys. This means that our new tables also need to be incorporated into the automated tests, so we’ll build those into the pipeline too.

We’ll need some SAP HANA specific tools: hana-cli, hdbsql and the HANA client for Node.js if we want to do some tests like the ones documented here.

We can pre-install all of these in a Docker container, and have that container do the deployment and the testing. We’ll cover this in a future blog post, but here is a sneak peek of what this looks like:
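As a rough sketch, and assuming a Node.js base image plus the SAP HANA client downloaded separately into the build context (the base image, package names and paths are assumptions, not the project’s actual Dockerfile):

# Build/test image with the SAP HANA tooling pre-installed
FROM node:12

# Node.js-based tools from npm
RUN npm install -g hana-cli @sap/hana-client

# hdbsql ships with the SAP HANA client package, obtained separately from SAP
COPY hdbclient/ /opt/hdbclient/
ENV PATH="/opt/hdbclient:${PATH}"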

The tool we are using to coordinate this is Cloud Build. We are using a trigger to start the build and test process every time there is a push into the main branch:
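The trigger can be created in the console or with gcloud, roughly like this (the repository name and build config path are assumptions):

gcloud beta builds triggers create cloud-source-repositories \
  --repo=translator-app \
  --branch-pattern="^master$" \
  --build-config=db/cloudbuild.yaml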

We use the Docker container and inject VCAP_SERVICES as an environment variable, so that the Node.js deployer can do its magic with the main HDI container, RUN:
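In cloudbuild.yaml terms, the deployment step looks roughly like this (the builder image name and the substitution used to inject the credentials are assumptions; the real file also contains the test steps):

steps:
  # Deploy the db module into the main HDI container (RUN)
  - name: 'gcr.io/$PROJECT_ID/hana-tools'
    dir: 'db'
    entrypoint: 'bash'
    args: ['-c', 'npm install && npm start']
    env:
      - 'VCAP_SERVICES=${_VCAP_SERVICES}'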

There is a better way to pass these credentials, using secrets, but we’ll keep that one for an upcoming blog post.

For details on how to build the pipeline for the rest of the micro-services and some tweaks towards using this in a productive environment, here is part 3.

Lucia Subatin and Fatima Silveira.
