How To GIF Your Infrastructure Pipeline with Hava for Lifecycle Visibility

David Brenecki
Feb 27 · 8 min read

Hava is an automation tool that gives you visibility into your cloud infrastructure and network topology.

A recurring problem I see in the cloud space is that visibility over infrastructure is mostly ignored.

Let’s say someone came into your organisation, created a few resources in different regions, and didn’t tell anyone about it. How long would it take before you became aware of these resources and cleaned them up?

In my experience, these resources are almost never found and, in most cases, stick around for years unless the bill is large enough to attract attention.

Another problem I have faced lately is a team with rapidly evolving infrastructure. You need diagrams that track the constant changes, because without them the task either gets thrown into the “too hard” basket or becomes a very time-consuming and painful thing to explain later on.

This is what led me to have my team include Hava in their infrastructure pipelines.

I wanted a tool that could generate diagrams on the fly. Surprisingly, it not only did that, but I now had more time during the day with fewer people coming up to me asking questions. My team had more power and visibility over what they were building. If it’s a repeatable task, then let’s automate it!

Since the tool generates the diagrams directly from my cloud account, it was using a reliable single source of truth. Each diagram also had monthly cost estimates for the infrastructure. I began to see the massive value this would add for my team, and it even surfaced a bunch of resources I had created in a different region months ago and had forgotten about.

Before I went any further into reading about the tool’s full set of features, I wanted to use it in my pipeline.

As the saying goes:

I hear and I forget, I see and I remember, I do and I understand

Here is my guide to creating a script to GIF your infrastructure pipeline!

Prerequisites

The rest of this article will show you how to easily integrate the Hava API into your infrastructure as code pipelines.

If you are starting from scratch, here are some helpful resources on getting started.

Hava API Getting Started

The first thing we are going to want to do is to synchronise our AWS Account with our Hava account using this curl request. This is a one time command to initially connect your account. For demo purposes, we have used AWS IAM credentials; however, Hava also supports providing access through a ReadOnly IAM role as a more refined approach.
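The original curl request isn’t reproduced here, so below is a minimal sketch of what it might look like. The endpoint path (`/sources`) and payload field names are assumptions based on Hava’s REST API; check the Hava API docs for the exact shape, and prefer the ReadOnly IAM role option for anything beyond a demo.

```shell
#!/usr/bin/env bash
set -euo pipefail

# Hypothetical base URL and payload -- verify against the Hava API docs.
HAVA_API="${HAVA_API:-https://api.hava.io}"

connect_aws_source() {
  curl -s -X POST "${HAVA_API}/sources" \
    -H "Authorization: Bearer ${HAVA_TOKEN}" \
    -H "Content-Type: application/json" \
    -d '{
          "provider": "aws",
          "type": "AWS::Keys",
          "access_key": "'"${AWS_ACCESS_KEY_ID}"'",
          "secret_key": "'"${AWS_SECRET_ACCESS_KEY}"'"
        }'
}

# One-time command: only call the API when credentials are actually present.
if [[ -n "${HAVA_TOKEN:-}" ]]; then
  connect_aws_source
fi
```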

Now let’s perform a GET request to find our source ID.
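A sketch of that GET request, under the same assumption that connected accounts live under `/sources`:

```shell
#!/usr/bin/env bash
set -euo pipefail

HAVA_API="${HAVA_API:-https://api.hava.io}"

list_sources() {
  # GET all connected sources; each entry carries the source id we need next.
  curl -s -H "Authorization: Bearer ${HAVA_TOKEN}" "${HAVA_API}/sources"
}

if [[ -n "${HAVA_TOKEN:-}" ]]; then
  list_sources
fi
```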

If you have multiple accounts connected, you will need to filter for the account you want to pipeline and select the correct source ID. The JSON should look something like this, with your source ID shown as “id”.
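The response body from the article isn’t preserved, so here is an illustrative shape (field names like `display_name` are assumptions; inspect your own response with `jq .`), along with a jq filter to pick out the right source:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Illustrative sample of the response shape -- your fields may differ.
sample='{
  "sources": [
    { "id": "a1b2c3d4", "display_name": "prod-account", "provider": "aws" },
    { "id": "e5f6g7h8", "display_name": "sandbox",      "provider": "aws" }
  ]
}'

# Select the source id whose display_name matches the account to pipeline.
hava_source_id=$(echo "$sample" \
  | jq -r '.sources[] | select(.display_name == "prod-account") | .id')
echo "$hava_source_id"   # a1b2c3d4
```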

Now that we have the hava_source_id, we should create an S3 bucket to upload our GIFs and artefacts to so that they can be displayed in our Buildkite pipeline. Make sure that both your S3 Bucket and Buildkite agent have the required permissions.
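Creating the bucket is a one-liner with the AWS CLI; the bucket name and region below are placeholders. Remember that the Buildkite agent’s IAM identity also needs `s3:PutObject` on the bucket.

```shell
#!/usr/bin/env bash
set -euo pipefail

# Placeholder name -- S3 bucket names must be globally unique.
BUCKET="${GIF_BUCKET:-my-hava-pipeline-artifacts}"

make_bucket() {
  aws s3 mb "s3://${BUCKET}" --region ap-southeast-2
}

# Only hit AWS when the CLI is configured with credentials.
if [[ -n "${AWS_ACCESS_KEY_ID:-}" ]]; then
  make_bucket
fi
```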

The next step is to create a bash script gifinator.sh, that can be added into our pipeline. This script will download the images from Hava, combine them into a GIF, then archive them to our S3 bucket. The script has been broken up into sections to explain what each part is doing.

To start, we’ll need to set some environment variables in our Buildkite console which can be found under our Pipeline settings.
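In Buildkite these live under Pipeline → Settings → Environment Variables; shown here as plain exports so the script can also run locally. The variable names are this guide’s choices, not anything Hava or Buildkite mandates.

```shell
#!/usr/bin/env bash
set -euo pipefail

# Set these in the Buildkite console rather than committing them to the repo.
export HAVA_TOKEN="***"            # Hava API token (keep secret)
export HAVA_SOURCE_ID="a1b2c3d4"   # from the GET /sources call earlier
export GIF_BUCKET="my-hava-pipeline-artifacts"
```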

Here we label our artefacts and run some simple cleanup steps in case past jobs failed.
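A sketch of that section of gifinator.sh; the working-directory and GIF naming scheme are this guide’s assumptions:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Label the artefacts for this build; fall back to "local" outside Buildkite.
WORKDIR="hava_images"
GIF_NAME="infra-${BUILDKITE_BUILD_NUMBER:-local}.gif"

# Clean up anything a previously failed job may have left behind.
rm -rf "${WORKDIR}"
mkdir -p "${WORKDIR}"
```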

In the next part of our script, we kick off a sync job to import the newly created infrastructure. We have added a sleep loop that polls the status of the Hava import job. Because the import runs asynchronously, the job status endpoint returns a 303 once the import has completed, which signifies that there is another resource to fetch (the images) and that our job is done.
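A sketch of the sync-and-poll step. The `/sync` and `/jobs` endpoint paths and the `job_id` field are assumptions about Hava’s REST API; the 303-polling pattern itself is what the article describes.

```shell
#!/usr/bin/env bash
set -euo pipefail

HAVA_API="${HAVA_API:-https://api.hava.io}"

start_sync() {
  # Kick off an import for our source and print the async job id.
  curl -s -X POST "${HAVA_API}/sources/${HAVA_SOURCE_ID}/sync" \
    -H "Authorization: Bearer ${HAVA_TOKEN}" | jq -r '.job_id'
}

wait_for_job() {
  local job_id="$1" status
  while true; do
    # A 303 See Other means the async job finished and redirects to the result.
    status=$(curl -s -o /dev/null -w '%{http_code}' \
      -H "Authorization: Bearer ${HAVA_TOKEN}" \
      "${HAVA_API}/jobs/${job_id}")
    [[ "$status" == "303" ]] && break
    sleep 10
  done
}
```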

Next, we are extracting the image URLs for all the images generated in our Hava account based on the source_id. The extra curl request is then used to extract the infrastructure type of each resource so we can name the images.
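A sketch of the extraction step. The endpoint and the `environments`/`views` field names are assumptions; dump your own response with `jq .` to find the real paths before wiring this in.

```shell
#!/usr/bin/env bash
set -euo pipefail

HAVA_API="${HAVA_API:-https://api.hava.io}"

get_image_urls() {
  # Emit "url name" pairs: the export URL for each view, plus the
  # infrastructure type we will use to name the downloaded image.
  curl -s -H "Authorization: Bearer ${HAVA_TOKEN}" \
    "${HAVA_API}/sources/${HAVA_SOURCE_ID}/environments" \
    | jq -r '.environments[].views[] | "\(.image_url) \(.name)"'
}
```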

This is the loop that generates all the images from the URLs we gathered earlier; it will then combine all of the images with a 200ms delay between each to create our GIF.
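A sketch of that loop, assuming the "url name" pairs from the previous step arrive on stdin and that ImageMagick is installed on the agent. Note that ImageMagick’s `-delay` is in centiseconds, so 20 gives the 200 ms per frame mentioned above.

```shell
#!/usr/bin/env bash
set -euo pipefail

WORKDIR="${WORKDIR:-hava_images}"
GIF_NAME="${GIF_NAME:-infra.gif}"

download_images() {
  # Reads "url name" pairs on stdin and downloads each diagram as a PNG,
  # numbering files so the GIF frames keep a stable order.
  local i=0
  mkdir -p "${WORKDIR}"
  while read -r url name; do
    curl -s -o "${WORKDIR}/$(printf '%02d' "$i")-${name}.png" "$url"
    i=$((i + 1))
  done
}

make_gif() {
  # -delay 20 == 200 ms between frames; -loop 0 loops forever.
  convert -delay 20 -loop 0 "${WORKDIR}"/*.png "${GIF_NAME}"
}
```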

At this point, we push our GIF to S3, since we need a URL for it before it can be displayed in our pipeline via an ANSI escape sequence. We then tarball the rest of our images as an archive and ship it off to our S3 bucket!
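A sketch of the upload step. Buildkite can render inline images in build output via an iTerm2-style `1338` escape sequence; the URL format below assumes a public-read object in a standard S3 bucket.

```shell
#!/usr/bin/env bash
set -euo pipefail

WORKDIR="${WORKDIR:-hava_images}"
GIF_NAME="${GIF_NAME:-infra.gif}"
GIF_BUCKET="${GIF_BUCKET:-my-hava-pipeline-artifacts}"

upload_and_archive() {
  # Push the GIF to S3 so the build log has a URL to render.
  aws s3 cp "${GIF_NAME}" "s3://${GIF_BUCKET}/${GIF_NAME}" --acl public-read

  # Buildkite inline-image escape sequence: displays the GIF in the log.
  printf "\033]1338;url='https://%s.s3.amazonaws.com/%s'\a\n" \
    "${GIF_BUCKET}" "${GIF_NAME}"

  # Archive the individual frames and ship them off as well.
  tar -czf images.tar.gz -C "${WORKDIR}" .
  aws s3 cp images.tar.gz "s3://${GIF_BUCKET}/images.tar.gz"
}
```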

While your pipeline is running, you can now directly see in Buildkite what has changed in your infrastructure.

Once you’ve got the script created, you can easily plug it into your Buildkite pipeline which would look something like this.
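The pipeline definition itself wasn’t preserved, so here is an illustrative pipeline.yml; the step labels and the terraform command are placeholders for whatever provisions your infrastructure:

```yaml
steps:
  - label: ":terraform: Apply infrastructure"
    command: terraform apply -auto-approve

  - wait

  - label: ":camera: GIF the infrastructure"
    command: ./gifinator.sh
```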

That’s it! The final result should look something like this.

The full script can be found below.

WeAreServian

The Cloud and Data Professionals
