Level 3+: Kubernetes and Chef Habitat on Azure

David Justice
Published in Azure Developers
8 min read · Jun 1, 2017

For those of you who have been following my DevOps on Azure progression series, you will know this topic is a bit out of order. I had intended to cover container-based immutable infrastructure later in the series, but after presenting Habitat packages deploying to Kubernetes on Azure Container Service at ChefConf 2017, it seemed like a great opportunity to pen this content while it’s still fresh in mind.

Here’s a link to the beginning of the series if you’d like to catch up a bit on the previous posts. If not, don’t worry; this post is self-contained, and you will be no worse off for not having reviewed the previous posts.

What are we going to do today?

Rails + Ember Todo List

In this post we are going to deploy a Rails + Ember + CosmosDB application to Kubernetes using Azure Container Service and Chef Habitat. We’ll provision an Azure Container Registry, a Kubernetes cluster, and a CosmosDB instance in MongoDB mode. After provisioning the infrastructure, we’ll package our Rails application with Chef Habitat and export a Docker image. We’ll push the Docker image to our private registry and, finally, deploy our application to our Kubernetes cluster.

All of the code for this project can be found on GitHub in the following repository.

Video of the demo if you like that sort of thing

Prerequisites for running the demo

Project Structure

If you clone the devigned/hab-rails-todo repository locally, you will see the following project structure.

$ tree . -L 1
.
├── LICENSE
├── README.md
├── azure
├── demo.sh
├── habitat
└── src

The ./azure directory contains provision.sh, which builds the Azure infrastructure. The ./habitat directory contains the Chef Habitat related files. The ./src directory contains the source for the Rails todo application we’ll be deploying. Finally, ./demo.sh contains a rough script you could follow on your own to replicate the steps described in this post.

In the rest of the post, it is assumed all commands execute from the root of the devigned/hab-rails-todo repo.

Provisioning Azure Infrastructure

Let’s first go through the infrastructure provisioning script. You can execute the provisioning script by running ./azure/provision.sh.

./azure/provision.sh

The first thing to take away from this script is that it uses the Azure CLI to reach a goal state, so it can be re-run any number of times.
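That idempotency boils down to a simple check-then-create pattern. Here’s a minimal, generic sketch of the pattern (the `ensure` helper is my own illustration, not a function from the script), using a local directory to stand in for a cloud resource:

```shell
#!/bin/sh
# Goal-state pattern: check whether a resource exists before creating it,
# so the script is safe to re-run any number of times.
ensure() {
  name=$1; check=$2; create=$3
  if eval "$check" >/dev/null 2>&1; then
    echo "$name exists, skipping"
  else
    eval "$create" && echo "$name created"
  fi
}

# Demo: a temp directory stands in for a cloud resource.
dir=$(mktemp -d)/resource
ensure "resource" "test -d $dir" "mkdir -p $dir"   # first run creates it
ensure "resource" "test -d $dir" "mkdir -p $dir"   # second run is a no-op
```

Every resource in provision.sh follows this shape: probe for the resource, and only create it if the probe fails.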

  • Lines 5–11: Set up some variables to be used in the rest of the script
  • Lines 18–23: Create our resource group if one doesn’t already exist
  • Lines 25–31: Create a local public and private key pair for cluster authentication
  • Lines 33–39: Create the CosmosDB instance in MongoDB mode
  • Lines 41–48: Create the Azure Container Service instance in Kubernetes mode with 2 agents for running containers
  • Lines 50–58: Create the Azure Container Registry
  • Lines 60–67: Login to the newly created container registry using the generated credentials
  • Lines 69–75: Fetch Kubernetes credentials and store them in ~/.kube/config in preparation for using kubectl
  • Lines 77–80: If kubectl is not in the path, install it
  • Lines 82–83: Create the Habitat ./habitat/default.toml with default configuration
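The full script lives in the repo; the following is a condensed sketch of the same flow using Azure CLI commands. The resource names, location, and SSH key path are placeholders I’ve made up, and the commands require an authenticated Azure CLI session, so treat this as orientation rather than a drop-in replacement for provision.sh.

```shell
RG=hab-rails-todo
LOCATION=westus2

# Create the resource group only if it doesn't already exist
az group exists --name $RG | grep -q true || \
  az group create --name $RG --location $LOCATION

# CosmosDB instance in MongoDB mode
az cosmosdb create --name hab-todo-db --resource-group $RG --kind MongoDB

# Azure Container Service in Kubernetes mode with 2 agents
az acs create --name hab-todo-k8s --resource-group $RG \
  --orchestrator-type kubernetes --agent-count 2 \
  --ssh-key-value ~/.ssh/hab-todo_rsa.pub

# Private container registry
az acr create --name habtodoregistry --resource-group $RG --sku Basic

# Fetch Kubernetes credentials into ~/.kube/config for kubectl
az acs kubernetes get-credentials --name hab-todo-k8s --resource-group $RG
```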

At the completion of this script, we’ll be ready to deploy container images to Kubernetes using our private container registry.

Ok… so now that we have all of the Azure infrastructure set up, how are we going to package and deploy our Rails application onto the Kubernetes cluster? We are going to use Chef Habitat!

What is Chef Habitat?

Chef Habitat is a packaging, deployment, and application management tool that provides an abstraction over the environment the application runs in (virtual machines, containers, bare metal, etc.). It allows you, the developer, to concentrate less on environmental configuration, process supervision, and deployment details, and more on delivering delight.

As an aside, Habitat is written in Rust, and is incredibly snappy. Take a look at the rudimentary comparison to kubectl below. Habitat is an order of magnitude quicker.

$ time hab -h 1>/dev/null
real 0m0.011s
user 0m0.004s
sys 0m0.003s
$ time kubectl -h 1>/dev/null
real 0m0.131s
user 0m0.108s
sys 0m0.019s

Ok… so how do I package this Rails application?

hab studio enter
build
hab pkg export docker devigned/rails-todo
  • hab studio enter creates and opens a Habitat clean room (a Docker container) for building your application. It provides an environment with all of the tools you need to build and package your application.
  • build builds the Habitat package, a signed .hart file, which is really just a tar.gz
  • hab pkg export docker devigned/rails-todo exports the .hart package to a Docker image.

After exiting out of Habitat studio and running docker images --format "table {{ .Repository }}\t{{ .Tag }}", you should see the following.

$ docker images --format "table {{ .Repository }}\t{{ .Tag }}"
REPOSITORY TAG
devigned/rails-todo 0.1.0-20170531150240
devigned/rails-todo latest
habitat-docker-registry.bintray.io/studio 0.24.1

Wat!? Habitat just built me a Docker image. Dope!
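If you want to sanity-check the image before pushing it anywhere, you can run it locally. The port mapping below assumes the Rails server binds to 3000 per the Habitat configuration:

```shell
docker run -it -p 3000:3000 devigned/rails-todo
```

The image’s entrypoint is the Habitat supervisor, which runs the package’s hooks and starts the application.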

That seems too easy… What just happened?

Remember the ./habitat directory in the root of the repo? Well, that directory holds the details of how we will package the Rails application in ./src. Here’s a listing of the ./habitat directory.

$ tree ./habitat/
./habitat/
├── config
│ ├── mongoid.yml
│ └── secrets.yml
├── default.toml
├── hooks
│ ├── init
│ └── run
└── plan.sh

The main file is the ./habitat/plan.sh, which describes the packages the application depends upon (lines 8–21), what ports the application exposes (lines 22–23), how to build and install the application (lines 40–53) as well as other pertinent pieces of metadata about the application.

./habitat/plan.sh

The config directory contains Handlebars-templated configuration files, whose values are filled in from the ./habitat/default.toml file. For example, in ./habitat/config/mongoid.yml the {{cfg.mongodb_uri}} placeholder (line 4) is replaced with some_connection_string from ./habitat/default.toml (line 2). Similarly, ./habitat/config/secrets.yml will have configuration values injected.
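To make the substitution concrete, here’s a hypothetical excerpt of the two files; the exact contents in the repo may differ:

```shell
# ./habitat/config/mongoid.yml (excerpt) - Handlebars placeholder
clients:
  default:
    uri: {{cfg.mongodb_uri}}

# ./habitat/default.toml (excerpt) - supplies the default value
mongodb_uri = "some_connection_string"
```

At service start (and on reconfiguration), the Habitat supervisor renders the template, so the application sees a plain mongoid.yml containing the real connection string.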

./habitat/config/mongoid.yml
./habitat/default.toml

The ./habitat/hooks directory contains scripts that the Habitat supervisor runs at different points in the application’s lifecycle. For this application, we’ll only be using two of the hooks, init and run. There are multiple other hooks (file_updated, health_check, reload, suitability, reconfigure, etc.). You can find more information about hooks in the Habitat documentation.

The init hook runs when a Habitat topology starts. As you can see from the following gist, the init hook moves our application’s files into the running directory of the application, /hab/svc/$pkg_name/static.

./habitat/hooks/init
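The real hook is in the repo; a minimal sketch of an init hook that does what’s described above might look like this (the exact copy semantics are an assumption on my part):

```shell
#!/bin/sh
# ./habitat/hooks/init (sketch)
# Copy the packaged application source into the service's running directory.
# {{pkg.path}} and {{pkg.name}} are Handlebars expressions the Habitat
# supervisor fills in when it renders the hook.
cp -a {{pkg.path}}/static /hab/svc/{{pkg.name}}/
```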

The run hook runs when one of the following conditions occurs:

  • The main topology starts, after the init hook has been called
  • When a package is updated, after the init hook has been called
  • When the package config changes, after the init hook has been called, but before a reconfigure hook is called

As you can see from the following gist, the run hook sets up our Rails environment variables, and kicks off the Rails server bound to the IP and port defined in our Habitat configuration default source, ./habitat/default.toml.

./habitat/hooks/run
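Here’s a sketch of such a run hook, assuming configuration keys like rails_env, host, and port in default.toml (the actual key names in the repo may differ):

```shell
#!/bin/sh
# ./habitat/hooks/run (sketch)
cd /hab/svc/{{pkg.name}}/static

# Rails environment variables from the Habitat configuration
export RAILS_ENV={{cfg.rails_env}}
export SECRET_KEY_BASE={{cfg.secret_key_base}}
export MONGODB_URI={{cfg.mongodb_uri}}

# Bind the server to the configured IP and port; exec keeps the Rails
# process directly under the Habitat supervisor.
exec bundle exec rails server -b {{cfg.host}} -p {{cfg.port}} 2>&1
```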

Ok… we’ve deep dived enough into our Habitat configuration and packaging. Let’s deploy this application onto our Kubernetes cluster.

Deploying to Kubernetes on Azure

At this point, we have a Docker image containing our application, and we have our Azure infrastructure provisioned. We just need to tag and push our image to our private registry, and then ask Kubernetes to run our image.

To push the image to our private registry, run the following commands.

push image to private repo
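The embedded gist isn’t reproduced here, but the shape of these commands is standard Docker; the registry name below is a placeholder, and yours comes from the ACR created by provision.sh:

```shell
REGISTRY=habtodoregistry.azurecr.io

# docker login against the registry was handled at the end of provision.sh
docker tag devigned/rails-todo $REGISTRY/rails-todo
docker push $REGISTRY/rails-todo
```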

Once the image is pushed to the private registry, we are ready to deploy the image to our Kubernetes cluster.

deploy to Kubernetes
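Again, the gist itself isn’t inlined here, but a run / expose / scale sequence along these lines would do it (the image name and replica count are illustrative):

```shell
# Run the image from the private registry (creates a deployment)
kubectl run rails-todo --image habtodoregistry.azurecr.io/rails-todo:latest

# Put an Azure load balancer in front, forwarding port 80 to Rails on 3000
kubectl expose deployment rails-todo --port=80 --target-port=3000 --type=LoadBalancer

# Scale out to 3 replicas
kubectl scale deployment rails-todo --replicas=3
```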

The above set of commands will deploy, load balance, and scale out your Chef Habitat–packaged Rails todo application on Azure Container Service running Kubernetes!

Leveling up your DevOps

Each of the posts in this progression series has focused on leveling up your DevOps practices (mostly focused on deployment and provisioning). In this post we have described an example of a container-based immutable infrastructure deployment using Chef Habitat, Kubernetes, and Docker, but we haven’t talked about the why. Why is this a progression in DevOps maturity over previous models (handcrafted pets, infrastructure as code, etc.)?

As systems grow in complexity, it becomes increasingly difficult to reason about the state of a system. In an already complex system, additional complexity accumulates when you must account for the mutating state of each individual component to fully understand the state of the entire system. When running “pets” (long-lived machines with mutating state), we must constantly be aware of the state of the pets and ensure they don’t skew too far out of compliance, adding to the overall complexity of the system. When building infrastructure with code, one is able to define, test, and rebuild infrastructure through automated processes, creating an environment where it’s easy to build and destroy infrastructure, leading to less state. When building immutable infrastructure with containers, we are able to define, test, build, and persist packaged infrastructure. The key here is the ability to reason about a stateless piece of packaged infrastructure. The packaged infrastructure is versioned, signed, and will not mutate state. Pair immutable packages with the added benefit of speedy container deployment and teardown, and we further reduce the complexity of the system while making it easier to replace infrastructure components.

Another interesting aspect of Chef Habitat is that the package metadata acts similarly to how we manage code dependencies, and offers many of the same benefits. We can reason about upstream changes and rebuild / redeploy packages based on those changes. For example, this application takes a dependency on core/openssl. If there were an important upstream change to that package (perhaps a vulnerability was found), we could be alerted of the change and redeploy with the latest patch. This isn’t limited to direct dependencies; it could include the transitive closure of all of the application’s dependencies (all the app’s dependencies as well as all of the dependencies’ dependencies, recursively). This helps us better understand the state of our system in the context of an ever-changing ecosystem of dependencies.

Chef Habitat’s package metadata provides a higher-level abstraction over packaged infrastructure, which allows us to reason about more than just dependent packages. The package metadata enables us to expose promises (intentions). For example, our application promises to expose a service on port 3000. In a more complex topology, this could allow another infrastructure component to bind to this promise. In fact, there is a whole theory behind this promise stuff.

Promise theory may be viewed as a logical and graph theoretical framework for understanding complex relationships in networks, where many constraints have to be met

With infrastructure acting like autonomous agents describing their intentions, the system can reason about topology and self-organize. It almost starts to sound like a biological system. A system that is able to reason about its own organization lowers the complexity for operators by automating organization and constraint resolution.

At the end of the day, we need to grok the state of our already complex systems. Anything we can do to limit complexity and provide more clarity into the state of our systems is leveling up our operation awareness and maturity.
