Level 1: Azure + Rails + Ember + CosmosDB

David Justice · Published in Azure Developers · May 11, 2017

Let’s level up our Azure DevOps and DevOpsSec. Join me in building a robust Rails + Ember application on Azure. This application illustrates an incremental step up from our “Level: 0” maturity model and will leverage the following Azure services: Virtual Machine Scale Sets (VMSS), Content Delivery Network (CDN), Key Vault, CosmosDB, and Azure Active Directory.

If you would like to follow along at home, all of the code for this post is located in GitHub and is free for use. There are instructions for running the demo locally as well as deploying it out to Azure. If you find issues or would like to contribute, please do.

Over the course of this blog post, we’ll explain the following script. The two key parts of the script are ./script/provision.sh, which provisions our infrastructure in Azure, and bundle exec cap production deploy:initial, which deploys the Rails + Ember Todo List application onto our Azure infrastructure.

Provisioning and deployment script
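
Condensed to its essence, the flow is just two steps (a minimal sketch; the shebang and error handling are assumed boilerplate, not taken from the repo):

```bash
#!/usr/bin/env bash
set -euo pipefail

# Provision the Azure infrastructure: VMSS, load balancer, Key Vault, CosmosDB, CDN, AAD identity
./script/provision.sh

# Deploy the Rails + Ember Todo List onto that infrastructure via Capistrano
bundle exec cap production deploy:initial
```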

You will need a few tools installed to run this demo; the full list is in the repository’s instructions.

For the folks who like video

The Rails + Ember Todo List

Rails + Ember Todo List

What’s the simplest application we can build that is still relatively real world? A todo list. This application allows you to create todo lists and todo items. The lists and items are displayed via an Ember.js single-page application backed by a Rails API application. The Ember application communicates with Rails via JSON:API over HTTP.
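
Nothing exotic happens on the wire; a request from the Ember app to the Rails API is just a JSON:API call over HTTP, conceptually something like this (the resource path is an assumption, not necessarily the app’s actual route):

```bash
# Hypothetical JSON:API request listing todo lists from the Rails API
curl -H "Accept: application/vnd.api+json" \
     -H "Content-Type: application/vnd.api+json" \
     http://localhost:3000/todo_lists
```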

Rails + Ember Todo Item

The Rails API application uses the Mongoid ODM framework for data persistence. Mongoid is backed by CosmosDB (previously DocumentDB) via the MongoDB wire protocol. Because CosmosDB speaks the MongoDB wire protocol, Rails + Mongoid thinks it’s talking to MongoDB, so we can use all of the tools we know and love that work with MongoDB. We also get to take advantage of CosmosDB as a managed, truly globally scalable data service, so there’s no more worrying about scaling and clustering a MongoDB deployment ourselves.
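
Because CosmosDB exposes the MongoDB wire protocol, ordinary MongoDB tooling works against it unchanged. For example, given the account’s MongoDB-format connection string, the stock mongo shell can connect directly (a quick sanity-check sketch; the variable name is a placeholder):

```bash
# Ping the CosmosDB account with the standard mongo shell, using its MongoDB connection string
mongo "$COSMOS_CONNECTION_STRING" --eval 'db.runCommand({ ping: 1 })'
```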

If you are not familiar with CosmosDB, I definitely recommend you take a look at it. It offers impressive global scalability and SLAs for latency, throughput, consistency, and availability at the 99th percentile. For example, for a typical 1-KB item, CosmosDB guarantees end-to-end latency of reads under 10 ms and indexed writes under 15 ms at the 99th percentile within the same Azure region. The median latencies are significantly lower (under 5 ms).

Infrastructure Overview

Infrastructure components

Now that we have an understanding of the application we are going to deploy, let’s describe the infrastructure we will build to host it. Our goal with this exercise is to create a real-world application infrastructure that provides for scaling and security.

For this blog post, we’ll only concern ourselves with hosting the application within a single region. We’ll expand to multiple regions in a future post.

The following are the Azure infrastructure services and their role in hosting this application:

  • Virtual Machine Scale Sets: provide a set of virtual machines to host Nginx, Rails, and the Azure CLI, which we can scale up and down as needed. These virtual machines will be provisioned with a certificate from Key Vault via the Azure provisioner, giving the applications running on them the ability to access other Azure resources.
  • Load Balancer: load balance public HTTP and SSH traffic across the virtual machines in the scale set. The load balancer is configured to probe for the health of each machine in the cluster.
  • Virtual Network (VNET): provide a network security boundary encapsulating the infrastructure.
  • Key Vault: provide a controlled access secret store for infrastructure and services.
  • CosmosDB: provide a managed data service which is exposed via the MongoDB wire format.
  • Azure Active Directory (AAD): provide an identity store for access to Azure management APIs. We’ll use AAD to create identities for the applications executing on the virtual machines. This will enable the application to fetch secrets.

Provisioning Infrastructure

Now that we’ve covered the components of the infrastructure and the goals of the application, we can start pulling the pieces together into a provisioning script. In “Level: 0”, we used a combination of scripts and SSH’ing into the machine to install and configure it. In this post we’ll level up incrementally and use the Azure CLI and cloud-init to provision the infrastructure. There will be no SSH’ing into the machines to configure them manually (until we deploy the Rails + Ember bits via Capistrano). All base configuration will be handled by the cloud-init script.

Cloud-init is the standard for customizing cloud instances. It provides a simple YAML interface for describing common tasks such as package management and installation, script execution, and so on.

Cloud images are operating system templates and every instance starts out as an identical clone of every other instance. It is the user data that gives every cloud instance its personality and cloud-init is the tool that applies user data to your instances automatically. — Canonical

cloud-init.yml

This cloud-init file is the template we’ll use in our deployment script. As you can probably tell, even if you are not terribly familiar with cloud-init syntax, it registers a package repository, installs some packages, and finally runs some scripts to prepare the machine. In fact, it installs all of the prerequisites for running the Rails application.

Obviously, cloud-init is much cleaner than hand-executed scripts. It’s definitely a step up in maturity.

Pay close attention to lines 30–33. We’ll cover this in more detail later, but they combine a public and private PEM certificate placed there by the Azure provisioner via Key Vault. That certificate is then used to authenticate the Azure CLI, so that the machine has a pre-authenticated Azure CLI instance it can leverage during code deployment to fetch secrets. Next we’ll see how that certificate gets placed on the virtual machine.
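
Conceptually, those cloud-init lines boil down to shell commands like the following. The actual template only substitutes {{fingerprint}}, {{tenant}}, and {{username}} via sed; the service principal identifier shown here is an illustrative placeholder.

```bash
# Combine the public (.crt) and private (.prv) halves placed by the Azure provisioner
# into a single PEM file; {{fingerprint}} is substituted by the provisioning script.
cat /var/lib/waagent/{{fingerprint}}.crt /var/lib/waagent/{{fingerprint}}.prv \
  > /home/{{username}}/azure-cert.pem

# Log the Azure CLI in as the service principal using that certificate, so code deployed
# later can fetch secrets without any interactive login.
az login --service-principal --username "$SERVICE_PRINCIPAL_ID" \
  --tenant {{tenant}} --password /home/{{username}}/azure-cert.pem
```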

The following Bash script will provision all of the infrastructure needed for running the Rails + Ember todo list. I’ll go through it section by section below.

Azure CLI provisioning script

Break down of the provisioning script:

  • Line 4: bail quickly on the script if it hits any errors
  • Lines 6–14: set up variables to keep the script relatively simple for others to use and to deal with globally unique values like DNS entries
  • Lines 17–20: log in if not already logged in
  • Lines 22–27: Create the level1 resource group if it doesn’t exist
  • Lines 29–36: Create the Key Vault if it doesn’t exist
  • Lines 38–46: Create the certificate to be used for the Azure Active Directory Service Principal, which will be the identity of the applications running on the virtual machines. Also create the Rails secret key, which will be used by each of the Rails instances (if this weren’t a demo, I wouldn’t store this value in source code). The core commands in this breakdown are sketched after this list.
  • Lines 48–53: Create a CosmosDB instance using the MongoDB wire format.
  • Lines 56–58: Delete the Azure Active Directory application and service principal if they already exist, so we can cleanly create new ones with the same name.
  • Lines 60–63: Download the public side of the Key Vault certificate created earlier and create an Azure Active Directory Application and Service Principal using the public side of the Key Vault certificate.
  • Line 66: Give the Service Principal access to read secrets in the instance of Key Vault.
  • Lines 69–70: Create a variable containing the virtual machine secrets reference to the certificate in Key Vault, to be used when provisioning the Virtual Machine Scale Set. This tells the Azure provisioner to place the certificate at /var/lib/waagent/{finger_print}.[crt|prv] on each virtual machine.
    Remember how we said we’d touch back on the public and private certificate (PEM) cat’ed together in the cloud-init script? Well, this is where that certificate comes from. The Azure provisioner requests the certificate on behalf of the user deploying the virtual machine and places the public and private sides of the certificate on the provisioned virtual machine. By using this certificate as the secret for our service principal, we are able to pre-authenticate the Azure CLI.
  • Lines 72–74: Use sed to replace {{fingerprint}}, {{tenant}}, and {{username}} values in the cloud-init to specialize the cloud-init script for the newly created infrastructure.
  • Line 75: Provision the Virtual Machine Scale Set with three instances, specifying the specialized cloud-init script and the secrets (the Key Vault certificate) to be provisioned on the virtual machines.
  • Lines 79–85: Configure the load balancer for traffic and health checks on port 80 routing back to the backend address pool containing the newly created virtual machines.
  • Lines 90–99: Provision Azure CDN fronting the public fully qualified domain name of the Virtual Machine Scale Set created on line 75. We’ll use the CDN to serve the JavaScript and CSS assets for the site.
  • Lines 101–106: We are done! Provide some helpful output.
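
To make the breakdown above concrete, here is a condensed sketch of the kind of Azure CLI calls the script makes. It is not the script itself: variable names, resource names, SKUs, and some flags are assumptions, and exact flag names have shifted across Azure CLI versions.

```bash
# Resource group and Key Vault (enabled for deployment so the VM provisioner may pull certificates)
az group create --name level1 --location westus
az keyvault create --resource-group level1 --name "$VAULT_NAME" --enabled-for-deployment true

# Certificate in Key Vault that will serve as the service principal's credential
az keyvault certificate create --vault-name "$VAULT_NAME" --name sp-cert \
  --policy "$(az keyvault certificate get-default-policy)"

# Download the public side and create the AAD application + service principal from it
az keyvault certificate download --vault-name "$VAULT_NAME" --name sp-cert --file sp-cert.pem
APP_ID=$(az ad sp create-for-rbac --name "$APP_NAME" --cert @sp-cert.pem --query appId -o tsv)

# Allow the service principal to read secrets from the vault
az keyvault set-policy --name "$VAULT_NAME" --spn "$APP_ID" --secret-permissions get list

# CosmosDB account speaking the MongoDB wire protocol
az cosmosdb create --resource-group level1 --name "$DB_NAME" --kind MongoDB

# Secrets reference telling the provisioner to drop the certificate onto each VM
VAULT_ID=$(az keyvault show --name "$VAULT_NAME" --query id -o tsv)
CERT_URL=$(az keyvault certificate show --vault-name "$VAULT_NAME" --name sp-cert --query sid -o tsv)
VM_SECRETS="[{\"sourceVault\":{\"id\":\"$VAULT_ID\"},\"vaultCertificates\":[{\"certificateUrl\":\"$CERT_URL\"}]}]"

# Scale set with three instances, the specialized cloud-init, and the Key Vault secret
az vmss create --resource-group level1 --name "$VMSS_NAME" --image UbuntuLTS \
  --instance-count 3 --admin-username "$USERNAME" --generate-ssh-keys \
  --public-ip-address-dns-name "$DNS_NAME" \
  --custom-data ./cloud-init-specialized.yml --secrets "$VM_SECRETS"

# Health probe and load-balancing rule on port 80 (the scale set's default LB is assumed to be "${VMSS_NAME}LB")
az network lb probe create --resource-group level1 --lb-name "${VMSS_NAME}LB" \
  --name http --protocol Http --port 80 --path /
az network lb rule create --resource-group level1 --lb-name "${VMSS_NAME}LB" \
  --name http --protocol Tcp --frontend-port 80 --backend-port 80 --probe-name http

# CDN profile and endpoint fronting the scale set's public FQDN
az cdn profile create --resource-group level1 --name "$CDN_PROFILE" --sku Standard_Akamai
az cdn endpoint create --resource-group level1 --profile-name "$CDN_PROFILE" \
  --name "$CDN_ENDPOINT" --origin "$PUBLIC_FQDN"
```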

Capistrano deploy.rb

Once provision.sh has run, the virtual machines in the scale set should be ready for code deployment via Capistrano. Let’s take a look at the changes that have been made to the deploy.rb file.

  • Line 4: Though we are still using Capistrano, we needed to add a bit to the deploy.rb file to ensure we target all of the virtual machines in the scale set. To target them dynamically based on the number of VMs in the set, we shell out to the Azure CLI to provide the list (the relevant CLI calls are sketched after this list).
  • Lines 56–66: Build the Ember application
  • Line 58: Shell out to the Azure CLI again to fetch the fully qualified domain name for the public IP of the load balancer.
  • Line 62: Replace the Ember environment configuration for the production API with the public fully qualified domain name of the load balancer. In practice, you will probably be able to use a more static value; it is done dynamically here so the demo works without hard-coded values.
  • Line 63: Set the CDN_HOST value so that Ember builds the asset links using the Azure CDN host, returned by shelling out to the Azure CLI for the fully qualified domain name of the CDN endpoint.
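
For reference, the values deploy.rb shells out for can be fetched with Azure CLI calls roughly like these (resource names are placeholders, and the public IP name created by the scale set deployment is an assumption):

```bash
# Hosts for Capistrano: public connection info (IP:port NAT mappings) for each instance in the scale set
az vmss list-instance-connection-info --resource-group level1 --name "$VMSS_NAME"

# Fully qualified domain name of the load balancer's public IP (used as the production API host)
az network public-ip show --resource-group level1 --name "$PUBLIC_IP_NAME" \
  --query dnsSettings.fqdn -o tsv

# Hostname of the CDN endpoint (used as CDN_HOST when building the Ember assets)
az cdn endpoint show --resource-group level1 --profile-name "$CDN_PROFILE" \
  --name "$CDN_ENDPOINT" --query hostName -o tsv
```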

DevOps Security Level Up

At no time was the private side of the Key Vault certificate, which was used as the secret for the virtual machines’ service principal identity, ever resident in memory or on disk on the developer’s machine. The certificate was generated in Key Vault and is only handled by the Azure provisioning service. The certificate used here has never left the production boundary and should never need to come in contact with humans. This is definitely a level up from Level 0.

Anatomy of a Bootstrapped Rails Application

Upon Capistrano deployment, the Rails + Mongoid application will try to start up. At a minimum, it requires a couple of secrets:

  1. secret_key_base
  2. Mongoid connection string

Rails secrets via Azure CLI

Look to lines 10–13. Upon startup, if we are running in production, we shell out to the Azure CLI to request the Key Vault secret for the secret_key_base. Since we have a pre-authenticated Azure CLI with access to that Key Vault, we are able to fetch the secret.
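
Stripped down to the CLI call, fetching that value looks something like this (the vault and secret names are placeholders):

```bash
# Fetch the Rails secret_key_base from Key Vault using the pre-authenticated Azure CLI
az keyvault secret show --vault-name "$VAULT_NAME" --name rails-secret-key-base \
  --query value -o tsv
```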

CosmosDB connection string via Azure CLI

Look to lines 25–29. Upon startup, if running in the production environment, the application shells out to the Azure CLI to fetch the connection strings for its instance of CosmosDB. Since we have a pre-authenticated Azure CLI with access to CosmosDB, we are able to retrieve connection strings for our application.
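
Again, the underlying CLI call is roughly the following (the account name is a placeholder; newer CLI versions expose the same data via az cosmosdb keys list --type connection-strings):

```bash
# Fetch the MongoDB-format connection string for the CosmosDB account
az cosmosdb list-connection-strings --resource-group level1 --name "$DB_NAME" \
  --query "connectionStrings[0].connectionString" -o tsv
```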

Want to run this locally?

Run the demo locally

Want to deploy this too?

Provision and deploy the demo

Leveling up to 1

This post concentrates on deploying a Rails application, but the ideas and principles outlined here could just as easily be applied to a Java, .NET, Golang, etc. application. There is nothing special about the application tier or the infrastructure provisioned. Leveling up your DevOps maturity translates across language and platform boundaries.

Almost every application requires bootstrapping with secrets. That bootstrapping needs to be done in a responsible manner, taking care to ensure that credential usage is tracked and sensitive information stays within secure domains. This is just one way the bootstrapping problem can be approached. I think you will agree that this is a level up from the previous post.

Less human interaction with provisioning and deployment is a good thing. In this post we’ve moved from hand-rolled virtual machines to building with scripts and cloud-init. This is definitely an improvement, but not even close to mature.

Leveling up to 2

DevOps is a broad term. We have only covered a small topic area, focused on the deployment of infrastructure and applications. As we progress in maturity, we’ll expand into continuous delivery, monitoring, and other related areas.

In my next post, we’ll dive deeper into infrastructure as code and level up again using Ansible on Azure and add some monitoring.
