Talking a good DevOps game: Google Container Engine deployments, Kubernetes Dashboards and Google Home

Nithin Mallya
Google Cloud - Community
8 min read · Nov 2, 2017

This is a follow-up to my previous article (Let’s talk Deployments with Google Home, CircleCI and Google Container Engine) and goes into more detail about:

  • Organizing your GKE deployments.
  • Creating custom Kubernetes Dashboards with the Kubernetes APIs.
  • Using the Google Assistant to load dashboards.

Background: In past articles, we’ve deployed applications (mostly Rails apps) to GKE and used different techniques to achieve Zero Downtime Deployments using Kubernetes. The techniques in this article can be used for applications built in other languages as well.

We also saw a proof of concept where we could use the Google Assistant to kick off builds and deployments to GKE by integrating with RESTful APIs from CircleCI. In that article, I alluded to hooking the Assistant up with the Kubernetes API to show custom dashboards.

This article deals with the following in brief:

  1. What does a typical GKE deployment model look like?
  2. How to deploy your application to the various environments (QA, Staging, Production etc.)
  3. A quick look at the Kubernetes Web UI (Dashboard)
  4. How to use the Kubernetes APIs to implement custom dashboards
  5. Fun feature: How to show this dashboard or pretty much any other web page via voice command with the Google Assistant

Ok, let’s address each of these points.

What does a typical GKE deployment model look like?

In the image below, you see a sample Rails Application that does the following:

  • Allows users to post dog pictures. This application is deployed in GKE.
  • It also runs background processes (workers) that process these images, identify the dog’s breed and other characteristics, and persist these details in the database.
  • It uses CloudSQL to store data, StackDriver for logging/monitoring/alerting, Pub/Sub for async event processing and Storage buckets for storing the dog pictures.
  • The application’s users will access the website via HTTPS.
A standard deployment model for a Rails app and workers in Google Container Engine (GKE)

Under the hood, you will see an Ingress (routing rules) that does TLS termination and a Service (load balancer) that sends requests to the web application instances (Deployments). There are separate Deployments for the web application and for the workers (async processes). The Ingress and the Deployments use Secrets (for environment variables) to access the credentials needed to talk to third-party services. And since the app is a good citizen, it uses the CloudSQL Proxy to open secure connections to your database.
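
Once these pieces are in place, a quick sanity check is to list them all with a single kubectl command (shown here for the default namespace; your resource names will differ):

kubectl get ingress,service,deployment,secret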

Great! You have your first deployment in GKE. Now all you have to do is replicate this across your multiple environments (QA, Staging, Production….) and you’re done! :-)

How to deploy your application to the various environments (QA, Staging, Production etc.)

Time for the second diagram. Here’s one way to organize it:

Automated builds and deployments to multiple GKE environments

There is a lot of information in the above diagram.

Key features:

We now have three environments (QA, Staging or pre-production, and Production). Production has two types of deployments (blue and green); I’ll explain more about “cyan” later. All the build/test/deploy processes are fully automated and orchestrated by CircleCI, based on different triggers in the GitHub repository.

  1. Code merged to the develop branch gets deployed automatically to the cluster in the “my-qa” GCP project via CircleCI.
  2. Code merged to the master branch gets deployed automatically to the cluster in the “my-staging” GCP project.
  3. When you ‘git tag’ the master branch, a deployment happens automatically to the cluster in the “my-prod” GCP project, landing on the “cyan” deployment. The same version is also deployed to the non-active color on that cluster. For example, if the current live color is green, the new version goes to the blue deployment.
  4. The team can now internally test the features on “smoketest.mywebsite.com”, which points to the cyan deployment. At this point we have the new version of our application on the cyan deployment AND on the non-active blue deployment. Green still serves the current version in Production.
  5. Once everything looks good on “smoketest.mywebsite.com”, you can flip a switch and point your production app (myapp.mywebsite.com) to the newly deployed app in the “blue” deployment above (one way to do this is sketched right after this list).
  6. To keep the data model and the data current on the database front (for Rails applications), a Kubernetes Pod (the “db-updater” in the diagram above) is started during the deployment to run migrations and seeds; the deployment stops if it runs into any errors. With some creative querying, you can find out when the job ran and see its output. The Pod is deleted at the end of the process, but you can still see the logs in the CircleCI job that passed/failed.
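
As a sketch of the “flip” mentioned in step 5, one way to switch colors is to patch the production Service’s selector so it starts routing to the other deployment’s pods. The service name and the “color” label below are hypothetical placeholders:

kubectl patch service webserver -p '{"spec":{"selector":{"color":"blue"}}}'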

Note: If the blue/green and now cyan(!) terminology is not making much sense, please see my article on this topic.

How to leverage the default Kubernetes Dashboard

Once we have the above deployments in place, we need to make sure that our deployments stay healthy and that there are no unexpected restarts. We need a way to access this information visually, in addition to watching StackDriver dashboards and setting alerts with the right thresholds.

Tip: It is always good practice to look at the pods in production to see if there have been any new restarts, as these are trickier to catch via the usual monitoring and alerting approaches.
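
A quick way to spot restarts from the command line is to sort the pods by their restart count (this assumes a single container per pod):

kubectl get pods --sort-by='.status.containerStatuses[0].restartCount'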

Users familiar with the kubectl command know that Kubernetes provides a very clean, user-friendly dashboard when you run the command below. You can do a LOT with this dashboard (creating deployments, services and secrets, increasing/decreasing replicas etc.), and it should suffice in most cases.

kubectl proxy

Making a case for additional features: With distributed teams across time zones, the team member requesting a production push might not be familiar with the deployment model or with the gcloud and kubectl tools. In such cases, it helps to have a custom dashboard catered specifically to the applications we own, one that can be easily accessed and understood by the whole team. There are various ways to achieve this; they are discussed in the next section.

How to use the Kubernetes APIs to create custom Dashboards

If and when there is a need to go beyond the default Kubernetes Dashboard, we can create our own dashboards by invoking the Kubernetes RESTful APIs. There are a few ways to achieve this, described under “The HOW” below.

The image below shows a simple Dashboard that we can use to keep track of all our environments as well as to flip our deployments to the corresponding colors.

Highlights below:

  • See all our Kubernetes artifacts and their states (restart counts etc.). You can see our application is deployed to the webserver-blue, webserver-green and webserver-cyan deployments. We also have async worker processes (web-worker, phoenix-worker).
  • Deployment timestamps, to see when each deployment occurred and the image that was used.
  • The ability to switch to a different “colored” deployment with one click.
  • The ability to roll back the workers to a previous version (not shown in the diagram).
  • Automated Slack notifications when a Production color switch is done, with details about who made the change and when.
  • Chaos Monkey features (sending invalid data, testing connectivity breaks etc.) and more.
Sample DevOps Dashboard that shows the various Kubernetes artifacts and allows the user to modify deployments

The HOW:

Option I: As mentioned earlier, you can build a dashboard with the official Kubernetes client libraries (Go/Python) or the community-supported ones. More details are available here.

Option II: Create a simple Node.js Express app that hits the various Kubernetes endpoints on your master API servers and fetches the data. You need to know the IP addresses of the API servers in your clusters beforehand. With this approach, a single app can serve the DevOps dashboard, since it can fetch deployment specifics from multiple clusters in different projects; the dashboard image above uses this approach. A minimal sketch follows the example output below.

When you run a kubectl command with the verbosity set to 8:

kubectl <your command> --v=8

You will get the IP address of the API server and the actual GET/PATCH call that is invoked.

For example, if you run:

kubectl get deployment --v=8

You will see output that looks something like:

GET https://<API-SERVER-IP-ADDRESS>/apis/extensions/v1beta1/namespaces/default/deployments
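To make Option II concrete, here is a minimal sketch of such an Express app, assuming you already have the API server address and a service account bearer token with read access. The /deployments route and the response shape are illustrative choices, not part of the Kubernetes API:

// Minimal sketch: list deployments from one cluster via the Kubernetes API
const express = require('express');
const https = require('https');

const app = express();
const API_SERVER = process.env.API_SERVER; // the <API-SERVER-IP-ADDRESS> from above
const TOKEN = process.env.K8S_TOKEN;       // a service account bearer token

app.get('/deployments', (req, res) => {
  https.get({
    host: API_SERVER,
    path: '/apis/extensions/v1beta1/namespaces/default/deployments',
    headers: { Authorization: `Bearer ${TOKEN}` },
    rejectUnauthorized: false // for brevity; verify against the cluster CA in real use
  }, (apiRes) => {
    let body = '';
    apiRes.on('data', (chunk) => { body += chunk; });
    apiRes.on('end', () => {
      // Boil each deployment down to what the dashboard needs
      const deployments = JSON.parse(body).items.map((d) => ({
        name: d.metadata.name,
        image: d.spec.template.spec.containers[0].image,
        availableReplicas: d.status.availableReplicas
      }));
      res.json(deployments);
    });
  }).on('error', (err) => res.status(500).send(err.message));
});

app.listen(3000);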

Option III: Another option is to spin up a container in the cluster that runs ‘kubectl proxy’ locally; an application running in that container can then access the API via the http://localhost:8001/….. endpoint.

Note: this approach requires one such proxy container per cluster.
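
For example, from inside that container, the same deployments list from Option II becomes a plain local HTTP call with no token handling:

curl http://localhost:8001/apis/extensions/v1beta1/namespaces/default/deployments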

Fun feature: How to show this dashboard (or any other web page) via voice command with the Google Assistant

Here are a couple of ways to display your dashboard by voice command (“Show me the DevOps Dashboard”):

  1. You can use Google Pub/Sub and a local lightweight client that acts as the Pub/Sub consumer to load the Dashboard page (see my previous article on how to create the Entities, Intents and Fulfillment endpoints for the Google Assistant)

The Cloud Function snippet in Node.js would look like this:

// Cloud Function: publishes the dashboard URL to a Pub/Sub topic
const PubSub = require('@google-cloud/pubsub');

// Instantiates a client
const pubsub = PubSub();

function publish (req, res) {
  console.log('Publishing message to topic <YOUR TOPIC>');
  const topic = pubsub.topic('<YOUR TOPIC>');
  const message = {
    data: {
      url: '<MY DASHBOARD URL>'
    }
  };
  // Publishes a message
  return topic.publish(message)
    .then(() => res.status(200).send('Message published.'))
    .catch((err) => {
      console.error(err);
      res.status(500).send(err);
      return Promise.reject(err);
    });
}

exports.publish = publish; // HTTP-triggered entry point

The local Desktop client (Pub/Sub consumer) in Ruby would look like the example below (it runs continuously and simply opens a browser window with the URL obtained from the Pub/Sub message).

# Local Pub/Sub client (Ruby)
require 'google/cloud/pubsub'
require 'json'

pubsub = Google::Cloud::Pubsub.new(
  project: '<MY GCP PROJECT>',
  keyfile: '<MY SERVICE ACCOUNT CREDENTIALS JSON FILE>'
)

loop do
  sub = pubsub.subscription '<MY SUBSCRIPTION NAME>'
  msgs = sub.pull
  sub.acknowledge msgs unless msgs.empty?
  received_message = msgs[0]
  next if received_message.nil?
  json_obj = JSON.parse(received_message.data)
  url = json_obj['data']['url']
  `open #{url}` # macOS; use xdg-open on Linux
end

Note: Due to the asynchronous nature of this approach, be prepared for it to take about 10 seconds to load the dashboard.

2. Using a custom endpoint in a Node.js application with socket.io to push the user’s requested URLs to the client browser in real time. You would need the socket.io server and client libraries to achieve this. This is noticeably faster than the Pub/Sub approach; a minimal sketch is below.
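
Here is a minimal sketch of that second approach. The /show endpoint and the ‘open-url’ event name are illustrative placeholders:

// socket.io sketch: the Assistant’s fulfillment calls /show, and every
// connected browser is told to open the requested URL immediately.
const express = require('express');
const http = require('http');

const app = express();
const server = http.createServer(app);
const io = require('socket.io')(server);

// Called by the fulfillment endpoint, e.g. /show?url=<MY DASHBOARD URL>
app.get('/show', (req, res) => {
  io.emit('open-url', { url: req.query.url });
  res.status(200).send('Sent to connected browsers.');
});

server.listen(3000);

// On the client, a page that is already open listens for the event:
//   const socket = io('http://<SOCKET-SERVER>:3000');
//   socket.on('open-url', (msg) => window.open(msg.url));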

Conclusion: We’ve covered a lot of ground, describing in detail some of the deployment strategies that have helped us in the past. Hopefully, this article achieves what it set out to do: share different approaches to deploying our applications to GKE, while taking it up a notch with more modern ways of viewing dashboards via the Google Assistant.
