KubeCon 2017 Demo — Istio and Brigade CI/CD

Brian Redmond
5 min read · Dec 6, 2017


I was fortunate enough to be selected to present at the 2017 KubeCon + CloudNativeCon conference in Austin, Texas. My session goal was to show how to integrate a service mesh such as Istio with a CI/CD process on top of Kubernetes. This post is a write-up of the details of the demo I showed so others can try it and potentially expand upon it. The code is on GitHub, and you should be able to recreate it using the steps below.

Code: https://github.com/chzbrgr71/kube-con-2017

Step 1: Install Kubernetes

For my demo, I used Azure (I do work at Microsoft) and acs-engine. I wanted to use Kubernetes v1.7 with RBAC, and I needed to make some modifications to the API server for Istio, so acs-engine was a good fit. Feel free to install using whatever method makes sense for you; just be sure to have Kubernetes version 1.7.3 or newer.

Step 2: Modify Kubernetes API Server

For this demo, you will need the Istio proxy to be automatically injected into application pods. This is done with a new Kubernetes feature called Initializers, which requires modifying the api-server.yaml on your master node.
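As a sketch, the change is to append Initializers to the admission control plugin list and to enable the alpha admissionregistration API. The flag values below assume a typical v1.7 acs-engine master (the manifest usually lives under /etc/kubernetes/manifests); keep your existing plugin list, which may differ:

--admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultTolerationSeconds,ResourceQuota,Initializers
--runtime-config=admissionregistration.k8s.io/v1alpha1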

Restart the kubelet with “sudo systemctl restart kubelet”. Note that this will make the cluster temporarily unavailable.

Step 3: Add Istio

Download the latest Istio release to your local machine (I used v0.2.12). I suggest following the Istio install steps right from https://istio.io.

# Install istio itself:
kubectl apply -f install/kubernetes/istio.yaml
# Install add-ons:
kubectl apply -f install/kubernetes/addons/prometheus.yaml
kubectl apply -f install/kubernetes/addons/grafana.yaml
kubectl apply -f install/kubernetes/addons/servicegraph.yaml
kubectl apply -f install/kubernetes/addons/zipkin.yaml

Step 4: Add Istio Initializer

For this demo, we will configure Istio to inject the sidecar only into our “microsmack” namespace, as follows. First, create the namespace:

kubectl create namespace microsmack

Then modify the istio-initializer.yaml for the initializer pod (located in the Istio source in the /install/kubernetes/ directory). The ConfigMap section is shown below; we are only changing the namespaces parameter.
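For reference, here is the shape of that ConfigMap with the change applied, based on the Istio 0.2 istio-initializer.yaml (the stock file has additional keys under config, such as the params block, which you should keep as-is):

apiVersion: v1
kind: ConfigMap
metadata:
  name: istio-inject
  namespace: istio-system
data:
  config: |-
    policy: "enabled"
    namespaces: ["microsmack"] # inject sidecars only in this namespace
    initializerName: "sidecar.initializer.istio.io"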

Then apply to your cluster.

kubectl apply -f install/kubernetes/istio-initializer.yaml

Validate that the Istio pods are all running, as shown below. You can also validate the initializer with the sleep pod test described at https://istio.io/docs/setup/kubernetes/sidecar-injection.html.

kubectl get pod -n istio-system

NAME                                 STATUS
grafana-2460282047-94l6z             Running
istio-ca-293181461-ntnt2             Running
istio-egress-2098918753-97jst        Running
istio-ingress-3288103321-r3xs8       Running
istio-initializer-2508681778-j3l34   Running
istio-mixer-4195966866-90qp0         Running
istio-pilot-1168925427-bhzwk         Running
prometheus-4086688911-9frnb          Running
servicegraph-4072321759-pg4jq        Running
zipkin-3660596538-d8bs1              Running

Step 5: Deploy Brigade and Kashti

Brigade is simple to install; just make sure you have Helm working first. Note that RBAC is enabled in the install below.

# I used helm version 2.7.2
helm init --upgrade
helm repo add brigade https://azure.github.io/brigade
helm install -n brigade brigade/brigade --set rbac.enabled=true

We will also use Kashti to see the results of our builds. Install instructions are here: https://github.com/Azure/kashti/blob/master/docs/install.md
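At the time of writing, those instructions boiled down to a Helm install from the same Brigade chart repo, roughly as follows (check the linked docs for the exact, current steps):

helm install -n kashti brigade/kashti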

Step 6: Fork the Repo on GitHub

You’ll need your own GitHub account for this. My code is stored here: https://github.com/chzbrgr71/kube-con-2017.

To wire up the CI/CD webhooks and change the code, fork this repo into your own GitHub account. Make note of your repo’s URL for Step 8.

Additionally, you will need a GitHub OAuth token to allow Brigade to update the repo. You can create one at https://github.com/settings/tokens/new. Again, make note of it for Step 8.

Step 7: Create an Azure Container Registry

Follow the ACR docs to create your own Docker registry for this demo. https://docs.microsoft.com/en-us/azure/container-registry

I used an Admin user/password for my registry. You will use this along with your ACR login server in the Brigade project. Make note of it for Step 8.

Note: In order for Kubernetes to pull images from Azure Container Registry, the service principal used when creating your acs-engine cluster must have rights to the resource group where the ACR instance is deployed. If the service principal is a Contributor across the entire subscription, everything will just work. Otherwise, explicit rights will be needed, or you can use an imagePullSecret.
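If you go the imagePullSecret route, the standard kubectl command looks like this (the registry name and credentials are placeholders for your own ACR admin values):

kubectl create secret docker-registry acr-auth \
  --docker-server=<your-registry>.azurecr.io \
  --docker-username=<acr-admin-user> \
  --docker-password=<acr-admin-password> \
  --docker-email=you@example.com \
  -n microsmack

You would then reference it from your pod specs via imagePullSecrets.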

Step 8: Create Brigade Project

A Brigade project holds the configuration data needed to run your pipeline or workflow. These settings are stored as Kubernetes secrets. You will create a local brig-project.yaml file, similar to the one below, that holds the values for your project, and deploy it as a Helm chart. Do not store this file in a public repo since it contains secrets.

NOTE: Each Brigade project uses its own chart.

You will need to replace the values below with those for your environment. I also used Slack for the KubeCon demo; if you are not using Slack, you can leave that value out.
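Here is a sketch of what brig-project.yaml can look like. The top-level keys (project, repository, cloneURL, sharedSecret, github.token, secrets) follow the brigade-project chart of that era, but the individual secret names are placeholders of my own and must match whatever your brigade.js reads:

project: "your_id/kube-con-2017"
repository: "github.com/your_id/kube-con-2017"
cloneURL: "https://github.com/your_id/kube-con-2017.git"
sharedSecret: "<webhook shared secret>"
github:
  token: "<your GitHub OAuth token from Step 6>"
secrets:
  acrServer: "<your-registry>.azurecr.io"
  acrUser: "<acr-admin-user>"
  acrPassword: "<acr-admin-password>"
  slackWebhook: "<Slack incoming webhook URL; omit if not using Slack>"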

Install as follows (again, don’t store this file in your repo):

helm install --name kube-con-2017 brigade/brigade-project -f brig-project.yaml

Step 9: Install the App

There are a few components that are not part of our CI/CD flow and will be set up in advance. The image for the web front-end is on Docker Hub. The code is in the repo in case you want to modify or rebuild it.

These resources will be deployed into the microsmack namespace.

# First install the web front-end deployment/service
kubectl create -f web.yaml -n microsmack
# Then the headless service for our api
kubectl create -f api-svc.yaml -n microsmack
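For context, a headless service is simply one with clusterIP set to None, so DNS returns the individual pod IPs rather than a single virtual IP. The general shape is below; the names are illustrative, and the actual definition is in api-svc.yaml in the repo:

apiVersion: v1
kind: Service
metadata:
  name: smackapi
spec:
  clusterIP: None
  ports:
  - port: 8080
    name: http
  selector:
    app: smackapi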

Step 10: Set Up the Demo

The first run of the demo deploys the initial version of our API along with the Istio rules. To trigger this CI/CD process, we must configure a webhook in GitHub that points to Brigade.

Grab the EXTERNAL-IP for your Brigade Gateway service in your cluster. You can see this by running:

kubectl get service brigade-brigade-gw

Then

  1. Open your repo in GitHub: https://github.com/your_id/kube-con-2017
  2. Click on Settings and then Webhooks
  3. Add a new Webhook
  4. The payload URL will be: http://<your-external-IP>:7744/events/github
  5. Content type: application/json
  6. Secret: this is the sharedSecret that we used in our Brigade project YAML in Step 8
  7. Use “Let me select individual events” and select Push and Pull Request

Step 11: Run the Demo

The first step of the demo is to push an update to the master branch of your repo. The webhook should trigger a build and deploy a new version of the API. You can validate that it completed by accessing the EXTERNAL-IP of the smackweb service. Use this IP in the next step.

Create activity:

# run the following from a bash prompt
while true; do curl http://your_external_ip:8080; sleep 1; done

Access Grafana:

kubectl -n istio-system port-forward $(kubectl -n istio-system get pod -l app=grafana -o jsonpath='{.items[0].metadata.name}') 3000:3000 &
# then browse to:
http://localhost:3000/dashboard/db/istio-dashboard

Access Zipkin:

kubectl port-forward -n istio-system $(kubectl get pod -n istio-system -l app=zipkin -o jsonpath='{.items[0].metadata.name}') 9411:9411 &
# then browse to:
http://localhost:9411

Create a Pull Request

  • Create a dev branch in your repo
  • Update the “handlers.go” file in the /smackapi folder on your dev branch
  • In GitHub, open a pull request from this update (the pipeline applies Istio route rules to shift traffic; see the sketch after this list)
  • Check the updated results in the dashboard
  • Merge and validate
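To make the Istio side concrete, the kind of route rule the pipeline applies to split traffic between API versions looks like this in the Istio 0.2 (config.istio.io/v1alpha2) schema. The service name, version labels, and weights here are assumptions for illustration; see the repo for the actual rules:

apiVersion: config.istio.io/v1alpha2
kind: RouteRule
metadata:
  name: smackapi-canary
  namespace: microsmack
spec:
  destination:
    name: smackapi
  precedence: 5
  route:
  - labels:
      version: v1
    weight: 90 # most traffic stays on the current version
  - labels:
      version: v2
    weight: 10 # canary the PR build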

Look at build results in the Kashti Dashboard.

Please provide feedback in my GitHub repo. Thanks for playing…


Brian Redmond

I am a Cloud Architect on the Azure Global Black Belt team at Microsoft. I focus on containers, microservices, DevOps, and cloud native applications in Azure.