Istio is not just for microservices

Secure your Kubernetes platform services by using Istio Service Mesh

Todd Kaplinger
IBM Cloud
12 min read · Jun 14, 2017


Overview

What is Istio? An open platform to connect, manage, and secure microservices

Ingredients

To get started, you should have an elementary understanding of Kubernetes and have installed Minikube, Docker, and Node.js locally. In this tutorial, I will demonstrate some basic concepts around deploying Docker containers in combination with Istio. Since I find that writing an application helps users understand how to apply concepts to their own use cases, I wrote a basic Node.js application to show the interaction between Istio and etcd.

Simple Scenario

Because Kubernetes is focused on cloud native applications, I wrote a very basic application that demonstrates how to store data in, and retrieve data from, etcd using Node.js and Express. For those not familiar with it, etcd is a distributed key-value store from CoreOS. I chose etcd because it exposes HTTP APIs, which means the same approach applies to other cloud platform services, such as Elasticsearch and CouchDB, that also provide REST APIs as their main method for storing data.

For this step, create a directory in the workspace called nodejs, then save the following code in this directory in a file named server.js.

server.js

const http = require('http');
const Etcd = require('node-etcd');
const express = require('express');
const app = express();

const bodyParser = require('body-parser');
app.use(bodyParser.json()); // for parsing application/json

const scheme = "http";
const ipAddress = "example-client"; // Kubernetes service name for etcd
const port = "2379";
const connectionAddress = scheme + "://" + ipAddress + ":" + port;

app.get('/', function (request, response) {
  response.send('Hello World!');
});

app.get('/storage/:key', function (request, response) {
  var etcd = new Etcd([connectionAddress] /*, options */);
  etcd.get(request.params.key, function (err, res) {
    if (!err) {
      response.writeHead(200);
      response.write("nodeAppTesting(" + ipAddress + ") ->" + JSON.stringify(res));
      response.end();
    } else {
      response.writeHead(500);
      response.write("nodeAppTesting failed(" + ipAddress + ") ->" + JSON.stringify(err));
      response.end();
    }
  });
});

app.put('/storage', function (request, response) {
  var jsonData = request.body;
  var etcd = new Etcd([connectionAddress] /*, options */);
  etcd.set(jsonData.key, jsonData.value, function (err, res) {
    if (err) {
      response.writeHead(500);
      response.write(JSON.stringify(err));
      response.end();
    } else {
      response.writeHead(201);
      response.write("nodeAppTesting created(" + ipAddress + ") ->" + JSON.stringify(jsonData));
      response.end();
    }
  });
});

app.listen(9080, function () {
  console.log('Example app listening on port 9080!');
});

The code above exposes three service API commands. The first is a basic hello world command with no dependency on etcd; I used it to quickly test that the application is working and that all of its dependencies installed properly. The other two are etcd-based create (PUT) and retrieve (GET) commands, incarnations of the traditional CRUD operations that REST API developers often build. As you can see, each of the etcd API commands creates a new etcd connection object that uses the connection address (scheme/hostname/port). You will learn where that value comes from in the next section.

Next, generate the package.json for this application. The simplest way that I have found to do this is to run the npm install command for each of the four required modules and then run the npm init command with the default values.

$ npm install http
$ npm install node-etcd
$ npm install express
$ npm install body-parser
$ npm init
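
For reference, the generated package.json looks roughly like the sketch below. The dependency versions are whatever npm installed in your environment (shown here as wildcards), and the start script is an optional addition on my part so that the npm start command used later in the Dockerfile explicitly runs server.js.

package.json (sketch)

{
  "name": "nodejs",
  "version": "1.0.0",
  "main": "server.js",
  "scripts": {
    "start": "node server.js"
  },
  "dependencies": {
    "body-parser": "*",
    "express": "*",
    "http": "*",
    "node-etcd": "*"
  }
}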

The goals for this section are to understand the flow for inserting and retrieving data from etcd by using the Node.js module node-etcd and to provide a quick refresher about creating a Node.js application.

Deploy etcd to Kubernetes

Now that the application is packaged, I need to test it. For this recipe, I used a prebuilt etcd Operator to deploy an etcd cluster and Minikube to deploy the Node.js application. Because the Node.js application depends on the etcd service, deploy that service first.

In your workspace, create a new directory for etcd as a peer of the nodejs directory that you created in the last step. For the sake of simplicity, I named my directory “etcd”. When you deploy the etcd cluster by using the Operator, you first need to register the etcd Operator as a Kubernetes Third Party Resource (TPR). The etcd Operator is described by a YAML file that references the etcd-operator image from CoreOS. Create this YAML file in the etcd directory:

deployment.yaml

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: etcd-operator
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: etcd-operator
    spec:
      containers:
      - name: etcd-operator
        image: quay.io/coreos/etcd-operator:v0.2.6
        env:
        - name: MY_POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: MY_POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name

If you haven’t already, start Minikube by running this command.

$ minikube start --vm-driver=xhyve

To install the etcd Operator and verify that it is installed by querying the third-party resources in your cluster, run the following kubectl commands.

$ kubectl create -f deployment.yaml
$ kubectl get thirdpartyresources

If the deployment executed correctly, you will see a third-party resource named “cluster.etcd.coreos.com” registered with your Kubernetes environment.

Now that you have deployed the third-party resource that provides the Operator functions for etcd, you can create a cluster that leverages the Operator. Create this YAML file in the etcd directory:

example-etcd-cluster.yaml

apiVersion: "etcd.coreos.com/v1beta1"
kind: "Cluster"
metadata:
  name: "example"
spec:
  size: 5
  version: "3.1.4"

The syntax for deploying a cluster is simple. The above YAML declares a deployment of kind Cluster that starts with an initial size of five members, runs etcd version 3.1.4, and is managed by the etcd Operator from CoreOS.

Now that you have defined the cluster, run the kubectl command with the apply option to deploy it. Because you use the apply option, you can later modify this YAML file to change the cluster, for example to scale the number of instances up or down, and simply rerun the command to update the cluster (see the example after the command below).

$ kubectl apply -f example-etcd-cluster.yaml
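
For instance, to shrink the cluster from five members to three, you could edit the size field and re-apply the same file; the Operator handles the membership changes for you.

$ # Edit example-etcd-cluster.yaml and change "size: 5" to "size: 3", then rerun:
$ kubectl apply -f example-etcd-cluster.yaml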

After successfully running this command, check that the pods have fully initialized and are in the Running state (kubectl get pods). For the purpose of this exercise, the most interesting service is “example-client”, the service that connects your Node.js application to the etcd cluster. You referenced this service name in your Node.js application.
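
A quick check looks like this; the pod names and ages will differ in your cluster:

$ kubectl get pods
$ # The service list should include "example-client", the client endpoint for the etcd cluster.
$ kubectl get services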

Deploy Node.js Application to Kubernetes

Now that the etcd service is deployed, deploy the Node.js application. The first step is to package the Node.js app as a container.

The configuration below packages the Node.js app as a basic Docker container and exposes port 9080, which the app uses to listen for HTTP requests. The container holds the dependencies that were added as part of npm install and, of course, the Express app. In the root of your application workspace (peer to the server.js and package.json files), create a Dockerfile that contains the following text:

Dockerfile

FROM node:6.9.2

# Create app directory
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
# Install app dependencies
COPY package.json /usr/src/app/
RUN npm install
# Bundle app source
COPY . /usr/src/app
EXPOSE 9080
CMD [ "npm", "start" ]

Great! Now that you have the Dockerfile, build the image and deploy it to Minikube for testing. The set of commands below shows how to deploy the Docker container and verify that the deployment was successful.

$ eval $(minikube docker-env)
$ # Delete the previous deployment, if present, from Minikube.
$ kubectl delete deployment etcd-node
$ # Build the Docker image that is based on the Dockerfile you created, and denote that this is version 1 of the image.
$ docker build -t etcd-node:v1 .
$ # Deploy version 1 of the image to Minikube. Note that the port number matches both the port that you exposed in the Dockerfile and the port that you defined in the server.js file.
$ kubectl run etcd-node --image=etcd-node:v1 --port=9080
$ #Expose a dynamically generated port to use when testing the Docker image.
$ kubectl expose deployment etcd-node --type=NodePort
$ # Query Minikube and Kubernetes to get the minikube ipAddress and service's NodePort.
$ ipAddress=$(minikube ip)
$ port=$(kubectl get svc etcd-node -o 'jsonpath={.spec.ports[0].nodePort}')
$ #Verify that create and retrieve API calls are successful reaching the Node.js app.
$ curl http://$ipAddress:$port/storage -H "Content-Type: application/json" -XPUT -d '{"key": "item1", "value":"Hello world"}'
$ curl http://$ipAddress:$port/storage/item1

The application works, and the simple scenario is complete. However, none of the transport communication is secured with TLS/SSL. Next, you will refactor this application to secure the HTTP transport.

Security beyond your application tier

Software architecture typically refers to the larger structures of a software system and deals with how multiple software processes cooperate to carry out their tasks. Web application design patterns often depict a multi-tier architecture that consists of components such as web servers and application services, plus persistence services such as databases and caches. In this example, we have two communication channels to secure. The most obvious channel is the set of Node.js HTTP endpoints that the application exposes as RESTful APIs. The less obvious one, and the focus of this article, is the transport communication between the application and the etcd service.

One major annoyance for developers is creating and managing SSL certificates for their applications and web servers. The certificate management process often involves multiple parties, and in the public cloud the problem is exacerbated because your applications often share the same environment with other tenants. Data on the wire is one of the most severe security exposures, and companies have little tolerance for this type of security escape. Despite these concerns, some services still do not support transport-level security by default, and developers often must build their own solutions to ensure data security, such as encrypting data at the application level before transmitting it on the wire.

Applying Secure Engineering with Istio

A few months ago, I was developing a prototype for microservices best practices, and I needed to secure transport-level communication between my microservices and back-end systems. At that time, I came across Istio and its vast set of capabilities around intelligent routing, versioning of APIs, resiliency against service failures, and security. For a great overview of Istio, I strongly suggest you read their overview document, which explains these capabilities in detail.

Reviewing all of Istio’s capabilities is beyond the scope of a single article. In this article, I use both Istio’s sidecar approach for pod-to-pod communication and its Ingress capability, which acts as an HTTP gateway to your application.

Installing Istio

The documentation for installing Istio is also very good; follow it to install Istio. In step 5, ensure that you enable mutual TLS by running this command:

$ kubectl apply -f install/kubernetes/istio-auth.yaml

Integrating Istio into my application

When you use Istio, you inject its services into existing Kubernetes YAML deployment files. After you install Istio, the istioctl CLI tool is available from your development environment. This CLI works in concert with kubectl and becomes part of your toolbox for deploying applications to Kubernetes.

You must configure your Kubernetes pod deployment to be compatible with the services that Istio injects. Defining a service that supports HTTP transport is an important Istio injection trigger. In addition to the deployment YAML described in the next section, modify the Node.js application to update the ipAddress constant, which maps to the service name that will be configured in the YAML file.

On line 10 of the server.js file, change the value of the ipAddress constant from "example-client" to "etcd-service", the Kubernetes service name for etcd that the new deployment YAML defines.
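
After the change, the connection setup at the top of server.js looks like this; only the ipAddress constant changes:

const scheme = "http";
const ipAddress = "etcd-service"; // Kubernetes service name for etcd
const port = "2379";
const connectionAddress = scheme + "://" + ipAddress + ":" + port;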

Next, rebuild the Docker image and push it to your registry on Docker Hub as version “v2”. Replace “todkap” with your Docker user name.

$ eval $(minikube docker-env)
$ export DOCKER_ID_USER="todkap"
$ docker login
$ docker build --no-cache=true -t todkap/etcd-node:v2 .
$ docker push todkap/etcd-node:v2

Deploying new workload with Istio

To simplify the integration with Istio, I created a new deployment YAML for the application. This all-in-one file defines the Node.js application deployment and service, the etcd deployment and service, and an Ingress rule that acts as a proxy to the application. For this step, create a directory in the workspace called istio_deployment, and then create an all-in-one-deployment.yaml file in that directory that contains this code:

all-in-one-deployment.yaml

apiVersion: v1
kind: Service
metadata:
  name: etcd-node
  labels:
    app: etcd-node
spec:
  ports:
  - port: 9080
    name: http
  selector:
    app: etcd-node
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: etcd-node-v2
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: etcd-node
        version: v1
    spec:
      containers:
      - name: etcd-node
        image: todkap/etcd-node:v2
        imagePullPolicy: Always
        ports:
        - containerPort: 9080
---
##################################################################################
# etcd service
##################################################################################
apiVersion: v1
kind: Service
metadata:
  name: etcd-service
  labels:
    app: etcd
spec:
  ports:
  - port: 2379
    name: http
  selector:
    app: etcd
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: etcd-v1
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: etcd
        version: v1
    spec:
      containers:
      - name: etcd
        image: quay.io/coreos/etcd:latest
        imagePullPolicy: Always
        ports:
        - containerPort: 2379
---
##################################################################################
# Ingress Routing
##################################################################################
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: simple-ingress
  annotations:
    kubernetes.io/ingress.class: "istio"
spec:
  rules:
  - http:
      paths:
      - path: /storage
        backend:
          serviceName: etcd-node
          servicePort: 9080
      - path: /storage/.*
        backend:
          serviceName: etcd-node
          servicePort: 9080
      - path: /foo
        backend:
          serviceName: etcd-node
          servicePort: 9080
---

Now that you have created the YAML file, review the impact that injecting Istio has on the deployment artifact.

Using the Istio CLI, run the following command:

$ istioctl kube-inject -f all-in-one-deployment.yaml

If the injection command succeeds, you will see the Istio sidecar injected as annotations similar to these:

example:
annotations:
alpha.istio.io/sidecar: injected
alpha.istio.io/version: jenkins@ubuntu-16-04-build-de3bbfab70500-0.1.5-21f4cb4
pod.beta.kubernetes.io/init-containers: '[{"args":["-p","15001","-u","1337"],"image":"docker.io/istio/init:0.1","imagePullPolicy":"Always","name":"init","securityContext":{"capabilities":{"add":["NET_ADMIN"]}}},{"args":["-c","sysctl
-w kernel.core_pattern=/tmp/core.%e.%p.%t \u0026\u0026 ulimit -c unlimited"],"command":["/bin/sh"],

You will find similar annotations for both your Node.js application and the etcd service. The sidecars act like traffic cops, routing traffic securely between your pods. They are installed as init containers that are co-located with your deployments in each pod and act as secure proxies. In this scenario, the traffic flows like this: the Node.js application uses plain HTTP to talk to its local sidecar, that sidecar uses HTTPS (mutual TLS) to talk to the etcd sidecar, and the etcd sidecar hands the request to etcd inside its pod. This method provides a secure transport for pod-to-pod communication without modifying the application logic.

Now that you have reviewed the code injection, deploy the application by using the kubectl command line:

$ kubectl create -f <(istioctl kube-inject -f all-in-one-deployment.yaml)

This command deploys the set of services to Kubernetes with the Istio capabilities running as a sidecar.

Verify that your pods are running and ready for validation. Use kubectl to get the list of pods and check their status to ensure that they are active.

$ kubectl get pods

Validating the integration

Now that the code is deployed and running, test that the deployment was successful. Modify the original test commands to go through the Ingress by obtaining the NodePort of the istio-ingress service.

$ # Query Minikube and Kubernetes to get the ipAddress and port number.
$ ipAddress=$(minikube ip)
$ port=$(kubectl get svc istio-ingress -o 'jsonpath={.spec.ports[0].nodePort}')
$ #Verify that the create and retrieve API calls are successfully reaching the Node.js app.
$ curl http://$ipAddress:$port/storage -H "Content-Type: application/json" -XPUT -d '{"key": "istioTest", "value":"Testing Istio using Ingress"}'
$ curl http://$ipAddress:$port/storage/istioTest

Now that you have validated the flow, take a closer look at the process. The Istio proxy automatically creates audit logs of its HTTP access data. To verify that the sidecars proxied the HTTP calls between the application and etcd, locate the matching pairs of API calls in the client and server logs.

First, obtain the CLIENT and SERVER pod names using the Kubernetes CLI and store them in variables. Run these commands:

$ CLIENT=$(kubectl get pod -l app=etcd-node -o jsonpath='{.items[0].metadata.name}')
$ SERVER=$(kubectl get pod -l app=etcd -o jsonpath='{.items[0].metadata.name}')
$ #Search the client logs for the API calls to etcd.
$ kubectl logs $CLIENT proxy | grep /v2/keys

The search returns a set of log entries. Review the contents of the most recent entry, which resembles this output:

[2017-06-13T19:21:02.542Z] "PUT /v2/keys/istioTest HTTP/1.1" 201 - 39 118 24 22 "-" "-" "db0a384c-7bee-9f96-9cad-ad78f4b47098" "etcd-service:2379" "172.17.0.10:2379"

In this log entry, the request id value is "db0a384c-7bee-9f96-9cad-ad78f4b47098".

Search the server logs for the same request id that you found in the CLIENT logs. This correlation id is a key component of Istio and a great way to track requests throughout the Kubernetes cluster.

$ kubectl logs $SERVER proxy | grep db0a384c-7bee-9f96-9cad-ad78f4b47098
[2017-06-13T19:21:02.551Z] "PUT /v2/keys/istioTest HTTP/1.1" 201 - 39 118 12 1 "-" "-" "db0a384c-7bee-9f96-9cad-ad78f4b47098" "etcd-service:2379" "127.0.0.1:2379"

You have now verified the end to end flow using Istio!

Securing the front door

So far, we have focused only on the internal communication inside our Kubernetes cluster. That is great, but we still have not addressed the lack of transport security between the client, say a browser, and the Ingress, the application’s front door. The good news is that one of my colleagues, Sachin, has written an article that demonstrates automating digital certificates by using Let’s Encrypt. It shows how annotations applied to your Ingress deployment can automatically provision certificates for your domain, so you can skip the typical manual process of creating your own custom SSL certificate and easily secure your application’s front door.
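
As a rough sketch of where that leads, a Kubernetes Ingress can reference a TLS secret that holds the certificate for your domain. The host name and secret name below are hypothetical placeholders, in Sachin’s approach the secret is provisioned automatically rather than created by hand, and whether your ingress controller honors the tls section depends on the controller and its version.

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: simple-ingress
  annotations:
    kubernetes.io/ingress.class: "istio"
spec:
  tls:
  - hosts:
    - myapp.example.com         # hypothetical domain
    secretName: myapp-tls-cert  # hypothetical secret holding the issued certificate
  rules:
  - http:
      paths:
      - path: /storage
        backend:
          serviceName: etcd-node
          servicePort: 9080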

Conclusion

In this article, I demonstrated a novel way to leverage the security capabilities of Istio for platform services. If you are interested in trying another use case for deploying Istio in your Kubernetes deployments, check out the Book Info sample application, which applies some of the same security concepts to a microservices-based application.

Originally published at developer.ibm.com.
