Istio is not just for microservices

Secure your Kubernetes platform services by using Istio Service Mesh

Todd Kaplinger
IBM Cloud
12 min read · Jun 14, 2017


Overview

What is Istio? An open platform to connect, manage, and secure microservices

Ingredients

To get started, you should have an elementary understanding of Kubernetes and have installed Minikube, Docker, and Node.js locally. In this tutorial, I will demonstrate some basic concepts around deploying Docker containers in combination with Istio. Since I find that writing an application helps users understand how to apply concepts to their own use cases, I wrote a basic Node.js application to show the interaction between Istio and etcd.

Simple Scenario

Because Kubernetes is focused on cloud native applications, I wrote a very basic application that demonstrates how to store data in and retrieve data from etcd using Node.js and Express. For those not familiar with it, etcd is a distributed key-value store from CoreOS. I chose etcd because it exposes HTTP APIs, so the same approach can be applied to other cloud platform services, such as Elasticsearch and CouchDB, that also provide REST APIs as their primary method for storing data.

For this step, create a directory in the workspace called nodejs, then save the following code in this directory in a file named server.js.

server.js
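
The original gist for server.js is not reproduced here. What follows is a minimal sketch of the application described in this article, assuming Express, body-parser, and node-etcd as the modules and /hello, /put/:key/:value, and /get/:key as the route paths; the route names and response messages are illustrative, not the original code.

// server.js -- minimal sketch of the application described in this article.
// Module names and route paths are assumptions; the original gist may differ.
const express = require('express');
const bodyParser = require('body-parser');
const Etcd = require('node-etcd');

const app = express();
app.use(bodyParser.json());

// Connection address pieces for etcd (scheme/hostname/port)
const scheme = 'http';
const ipAddress = 'example-client'; // Kubernetes service name for etcd
const port = '2379';

// Basic hello world endpoint with no dependency on etcd
app.get('/hello', (req, res) => {
  res.send('Hello from the Node.js sample application!');
});

// Create (PUT): store a key/value pair in etcd
app.put('/put/:key/:value', (req, res) => {
  const etcd = new Etcd(scheme + '://' + ipAddress + ':' + port);
  etcd.set(req.params.key, req.params.value, (err) => {
    if (err) {
      res.status(500).send(err.toString());
    } else {
      res.send('Stored ' + req.params.key);
    }
  });
});

// Retrieve (GET): read a key from etcd
app.get('/get/:key', (req, res) => {
  const etcd = new Etcd(scheme + '://' + ipAddress + ':' + port);
  etcd.get(req.params.key, (err, result) => {
    if (err) {
      res.status(500).send(err.toString());
    } else {
      res.send(result.node.value);
    }
  });
});

app.listen(9080, () => console.log('Listening on port 9080'));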

The code above exposes three service API commands. The first is a basic hello world command that has no dependency on etcd; I used it to quickly verify that the simple application works and that all of my dependencies installed properly. The next two are etcd-based create (PUT) and retrieve (GET) commands, incarnations of the traditional CRUD operations that REST API developers often create. As you can see, each of the etcd API commands creates a new etcd connection object that uses the connection address (scheme/hostname/port). We will learn where that value comes from in the next section. Next, generate the package.json for this application. The simplest way that I have found to do this is to run the npm install command for each of the four required modules and then run the npm init command with the default values.
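
If you are following along with the sketch above, the packaging steps might look like this; the article does not list the four modules explicitly, so the module names here are an assumption:

npm install express body-parser node-etcd --save
npm init            # accept the default values when prompted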

The goals for this section are to understand the flow for inserting and retrieving data from etcd by using the Node.js module node-etcd and to provide a quick refresher about creating a Node.js application.

Deploy etcd to Kubernetes

Now that I have packaged my application, I need to test it. For this recipe, I used a prebuilt etcd Operator to deploy an etcd cluster and Minikube to deploy the Node.js application. Since the Node.js application depends on the etcd service, deploy that service first.

In your workspace, create a new directory for etcd. This directory is a peer of the nodejs directory that you created in the last step. For the sake of simplicity, I named my directory “etcd”. When you deploy the etcd cluster by using the Operator, you first need to register the etcd Operator as a Kubernetes Third Party Resource (TPR). The etcd Operator is described by a YAML file that references the etcd-operator image from CoreOS. Create this YAML file in the etcd directory:

deployment.yaml
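
The original deployment.yaml is not shown here. A minimal sketch, modeled on the standard CoreOS etcd-operator example of that era, might look like the following; the image tag is an assumption:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: etcd-operator
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: etcd-operator
    spec:
      containers:
      - name: etcd-operator
        # Image tag is an assumption; use the release current for your environment
        image: quay.io/coreos/etcd-operator:v0.2.6
        env:
        - name: MY_POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: MY_POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name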

If you haven’t already, start Minikube by running this command.
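
For example (add resource flags as needed for your environment):

minikube start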

To install the etcd Operator and verify the Operator has been installed by querying the third-party resources for your cluster, run the following kubectl commands.
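
For example, assuming the deployment.yaml file from the previous step:

kubectl create -f deployment.yaml
kubectl get thirdpartyresources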

If the deployment executed correctly, you can see that a third-party resource named “cluster.etcd.coreos.com” is registered with your Kubernetes environment.

Now that you have deployed the third-party resource that provides the Operator functions for etcd, you can create a cluster that leverages the Operator. Create this YAML file in the etcd directory:

example-etcd-cluster.yaml
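
The original example-etcd-cluster.yaml is not shown here. A sketch consistent with the description below might look like this; the cluster name “example” is an assumption chosen so that the Operator’s generated client service matches the “example-client” name referenced later, and the etcd version is also an assumption:

apiVersion: "etcd.coreos.com/v1beta1"
kind: "Cluster"
metadata:
  name: "example"
spec:
  size: 5
  # etcd version is an assumption; verify the generated client service name in your environment
  version: "3.1.8"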

The syntax for deploying a cluster is simple and straightforward. In the above YAML, the deployment is of kind Cluster, has an initial size of 5, and leverages the etcd Operator from CoreOS.

Now that you have defined the cluster, run the kubectl command with the apply option to deploy it. Because you use the apply option, you can later modify this YAML file to change the cluster, such as scaling the instances up or down, and simply rerun the command to update it.
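
For example:

kubectl apply -f example-etcd-cluster.yaml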

After successfully running this command, check that the pods have fully initialized and are in the Running state (kubectl get pods). For the purposes of this exercise, the most interesting service is the “example-client” service, which connects your Node.js application to the etcd service. You referenced this service name in your Node.js application.

Deploy Node.js Application to Kubernetes

Now that the etcd service is deployed, deploy the Node.js application. The first step is to package the Node.js app as a container.

The configuration below packages the Node.js app as a basic Docker container and exposes port 9080, which the app listens on for HTTP requests. The container includes the dependencies that were added as part of npm install and, of course, the Express app. In the root of your application workspace (peer to the server.js and package.json files), create a Dockerfile that contains the following text:

Dockerfile
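
The original Dockerfile is not shown here; a minimal sketch matching the description above might look like this (the base image tag is an assumption):

# Base image tag is an assumption; the original article may use a different Node.js version
FROM node:6-alpine
WORKDIR /usr/src/app
COPY package.json .
RUN npm install
COPY server.js .
EXPOSE 9080
CMD ["node", "server.js"]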

Great! Now that you have the Dockerfile, you need to build the image and deploy it to Minikube for testing. Using the set of commands below, I will show you how to deploy the Docker container and verify that the deployment was successful.
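
One possible sequence is shown below; the image name todkap/node-web-app and the test routes come from the earlier sketches rather than the original article:

# Build and publish the application image
docker build -t todkap/node-web-app:v1 .
docker push todkap/node-web-app:v1

# Deploy to Minikube and expose the app on a NodePort
kubectl run node-web-app --image=todkap/node-web-app:v1 --port=9080
kubectl expose deployment node-web-app --type=NodePort --port=9080

# Verify the three endpoints
curl $(minikube service node-web-app --url)/hello
curl -X PUT $(minikube service node-web-app --url)/put/message/hello-etcd
curl $(minikube service node-web-app --url)/get/message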

The application works, and the simple scenario is complete. However, none of the transport communications are secured using TLS/SSL. Next, you refactor this application to secure the HTTP transport communication.

Security beyond your application tier

Software architecture typically refers to the bigger structures of a software system, and it deals with how multiple software processes cooperate to carry out their tasks. Web application design patterns often depict a multi-tier architecture that consists of components, such as web servers and application services, and persistent services, such as databases and caching. In this example, we have two communication channels to secure. The most obvious channel is the Node.js HTTP endpoints that the application exposes as RESTful APIs. The less obvious one, and the focus of this article, is the transport communication between the application and the etcd service.

One major annoyance for developers is creating and managing SSL certificates for their applications and web servers. The SSL certificate management process often involves multiple parties, and in the public cloud the problem is exacerbated because your applications often share the same environment with other tenants. Unencrypted data on the wire is one of the most severe security exposures, and companies have little tolerance for this type of security escape. Despite these concerns, there are still services that do not support native transport-level security by default, and developers often must create their own solutions to ensure data security, such as encrypting data at the application level before transmitting it on the wire.

Applying Secure Engineering with Istio

A few months ago, I was developing a prototype for microservices best practices, and I needed to secure transport level communication between my microservices and back end systems. At that time, I came across Istio and its vast set of capabilities around intelligent routing, versioning of APIs, resiliency against service failures, and security. For a great overview of Istio, I strongly suggest you read their overview document that explains the capabilities in great detail.

Reviewing all of Istio’s capabilities is beyond the scope of a single article. In this article, I use both Istio’s sidecar approach for pod-to-pod communication and its Ingress capabilities, which act as an HTTP gateway to your application.

Installing Istio

The documentation for installing Istio is also very good. Follow it to install Istio. In step 5, ensure that you enable Mutual TLS by running this command:
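
With the Istio 0.1.x releases that this article targets, the mutual TLS variant of the install was applied from the root of the Istio release directory roughly like this; the exact path and file name may differ in later releases:

kubectl apply -f install/kubernetes/istio-auth.yaml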

Integrating Istio into my application

When you use Istio, you inject its services into existing Kubernetes YAML deployment files. After you install Istio, you can access the istioctl CLI tool from your development environment. This CLI works in concert with kubectl and becomes part of your toolbox for deploying applications to Kubernetes.

You must configure your Kubernetes pod deployment to be compatible with the services that Istio can inject. Defining a service that supports HTTP transport is an important Istio injection trigger. In addition to the deployment YAML described in the next section, modify the Node.js application to update the ipAddress parameter, which maps to the service name that will be configured in the YAML file.

On line 10 of the server.js file, replace

const ipAddress = "example-client" // Kubernetes service name for etcd

with

const ipAddress = "etcd-service" // Kubernetes service name for etcd

Next, rebuild the Docker image and push it as version “v2” to your registry on Docker Hub. Replace “todkap” with your Docker user name.
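
The build and push commands might look like this; the image name node-web-app is an assumption carried over from the earlier sketch:

docker build -t todkap/node-web-app:v2 .
docker push todkap/node-web-app:v2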

Deploying new workload with Istio

To simplify integration with Istio, I created a new deployment YAML for my application. This all-in-one YAML file defines the Node.js application deployment and service, the etcd deployment and service, and an Ingress rule that acts as a proxy to the application. For this step, create a directory in the workspace called istio_deployment, and then create an all-in-one-deployment.yaml file in that directory that contains this code:

all-in-one-deployment.yaml
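
The original all-in-one-deployment.yaml is not shown here. The sketch below illustrates the shape of such a file: an etcd deployment and service (named etcd-service to match the application change above), the Node.js deployment and service, and an Ingress that uses the Istio ingress class. Names, image tags, and port details are assumptions; note that the service ports are named http, which matters for Istio’s HTTP routing.

apiVersion: v1
kind: Service
metadata:
  name: etcd-service
  labels:
    app: etcd
spec:
  ports:
  - name: http
    port: 2379
  selector:
    app: etcd
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: etcd
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: etcd
    spec:
      containers:
      - name: etcd
        # Single-node etcd for this example; image tag is an assumption
        image: quay.io/coreos/etcd:v3.1.8
        command:
        - /usr/local/bin/etcd
        - --listen-client-urls=http://0.0.0.0:2379
        - --advertise-client-urls=http://etcd-service:2379
        ports:
        - containerPort: 2379
---
apiVersion: v1
kind: Service
metadata:
  name: node-web-app
  labels:
    app: node-web-app
spec:
  ports:
  - name: http
    port: 9080
  selector:
    app: node-web-app
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: node-web-app
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: node-web-app
    spec:
      containers:
      - name: node-web-app
        image: todkap/node-web-app:v2
        ports:
        - containerPort: 9080
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: node-web-app-ingress
  annotations:
    kubernetes.io/ingress.class: istio
spec:
  backend:
    serviceName: node-web-app
    servicePort: 9080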

Now that you created the yaml file, review the impact that injecting Istio has on the deployment artifact.

Using the Istio CLI, run the following command:
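
For example, write the injected manifest to a new file so that you can inspect it; the output file name is illustrative:

istioctl kube-inject -f all-in-one-deployment.yaml > all-in-one-deployment-injected.yaml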

If the injection command succeeded, you can see the Istio sidecar injected as an annotation similar to this:

You will find similar annotations in both your Node.js application and the etcd service. These sidecars act like traffic cops, routing traffic securely between your pods. The sidecars, along with their init containers, are co-located with your deployments in each pod and act as secure proxies. In this scenario, the traffic flows like this: the Node.js application uses HTTP to communicate with the Node.js sidecar, which uses HTTPS to communicate with the etcd sidecar, which uses HTTPS to communicate with etcd. This method provides a secure transport for pod-to-pod communication without modifying the application logic.

Now that you have reviewed the code injection, deploy the application by using the kubectl command line:
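
Assuming the injected file name from the previous step:

kubectl apply -f all-in-one-deployment-injected.yaml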

This command deploys the set of services to Kubernetes with the Istio capabilities running as a sidecar.

Verify that your pods are running and ready for validation. Use kubectl to get the list of pods and check their status to ensure that they are active.

Validating the integration

Now that the code is deployed and running, test that the deployment was successful. Modify the original test commands to go through the Ingress: obtain the NodePort of the Ingress service and issue the requests against it.
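
A possible way to do this on Minikube, assuming the Istio ingress service is named istio-ingress and using the routes from the earlier server.js sketch:

export INGRESS_URL=$(minikube ip):$(kubectl get svc istio-ingress -o jsonpath='{.spec.ports[0].nodePort}')
curl http://$INGRESS_URL/hello
curl -X PUT http://$INGRESS_URL/put/message/hello-istio
curl http://$INGRESS_URL/get/message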

Now that you validated the flow, take a closer look at the process. The Istio proxy automatically creates audit logs that display its HTTP access data. To verify that the sidecar applied a proxy to the HTTP calls between the application and etcd, locate the pairs of API calls between the client and server in the logs.

First, obtain the CLIENT and SERVER pod names using the Kubernetes CLI and store them in variables. Run these commands:
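
A possible form of these commands, assuming the labels from the deployment sketch above and that the injected sidecar container is named proxy:

export CLIENT=$(kubectl get pods -l app=node-web-app -o jsonpath='{.items[0].metadata.name}')
export SERVER=$(kubectl get pods -l app=etcd -o jsonpath='{.items[0].metadata.name}')
# Search the client-side proxy access log for calls to the etcd service
kubectl logs $CLIENT -c proxy | grep etcd-service
# Then search the server-side proxy for the x-request-id value seen in the client entry
kubectl logs $SERVER -c proxy | grep <request-id-from-client-log>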

The search returns a set of log entries. Review the contents of the most recent entry, which resembles this output:

Search the server logs for the same API request id that was seen in the CLIENT logs. This correlation id is a key component of Istio and is a great way to track requests throughout the Kubernetes cluster.

You have now verified the end to end flow using Istio!

Securing the front door

So far, we have focused only on the internal communication inside our Kubernetes cluster. While this is great, we still have not addressed the lack of transport security between our client, say a browser, and the Ingress, the application’s front door. The great news is that one of my colleagues has written an article that demonstrates creating automated digital certificates by using Let’s Encrypt. In this article, Sachin demonstrates how annotations can be applied to the deployment of your Ingress to automatically provision certificates for your domain. You can avoid the typical manual process of creating your own custom SSL certificate and easily secure your application’s front door.

Conclusion

In this article, I demonstrated a novel way to leverage the amazing security capabilities of Istio for platform services. If you are interested in trying out another use case for deploying Istio in your Kubernetes deployments, check out the Book Info sample application. This sample app applies some of the same security concepts to microservices-based applications.

Originally published at developer.ibm.com.


Todd Kaplinger
IBM Cloud

Vice President SW Engineering — Chief Architect, Retail Solutions@NCR Voyix. The opinions expressed here are my own. Follow me on Twitter @todkap