Istio and the path to Production.

Matt Law
4 min read · Aug 17, 2018


On the 31st of July it was announced that Istio was production ready. Istio shows a lot of promise for providing a service mesh for Kubernetes-based solutions. The Istio documentation provides a great introduction to its feature set, and to getting the sample Bookinfo app working.

In my current environment we use Traefik for service discovery, but it's limited to HTTP/HTTPS traffic and can't provide TCP service management. We have also decided to use a domain per team, mapped into a namespace. I'll try to describe how Istio provided a suitable solution for us.

Pre-requisites:

We need a few things before we can start: a platform (in our case GKE), an external IP, and a bit of background knowledge.

Platform:

In this example I'm using a GKE environment. Instructions for setting that up are here:

External IP:

For our GKE environment, I ran the following command to find the EXTERNAL-IP.

$ kubectl get svc istio-ingressgateway -n istio-system
NAME                   TYPE           CLUSTER-IP      EXTERNAL-IP    PORT(S)                                      AGE
istio-ingressgateway   LoadBalancer   10.59.251.109   35.194.26.85   80:31380/TCP,443:31390/TCP,31400:31400/TCP   6m

Note down the IP address (EXTERNAL-IP) and the ports assigned.

Reading:

Please have a read of the Istio introduction documents; they are a very good way to understand the building blocks. I suggest starting here.

Option 1: Single Istio Gateway Setup

Let's start simple: we are going to use a single Istio gateway for all traffic across all teams/namespaces.

This is “production”, so let's start in a secure manner. We will enable TLS from the outset.

The Istio gateway's job is to provide a mapping to/from the Ingress, as well as state which FQDNs it should be responsible for. In my scenario, this gateway will be responsible for all domains for all namespaces in the cluster. In our use case this is easy to achieve with a wildcard entry (in the example below, this is *.domain.com). In my environment, all of our namespaces have a subdomain off the TLD; again using the domain.com example, we have team1.domain.com and team2.domain.com. These domains will be mapped into each team's namespace, team1 and team2 respectively. You could add as many domains as you need here, depending on how your environment is set up.

For each namespace we have, we need to create one or more VirtualService definitions to provide the mapping to the specific Kubernetes pods.

To summarise, for the following example we have:

  • a TLD for the entire cluster (domain.com)
  • 2 namespaces (team1 & team2)
  • subdomains based on the TLD for each namespace (team1.domain.com & team2.domain.com)

Namespace:

Your namespace will need to be labelled so that sidecar injection is set up automatically for any new pods you create. This enables mTLS between pods automatically, and we like this!

kubectl label namespace team1 istio-injection=enabled
kubectl label namespace team2 istio-injection=enabled

Gateway:

First, let's create the gateway. We are going to have this live in the istio-system namespace (it could really live anywhere).

From the outset we are going to secure our gateway with HTTPS. We need to set up a secret (I'm hoping you have a cert/key to use; if not, generate one). More info is here.

kubectl create -n istio-system secret tls istio-ingressgateway-certs --key domain.key --cert domain.pem

This secret is used as below.

Create a new yaml file called global-gateway.yaml
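Here is a minimal sketch of what that file could look like, assuming the default istio-ingressgateway deployment (which mounts the istio-ingressgateway-certs secret at /etc/istio/ingressgateway-certs/) and our *.domain.com wildcard:

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: global-gateway
  namespace: istio-system
spec:
  selector:
    istio: ingressgateway        # bind to the default istio ingress gateway pods
  servers:
  - port:
      number: 443
      name: https
      protocol: HTTPS
    tls:
      mode: SIMPLE               # terminate TLS at the gateway
      serverCertificate: /etc/istio/ingressgateway-certs/tls.crt
      privateKey: /etc/istio/ingressgateway-certs/tls.key
    hosts:
    - "*.domain.com"             # wildcard: this gateway fronts every team subdomain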

Let's apply that with the following command into the istio-system namespace.

kubectl apply -f global-gateway.yaml --namespace=istio-system

Check that it's there:

istioctl get gateways --all-namespaces
GATEWAY NAME              HOSTS             NAMESPACE      AGE
global-gateway            *.domain.com      istio-system   1d

VirtualService

Let's now create the VirtualService in the namespace in question. Every new pod/service that is created must be added to this file.

Note:

  • The gateways definition must use the gateway's FQDN, as the gateway lives in another namespace.
  • The hosts definition must include only the domain this namespace/team is responsible for; adding a wildcard (e.g. *.domain.com) here could suck up all traffic!
  • Everything under ‘http’ in the yaml example below provides the definition that links this VirtualService to your pod. You will need to know what you are providing access to in order to create this file. I know I'm going to create 2 services (service1 & service2), whose ports are 9080 & 80 respectively.
  • At the time of writing, you cannot have a single domain in more than one VirtualService. You can however have more than one VirtualService per gateway, as long as the domains are different.

Create a file called team1-vs.yaml and add the following to it.
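A sketch of what team1-vs.yaml could look like is below. The URI prefixes (/service1 and /service2) are illustrative assumptions; the important parts are the fully-qualified gateway name and the single team1.domain.com host:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: team1
  namespace: team1
spec:
  hosts:
  - team1.domain.com                                  # only this team's domain
  gateways:
  - global-gateway.istio-system.svc.cluster.local     # FQDN of the shared gateway
  http:
  - match:
    - uri:
        prefix: /service1                             # illustrative path
    route:
    - destination:
        host: service1          # Kubernetes Service name in the team1 namespace
        port:
          number: 9080
  - match:
    - uri:
        prefix: /service2                             # illustrative path
    route:
    - destination:
        host: service2
        port:
          number: 80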

Let's apply that:

kubectl apply -f team1-vs.yaml

Let's check that it's there:

istioctl get virtualservices --all-namespaces
VIRTUAL-SERVICE NAME   GATEWAYS                                        HOSTS              #HTTP     #TCP      NAMESPACE     AGE
team1                  global-gateway.istio-system.svc.cluster.local   team1.domain.com   2         0         team1         1d
team2                  global-gateway.istio-system.svc.cluster.local   team2.domain.com   1         0         team2         1d

We are all ready to go!

Deploy a Service

Deploying your solution is fairly simple. At minimum, define a Kubernetes Deployment and a Kubernetes Service. Remember that the ports in the example below need to match the ones we mapped in our VirtualService above.

For example, here is the Bookinfo sample tailored to our use case. It's going to deploy service1 on port 9080.
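A sketch of what service1-example.yaml could contain is below. The image is a placeholder you would swap for your own; the key point is that the Service exposes port 9080 to match the VirtualService:

apiVersion: v1
kind: Service
metadata:
  name: service1
  labels:
    app: service1
spec:
  ports:
  - port: 9080
    name: http                  # named port so Istio treats this as HTTP traffic
  selector:
    app: service1
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: service1-v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: service1
  template:
    metadata:
      labels:
        app: service1
    spec:
      containers:
      - name: service1
        image: your-registry/service1:latest   # placeholder image; replace with your own
        ports:
        - containerPort: 9080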

Let's apply that:

kubectl apply -f service1-example.yaml --namespace=team1

Traffic is then routed to your pods via the definitions in the VirtualService. You're pretty much done!
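If DNS isn't in place yet (see below), you can sanity-check the routing by pointing curl straight at the external IP. The /service1 path here assumes the illustrative prefixes from the VirtualService sketch above:

curl -k --resolve team1.domain.com:443:35.194.26.85 https://team1.domain.com/service1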

DNS Entry

Way back at the start we determined our external IP address. You'll need to add DNS entries for team1.domain.com and team2.domain.com pointing to that external IP.

You could map *.domain.com to the IP, or add individual entries, depending on your use case.
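For illustration, the individual zone file entries could look something like this, using the EXTERNAL-IP we noted earlier:

team1.domain.com.   300   IN   A   35.194.26.85
team2.domain.com.   300   IN   A   35.194.26.85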

Next Steps:

Part 2 is now published, describing how to provide access to the dashboards that come with the installation. I intend to write a little more about a multi-gateway solution and troubleshooting soon - stay tuned!
