Upgrade Istio on IBM Cloud Private

Morven Cao
Published in IBM Cloud
Jul 31, 2018

With the latest release of the ibm-istio chart, 0.8.0, there are many new features in addition to performance improvements, so some of you can’t wait to upgrade your existing service mesh to the new version. One reason may be that the previous version (0.7.1) of Istio on IBM Cloud Private was introduced as a technical preview and isn’t considered stable enough for a production environment.

In this blog, I’ll show you how to upgrade your Istio service mesh to the latest version (0.8.0) on IBM Cloud Private. If you haven’t played with Istio on IBM Cloud Private yet, you may want to read the blog: Get Started with Istio 0.8.0 on IBM Cloud Private.

In summary, the upgrade process consists of upgrading an existing Istio deployment (including the control plane and the data plane) and migrating existing configuration and API schemas to the new version. Currently, a zero-downtime upgrade can’t be implemented because of a helm upgrade issue that causes upgrading Istio from 0.7.1 to 0.8.0 to fail. We have to work around this by manually deleting Istio 0.7.1 and then re-installing 0.8.0. Because of this compromise, the route rules for Istio 0.7.1 will be lost after the installation of Istio 0.8.0 is finished.

The following steps assume that the Istio components are installed and upgraded in the istio-system namespace.

Control Plane Upgrade

There are some breaking changes to the control plane when moving from ibm-istio 0.7.1 to 0.8.0:

  • Introduced a new gateway component for ingress/egress traffic to replace the Istio ingress and to support the new traffic management API.
  • Split the policy check and telemetry functions of istio-mixer into istio-policy and istio-telemetry.
  • Renamed the security component from istio-ca to citadel.
  • Introduced a new tracing system, jaeger, to replace zipkin.

All of the changes above, as well as the helm upgrade issue, prevent us from upgrading the Istio control plane directly with zero downtime. Currently, you need to manually delete 0.7.1 and then re-install 0.8.0, either on the IBM Cloud Private management console or with the helm CLI. I'll cover both of these methods in the following steps.

Upgrade Istio on IBM Cloud Private management console

1. Log in to the IBM Cloud Private management console and search for istio under Workloads -> Helm Releases.

Istio release

2. Click ACTION for the target chart release and select Delete.

delete Istio release

3. Click the Remove button in the popup dialog; the previous installation of Istio will be deleted shortly.

remove Istio release

4. Clean up the old custom resource definitions for pilot.

# kubectl delete crd enduserauthenticationpolicyspecbindings.config.istio.io
# kubectl delete crd enduserauthenticationpolicyspecs.config.istio.io
# kubectl delete crd gateways.networking.istio.io
# kubectl delete crd httpapispecbindings.config.istio.io
# kubectl delete crd httpapispecs.config.istio.io
# kubectl delete crd policies.authentication.istio.io
# kubectl delete crd quotaspecbindings.config.istio.io
# kubectl delete crd quotaspecs.config.istio.io
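
To confirm that the old pilot CRDs have been removed before installing the new chart (an optional check, not part of the original steps), list the remaining Istio CRDs:

# kubectl get crd | grep istio.io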

5. Search for istio from the Catalog, and you will see the latest ibm-istio chart.

Istio chart

6. Click on the ibm-istio chart and you will be navigated to the chart details page.

ibm-istio chart

7. Click the Configure button and customize your Istio release to your preferences.

config Istio release

Note: If you are using a Kubernetes version prior to 1.9, you should disable sidecarInjectorWebhook, as it requires Kubernetes 1.9+.

8. Click the Install button and the new Istio chart will be deployed shortly.

Upgrade Istio with Helm CLI

1. Set up the Helm CLI.

2. Get and delete the existing Istio release, for example:

root@master:~# helm list --tls | grep istio
istio 1 Thu Jul 19 05:20:28 2018 DEPLOYED ibm-istio-0.7.1 istio-system
root@master:~# helm delete istio --purge --tls
release "istio" deleted

3. Clean up the old custom resource definitions for pilot.

# kubectl delete crd enduserauthenticationpolicyspecbindings.config.istio.io
# kubectl delete crd enduserauthenticationpolicyspecs.config.istio.io
# kubectl delete crd gateways.networking.istio.io
# kubectl delete crd httpapispecbindings.config.istio.io
# kubectl delete crd httpapispecs.config.istio.io
# kubectl delete crd policies.authentication.istio.io
# kubectl delete crd quotaspecbindings.config.istio.io
# kubectl delete crd quotaspecs.config.istio.io

4. Get the ibm-istio 0.8.0 chart and deploy the new chart with your preferred settings:

# curl -LO https://github.com/IBM/charts/raw/master/repo/stable/ibm-istio-0.8.0.tgz
# helm install ibm-istio-0.8.0.tgz --name istio --namespace istio-system --set grafana.enabled=true --set tracing.enabled=true --tls

Note: If you are using a Kubernetes version prior to 1.9, you should add --set sidecarInjectorWebhook.enabled=false, as sidecarInjectorWebhook requires Kubernetes 1.9+.

5. After the chart is deployed successfully, verify all pods for Istio are running:

# kubectl -n istio-system get pods
NAME READY STATUS RESTARTS AGE
istio-citadel-596bbf998d-4sqrg 1/1 Running 0 5m
istio-egressgateway-598bdc99fc-csgb4 1/1 Running 0 5m
istio-ingress-774847465-482rv 1/1 Running 0 5m
istio-ingressgateway-67b868865f-79k87 1/1 Running 0 5m
istio-pilot-7547959877-l44k7 2/2 Running 0 5m
istio-policy-96cd8cfd9-4qg6f 2/2 Running 0 5m
istio-sidecar-injector-76fc7df859-lkndd 1/1 Running 0 5m
istio-statsd-prom-bridge-6b99f48b49-bkwwv 1/1 Running 0 5m
istio-telemetry-6959fd8b88-7wcbq 2/2 Running 0 5m
prometheus-c865b55d9-9scfj 1/1 Running 0 5m

At this point, your Istio control plane should be upgraded to the new version. If there is any critical issue with the new control plane, you can roll back the changes by deleting the new chart release and re-installing the old version of the Istio chart.
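
For example, a minimal rollback sketch using the helm CLI might look like the following (assuming you still have the ibm-istio-0.7.1.tgz chart archive available locally):

# helm delete istio --purge --tls
# helm install ibm-istio-0.7.1.tgz --name istio --namespace istio-system --tls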

Data Plane Upgrade

After the control plane is upgraded, the applications are still running with the older version of the Envoy sidecar. To upgrade the data plane, you need to re-inject the sidecar. Just as with a fresh installation of Istio, there are two methods to upgrade the sidecar container: automatic sidecar injection and manual sidecar injection.

Automatic Sidecar Injection

For automatic sidecar injection, we need a rolling update of all application pods so that the new version of the sidecar will be automatically re-injected. One trick to trigger such a rolling update is to modify the terminationGracePeriodSeconds field of each existing deployment, as sketched below. For simplicity's sake, you can use a bash script to automate this process.
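
As an illustration only (not one of the official steps), patching terminationGracePeriodSeconds on a single deployment is enough to make Kubernetes roll its pods and re-inject the new sidecar; the deployment name my-app is a placeholder:

# kubectl -n ${YOUR_NAMESPACE} patch deployment my-app --type merge -p '{"spec":{"template":{"spec":{"terminationGracePeriodSeconds":31}}}}'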

1. Get the rolling update script and add execute permission.

# curl -LO https://gist.githubusercontent.com/jmound/ff6fa539385d1a057c82fa9fa739492e/raw/caed819555ecbaa8841c4dce07a367be6f639cc8/refresh.sh
# chmod +x refresh.sh

2. Set the environment variable NAMESPACE to the namespace where your existing applications are deployed, and then trigger the rolling update.

# export NAMESPACE=${YOUR_NAMESPACE}
# ./refresh.sh

3. Verify that the sidecars for your applications in the ${YOUR_NAMESPACE} namespace are upgraded to the new version by listing the container images of the pods.

# kubectl get pods -n ${YOUR_NAMESPACE} -o jsonpath='{.items[*].spec.containers[*].image}'

You should see that all sidecar containers are listed with the new image version ibmcom/istio-proxyv2:0.8.0 after all the old pods are terminated.

Manual Sidecar Injection

1. For manual sidecar injection, you can upgrade the sidecar for your existing applications by executing the following directly:

# kubectl replace -f <(istioctl kube-inject -f $ORIGINAL_DEPLOYMENT_YAML)

2. If the sidecar was previously injected with a customized inject ConfigMap, you will need to change the version tag in the ConfigMap to the new version and re-inject the sidecar as follows:

# kubectl replace -f <(istioctl kube-inject --injectConfigMapName ${INJECT_CONFIG_MAP} -f ${ORIGINAL_DEPLOYMENT_YAML})
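
As a rough sketch only, and assuming your customized inject ConfigMap references the proxy image by its 0.7.1 tag, you could rewrite the tag in place; since the sidecar template changed considerably between 0.7.1 and 0.8.0, regenerating the ConfigMap from the new chart is the safer option:

# kubectl -n istio-system get configmap ${INJECT_CONFIG_MAP} -o yaml | sed 's/0\.7\.1/0.8.0/g' | kubectl replace -f -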

3. Verify that the sidecars for your applications in the ${YOUR_NAMESPACE} namespace are upgraded to the new version by listing the container images of the pods.

# kubectl get pods -n ${YOUR_NAMESPACE} -o jsonpath='{.items[*].spec.containers[*].image}'

You will see that all sidecar containers are listed with the new image version ibmcom/istio-proxyv2:0.8.0 after all the old pods are terminated.

Migrating to the new networking APIs

Now that you’ve upgraded the control plane and sidecars, you can convert your existing ingress and route rules to the new traffic management APIs. As mentioned at the beginning of the blog, the route rules for Istio 0.7.1 will be lost after the control plane is upgraded to Istio 0.8.0, so currently you need to create new traffic rules based on your old ingress and route rules.
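
As a minimal sketch of what the new rules look like (the service name foo, port 8000, and the /foo path are placeholders rather than values taken from your old rules), an Ingress that routed external traffic to a service typically becomes a Gateway plus a VirtualService:

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: foo-gateway
  namespace: default
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: foo
  namespace: default
spec:
  hosts:
  - "*"
  gateways:
  - foo-gateway
  http:
  - match:
    - uri:
        prefix: /foo
    route:
    - destination:
        host: foo
        port:
          number: 8000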

Then delete your existing ingress and create the new traffic rules.

# kubectl delete -n ${YOUR_NAMESPACE} ingress ${EXISTING_INGRESS}
# kubectl create -n ${YOUR_NAMESPACE} -f NEW_TRAFFIC_RULES.yaml

Migrating per-service mutual TLS enablement

If you’re using service annotations to override the global mutual TLS setting for a service, you need to replace them with an authentication policy and destination rule.

For example, if you installed Istio with mTLS enabled and disabled it explicitly for service foo with an annotation like the one below:

apiVersion: v1
kind: Service
metadata:
  name: foo
  namespace: default
  annotations:
    auth.istio.io/8000: NONE

You need to replace it with an authentication policy and a destination rule:

apiVersion: "authentication.istio.io/v1alpha1"
kind: "Policy"
metadata:
name: "disable-mTLS-foo"
namespace: default
spec:
targets:
- name: foo
ports:
- number: 8000
peers:
---
apiVersion: "networking.istio.io/v1alpha3"
kind: "DestinationRule"
metadata:
name: "disable-mTLS-foo"
namespace: "default"
spec:
host: "foo"
trafficPolicy:
tls:
mode: ISTIO_MUTUAL
portLevelSettings:
- port:
number: 8000
tls:
mode: DISABLE

However, if you already have a destination rule for the foo service, you need to edit that rule instead of creating a new one. And if foo doesn't have the sidecar injected, you only need to add the destination rule.

Migrating mtls_excluded_services config to destination rules

If you installed Istio with mTLS enabled and added services explicitly to the mesh config mtls_excluded_services to disable mTLS when connecting to those services (e.g. the Kubernetes API server), you need to replace this by adding a destination rule, for example:

apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: "kubernetes-master"
  namespace: "default"
spec:
  host: "kubernetes.default.svc.cluster.local"
  trafficPolicy:
    tls:
      mode: DISABLE

Summary

In this blog, I have shown how to upgrade Istio from 0.7.1 to 0.8.0 on IBM Cloud Private. Due to the helm CLI upgrade issue and the many breaking changes between Istio 0.7.1 and 0.8.0, we had to make some compromises. The good news is that Istio 0.8.0 is a major release on the road to 1.0, bringing a great many new features and architectural improvements. Moreover, the upgrade to 1.0 should be easy to accomplish, since only small changes are expected between Istio 0.8.0 and 1.0.
