How to bind hostnames in Gateway/VirtualService Istio resources with NodePort exposed Istio Ingress

Istio allows you to bind a hostname to a specific Gateway or VirtualService resource using the hosts field. This is extremely helpful when you want to use different hostnames instead of paths to expose your applications. An example:

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: my-gateway
  namespace: my-namespace
spec:
  selector:
    app: my-gateway-controller
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - my-service.example.com

This alone wouldn’t be worth a blog post, but there are some pitfalls in on-premises environments I would like to talk about.

By default, Istio uses a Kubernetes LoadBalancer service to expose the Istio ingress proxy. The LoadBalancer service type is mostly known from cloud environments. Of course, you can also implement it in your on-premises environment with projects like MetalLB, but that comes with additional requirements and configuration. Therefore, Istio also supports exposing the Istio ingress as a NodePort service. The Kubernetes NodePort service type uses high ports to expose a service on every node in the cluster. So far, so good.
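As a sketch, an ingress gateway exposed as a NodePort service might look like the following (the service name, labels, and the NodePort 31380 are illustrative; by default Kubernetes assigns NodePorts from the 30000–32767 range):

```yaml
# Sketch of an Istio ingress gateway Service exposed as NodePort.
# Names and port numbers are assumptions, not taken from a real install.
apiVersion: v1
kind: Service
metadata:
  name: istio-ingressgateway
  namespace: istio-system
spec:
  type: NodePort
  selector:
    app: istio-ingressgateway
  ports:
  - name: http2
    port: 80          # service port inside the cluster
    targetPort: 8080  # container port of the ingress proxy
    nodePort: 31380   # high port opened on every node
```

With this in place, the ingress proxy is reachable on port 31380 of every node.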

This means your app will be available on something like http://my-node.domain.example:31380. As mentioned above, a common solution is to use a specific hostname for your application, like my-app.domain.example, and map Gateway or VirtualService resources to that name. And exactly this will lead you into some issues.

The hosts field value of an Istio resource is compared to the Host header of the incoming request. Based on the above example, the Host header would be my-node.domain.example:31380. Because this value needs to match the Istio resource configuration, it would need to be configured like this:

- my-service.example.com:31380
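To see why the port ends up in the comparison at all: an HTTP client includes a non-default port in the Host header it sends. A small illustration in Python (host_header is just a hypothetical helper name for this sketch):

```python
from urllib.parse import urlsplit


def host_header(url: str) -> str:
    """Return the Host header a client would send for this URL."""
    # netloc keeps the port when one is written in the URL,
    # which is exactly what happens with a NodePort like 31380
    return urlsplit(url).netloc


print(host_header("http://my-node.domain.example:31380/"))
# → my-node.domain.example:31380
print(host_header("http://my-node.domain.example/"))
# → my-node.domain.example
```

So as long as clients reach the app through the NodePort, the port is part of the value Istio compares against the hosts field.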

Unfortunately, this configuration will lead to an error because the hosts field is not meant to include ports.

You might think that just removing the port from the configuration is the answer. Unfortunately, it’s not, because then the Host header would no longer match the configuration and requests wouldn’t be routed correctly.

The solution is pretty easy: you just need to make sure your application is available on the default HTTP (80) and/or HTTPS (443) ports. The easiest way to do this is to use a load balancer or proxy in front of your cluster, which you might need anyway to provide resilience.
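As a sketch, a plain nginx reverse proxy in front of the cluster could listen on port 80 and forward to the NodePort on the nodes (all hostnames and the NodePort 31380 are illustrative assumptions):

```nginx
# Hypothetical upstream: the cluster nodes with the Istio ingress NodePort
upstream istio_ingress {
    server node1.domain.example:31380;
    server node2.domain.example:31380;
}

server {
    listen 80;
    server_name my-service.example.com;

    location / {
        proxy_pass http://istio_ingress;
        # Forward the original Host header (now without a port),
        # so it matches the hosts field of the Gateway/VirtualService
        proxy_set_header Host $host;
    }
}
```

Clients now send Host: my-service.example.com, which matches the hosts field, while the proxy also spreads traffic across the nodes.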

When searching for this issue you will find many posts and GitHub issues related to it, but most of them provide neither a solution nor an explanation. Therefore, I decided to write this post to maybe help someone out there.



Stories related to Kubernetes, CloudNative & DevOps topics by Nico Meisenzahl... 01001101? First char of my surname.
