How to set up Istio on EKS

Sreejith Sreejayan
4 min read · Aug 25, 2021


It’s simple to set up Istio on EKS. I’ll be using Istio 1.11.1 on top of EKS 1.21 in this example.

All files can be cloned from the repo git@github.com:sreejith421/istio.git or https://github.com/sreejith421/istio.

Please keep in mind that the load balancer will be an AWS ALB, and all traffic will be routed through the Istio ingress gateway.

Step 1: Using curl -L, download the Istio release binaries to your local machine.

curl -L https://istio.io/downloadIstio | sh -

This downloads the most recent version of Istio from the Istio website; 1.11.1 was the latest release at the time of writing.
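If you want to pin a specific version instead of taking whatever is latest, the download script also accepts an ISTIO_VERSION variable (a minimal sketch; adjust the version to the one you need):

curl -L https://istio.io/downloadIstio | ISTIO_VERSION=1.11.1 sh -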

Step 2: cd into istio-1.11.1 in the terminal (the directory name changes with the version) and run the export command below to add the istioctl binary to your PATH.

export PATH=$PWD/bin:$PATH
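To confirm the client binary is picked up from your PATH before touching the cluster, you can check the local version (the --remote=false flag queries only the local binary, not the cluster):

istioctl version --remote=false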

Step 3: I’m using istioctl to install Istio on the Kubernetes cluster. Under the hood, istioctl renders Go/Helm-based templates. Assuming you have terminal access to the Kubernetes cluster, run the following command.

istioctl install --set profile=default --set values.gateways.istio-ingressgateway.type=NodePort --set meshConfig.outboundTrafficPolicy.mode=ALLOW_ANY --set meshConfig.accessLogFile=/dev/stdout -y

A brief description of the flags above: we’re using the default profile, which is suitable for a production environment. meshConfig.outboundTrafficPolicy.mode=ALLOW_ANY allows all egress traffic from the pods to the outside; this is the default behaviour, so strictly speaking it doesn’t need to be set. If you want to restrict egress access, install an egress gateway and add a ServiceEntry for each allowed domain. meshConfig.accessLogFile=/dev/stdout enables access logs on the sidecar proxies, and -y skips the confirmation prompt.

values.gateways.istio-ingressgateway.type=NodePort is critical: by default, Istio exposes the ingress gateway as a LoadBalancer service, which on AWS provisions a Classic Load Balancer. Since we want an ALB instead, we change the service type to NodePort and put an ALB in front of it ourselves.
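If you prefer keeping these installation settings in version control rather than on the command line, the same flags can be expressed as an IstioOperator manifest (a minimal sketch of the equivalent; the resource name and the filename istio-eks.yaml are just examples):

apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
  name: istio-eks
  namespace: istio-system
spec:
  profile: default
  meshConfig:
    outboundTrafficPolicy:
      mode: ALLOW_ANY
    accessLogFile: /dev/stdout
  values:
    gateways:
      istio-ingressgateway:
        type: NodePort

Apply it with istioctl install -f istio-eks.yaml -y.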

If everything looks good, wait a few moments after running the command and check kubectl get pods -n istio-system for two pods: one is the ingress gateway and the other is istiod. Check both pods’ logs to ensure everything is in order.
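For example, the checks might look like this (pod names will differ in your cluster; the -l label selectors use Istio’s standard labels so you don’t have to copy generated pod names):

kubectl get pods -n istio-system
kubectl logs -n istio-system -l app=istiod
kubectl logs -n istio-system -l app=istio-ingressgateway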

Install the monitoring add-ons with kubectl apply -f samples/addons, then enable sidecar injection on your namespace:

kubectl label namespace dev istio-injection=enabled, then restart the pods in that namespace so they pick up the sidecar.
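Put together, assuming your application namespace is called dev (substitute your own), the commands look roughly like this:

kubectl apply -f samples/addons
kubectl label namespace dev istio-injection=enabled
kubectl rollout restart deployment -n dev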

Before creating the AWS load balancer, the next step is to set up the AWS ALB ingress controller with the relevant IAM permissions. https://aws.amazon.com/premiumsupport/knowledge-center/eks-alb-ingress-controller-setup/ walks through the setup. If you already have an ALB ingress controller, you can skip this step.

Next is the YAML file for creating the AWS ALB, which forwards all incoming traffic to the Istio ingress gateway we exposed earlier via NodePort.

Copy it and make the appropriate changes for your environment. Fix the YAML indentation if it gets mangled when copying, and note that this resource is created in the Istio namespace.

Please use the repo git@github.com:sreejith421/istio.git.

Link: https://github.com/sreejith421/istio/blob/main/alb-in-istio-namespace.yaml
kubectl apply -f alb-in-istio-namespace.yaml
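As a rough idea of what that file contains (a hedged sketch only; the real manifest is in the repo linked above, and details such as the scheme, certificate, and health-check annotations depend on your setup), an ALB Ingress pointing at the NodePort istio-ingressgateway service in istio-system looks roughly like this:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: istio-alb
  namespace: istio-system
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: instance
spec:
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: istio-ingressgateway
                port:
                  number: 80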

After some time, you should see that a new ALB has been created; in Route 53, update your domain to point to the newly created ALB.

The next step is to set up the gateway and virtual service, as well as mutual TLS. Please note that if you have many namespaces, such as dev, stag, and qa, you will need to set up a Gateway, a VirtualService, and a mutual TLS policy for each namespace.

To create the Gateway, the VirtualService, and the mutual TLS policy, simply copy each one into a separate YAML file. Adjust the YAML indentation if needed in the files below.

The request flow is: the AWS load balancer receives the request and forwards it to the istio-ingressgateway in the Istio namespace, which matches it against the namespace’s Gateway and then routes it via the VirtualService to your service. If you’re doing canary deployments, you’ll also need a DestinationRule, which isn’t covered here.

Gateway config :

Link: https://github.com/sreejith421/istio/blob/main/gateway-dev.yaml
kubectl apply -f gateway-dev.yaml
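The repo file is the source of truth; as a hedged sketch of the usual pattern, a Gateway for the dev namespace typically looks like this (the host dev.example.com is a placeholder for your own domain):

apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: dev-gateway
  namespace: dev
spec:
  selector:
    istio: ingressgateway   # binds to the istio-ingressgateway pods in istio-system
  servers:
    - port:
        number: 80
        name: http
        protocol: HTTP
      hosts:
        - "dev.example.com"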

If you’re exposing more than one API, you’ll need to add all of them and point each one to the correct service. The linked example uses three APIs. Fix the YAML indentation if needed.

VirtualService:

Link: https://github.com/sreejith421/istio/blob/main/virtual-service-dev.yaml
kubectl apply -f virtual-service-dev.yaml
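As a hedged sketch of the pattern (the host, URI prefixes, and service names below are placeholders; the actual routes are in the linked file), a VirtualService routing three APIs by URI prefix looks roughly like this:

apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: dev-virtualservice
  namespace: dev
spec:
  hosts:
    - "dev.example.com"
  gateways:
    - dev-gateway
  http:
    - match:
        - uri:
            prefix: /api/orders
      route:
        - destination:
            host: orders-svc
            port:
              number: 8080
    - match:
        - uri:
            prefix: /api/users
      route:
        - destination:
            host: users-svc
            port:
              number: 8080
    - match:
        - uri:
            prefix: /api/payments
      route:
        - destination:
            host: payments-svc
            port:
              number: 8080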

Finally, strict mTLS: namespace-scoped strict mTLS is not enabled by default for pods. Apply it to every namespace that has Istio injection enabled, such as dev, stag, qa, and prod. Do not apply this policy to the istio-system namespace. Fix the YAML indentation if needed.

Link: https://github.com/sreejith421/istio/blob/main/mtls-dev.yaml
kubectl apply -f mtls-dev.yaml
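For reference, a namespace-wide strict mTLS policy is a PeerAuthentication resource like the hedged sketch below (the real file is linked above), repeated once per namespace:

apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: dev
spec:
  mtls:
    mode: STRICT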

“Mintmesh’s flagship product, RUDY, is an artificial intelligence based digital platform that can read, extract, and examine statistics and data from technical bid files.
Powered by engineering language processing, it allows execution and control of the technical bid tabulation process as a single system of record. It standardizes processes, reduces project over-runs, reduces compliance risk, and improves cycle time from material requisition to bid evaluation.”

Sounds exciting, right? Click here to know more.

Want to join the revolution in the EPC sector? Click here to learn about current open positions.
