Ingress Controller with Kong: an Alternative to the NGINX Ingress
Setting Up Kong as an Ingress Controller
What Is Kong?
Kong is an API gateway platform for managing API gateways and microservice meshes. It is offered both as open source (Community Edition) and as an Enterprise product. With Kong we can manage communication between clients and microservices through APIs. Beyond the API gateway and service mesh use cases, Kong can also serve as an Ingress Controller in a Kubernetes cluster, which is what we'll discuss in this article.
What Is Ingress?
From the Kubernetes documentation: “Ingress exposes HTTP and HTTPS routes from outside the cluster to services within the cluster. Traffic routing is controlled by rules defined on the Ingress resource.”
Ingress with Kong
Why use Kong as an ingress controller?
- Popular cloud-native API gateway
- Open sourced in 2015; Apache-2.0 licensed
- Built on top of NGINX, like the official NGINX Kubernetes ingress controller
- Kong 2.0 (latest release at the time of writing: 2.1.3)
- Enhanced API management through plugins
- Health checking and load balancing
- Authentication
- And more
How?
Kong Ingress Controller configures Kong using Ingress resources created inside a Kubernetes cluster.
Kong Ingress Controller is made up of two components:
- Kong, the core proxy that handles all the traffic
- Controller, a process that syncs the configuration from Kubernetes to Kong
Kong Ingress Controller performs more than just proxying the traffic coming into a Kubernetes cluster. It is possible to configure plugins, load balancing, health checking and leverage all that Kong offers in a standalone installation.
Kong is designed around an extensible plugin architecture and comes with a wide variety of plugins already bundled inside it. These plugins can be used to modify the request/response or impose restrictions on the traffic. You can attach a KongPlugin to an Ingress, a Service, or a KongConsumer.
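As a sketch of the kind of configuration this enables, a KongIngress resource can tune load balancing and active health checking for an upstream. The resource name, health endpoint, and threshold values below are illustrative assumptions, not taken from this demo:

```yaml
apiVersion: configuration.konghq.com/v1
kind: KongIngress
metadata:
  name: demo-upstream          # hypothetical name
upstream:
  algorithm: round-robin       # load-balancing algorithm
  healthchecks:
    active:
      http_path: /healthz      # assumed health endpoint on the backend
      healthy:
        interval: 5            # probe every 5 seconds
        successes: 3           # mark target healthy after 3 successes
      unhealthy:
        interval: 5
        http_failures: 2       # mark target unhealthy after 2 HTTP failures
```

A KongIngress like this is attached to a Service via the konghq.com/override annotation.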
Demo
For the demo we'll deploy an echo-server app on Kubernetes (the manifest is applied in the steps below) and install Kong as the Ingress Controller using Helm 3. Make sure Helm 3 is installed on your bastion/host.
Installation
Create Namespace
kubectl create namespace kong
Install Kong using Helm Chart
helm repo add kong https://charts.konghq.com
helm repo update
helm install kong/kong --namespace kong --generate-name --set ingressController.installCRDs=false
Export PROXY_IP as an environment variable
# Get the Kong proxy service name for the next command
kubectl get svc -n kong
export PROXY_IP=$(kubectl get -o jsonpath="{.status.loadBalancer.ingress[0].ip}" service -n kong <service-name>)
curl -i $PROXY_IP
Deploy Sample App
Deploy echo server
kubectl create namespace echo-server
kubectl apply -n echo-server -f https://bit.ly/echo-service
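The short link hides the manifest's contents, but based on how the rest of the demo uses it, it presumably boils down to something like the sketch below: a Deployment plus a Service named echo exposing port 80. The image, labels, and container port here are assumptions, not the actual manifest:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: echo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: echo                           # assumed label
  template:
    metadata:
      labels:
        app: echo
    spec:
      containers:
      - name: echo
        image: k8s.gcr.io/echoserver:1.10 # assumed image
        ports:
        - containerPort: 8080             # assumed container port
---
apiVersion: v1
kind: Service
metadata:
  name: echo        # the Ingress rules below reference this name
spec:
  selector:
    app: echo
  ports:
  - port: 80        # the servicePort used in the Ingress
    targetPort: 8080
```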
Create Basic Proxy
echo "
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: demo
  annotations:
    kubernetes.io/ingress.class: kong
spec:
  rules:
  - http:
      paths:
      - path: /foo
        backend:
          serviceName: echo
          servicePort: 80
" | kubectl apply -n echo-server -f -
Test Proxy
curl -i $PROXY_IP/foo
Kong Plugin
echo "
apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: request-id
config:
  header_name: my-request-id
plugin: correlation-id
" | kubectl apply -n echo-server -f -
Create Ingress Rule with Plugin
echo "
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: demo-example-com
  annotations:
    konghq.com/plugins: request-id
    kubernetes.io/ingress.class: kong
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /bar
        backend:
          serviceName: echo
          servicePort: 80
" | kubectl apply -n echo-server -f -
Test
curl -i -H "Host: example.com" $PROXY_IP/bar/sample
Rate Limit
echo "
apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: rl-by-ip
config:
  minute: 5
  limit_by: ip
  policy: local
plugin: rate-limiting
" | kubectl apply -n echo-server -f -

kubectl patch svc -n echo-server echo \
  -p '{"metadata":{"annotations":{"konghq.com/plugins": "rl-by-ip\n"}}}'
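Instead of patching, the same annotation can live in the Service manifest itself. A minimal sketch, assuming the echo Service looks roughly like this (the selector label and container port are assumptions):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: echo
  namespace: echo-server
  annotations:
    konghq.com/plugins: rl-by-ip  # attach the rate-limiting KongPlugin
spec:
  selector:
    app: echo          # assumed label from the echo Deployment
  ports:
  - port: 80
    targetPort: 8080   # assumed container port
```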
Test
curl -I $PROXY_IP/foo
curl -I -H "Host: example.com" $PROXY_IP/bar/sample
What’s Next?
In the next article we'll cover using Kong as an API gateway solution for API-centric communication between end users (such as a frontend or mobile app) and a microservice mesh. Stay tuned…
By Sulaiman, Ops Team Btech