Deploying Kong Ingress Controller and APIs in Kubernetes (Minikube & AWS)

Arun S
12 min read · May 31, 2018



1. Overview

This is a technical article focusing on deploying and running the Kong API gateway, together with a few sample microservice APIs (REST) packaged as Docker containers, in a Kubernetes cluster, and on demonstrating the production-grade features of the Kubernetes platform: optimised use of resources and automated deployment, scaling, and management of containerised applications.

2. Kubernetes setup

Kubernetes is an open-source container orchestration and management platform that can be deployed on any public or private cloud or on bare-metal servers, and it has very good community support and tooling around it.

2.1 On a Local Dev Machine

Kubernetes can be easily installed on a developer machine using minikube; please follow the installation instructions provided there.

After minikube has been installed successfully, we can use a few basic commands to bring up a Kubernetes cluster locally, check its status, and open the Kubernetes dashboard UI.
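A minimal set of minikube commands for this looks like the following (a sketch; exact flags and output depend on your minikube version):

$ minikube start       # bring up a single-node Kubernetes cluster in a local VM
$ minikube status      # check the state of the cluster components
$ minikube dashboard   # open the Kubernetes dashboard UI in a browser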

Another command-line tool used very frequently is kubectl, which interacts with the Kubernetes cluster to fetch information about resources or to manipulate them. In this case it will already have been installed as a prerequisite. Some example interactions are shown below.

$ ./kubectl version
$ ./kubectl get deployments
$ ./kubectl get services
$ ./kubectl get pods

2.2 On AWS with KOPS and Terraform

Installing a Kubernetes cluster on AWS is a little more complex, and there are many tools available to tackle this. We will be using kops (short for Kubernetes Operations), which requires the AWS CLI as a prerequisite, pre-configured with AWS credentials (API key and secret key) and the AWS zones where the Kubernetes cluster is intended to be deployed.

https://dzone.com/articles/how-to-create-a-kubernetes-cluster-on-aws-in-few-m

The above link explains this step by step; in a nutshell, these are the key steps.

  1. install the AWS CLI and run ‘aws configure’
  2. install the kops command-line tool
  3. create an AWS S3 bucket where the cluster state is stored by kops
  4. create a Kubernetes cluster with the preferred configuration; for this exercise a sample cluster with 1 master (eu-west-1a), 2 worker nodes (eu-west-1a, eu-west-1b) and an elastic load balancer is created using kops (a command sketch follows this list)
  5. deploy the Kubernetes dashboard into the cluster
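A sketch of the kops commands behind steps 3 and 4 (the bucket name is a placeholder; check the kops documentation for the exact flags in your version):

# step 3: S3 bucket for the kops state store
$ aws s3 mb s3://my-kops-state-store
$ export KOPS_STATE_STORE=s3://my-kops-state-store

# step 4: define and create the cluster (1 master, 2 t2.micro workers)
$ kops create cluster \
  --name=k8slab.k8s.local \
  --zones=eu-west-1a,eu-west-1b \
  --master-size=m3.medium \
  --node-size=t2.micro \
  --node-count=2
$ kops update cluster k8slab.k8s.local --yes

Once the cluster comes up, kops validate cluster confirms that the master and worker nodes are ready: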
[ec2-user@ip-xxxxx ~]$ kops validate cluster
Validating cluster k8slab.k8s.local
INSTANCE GROUPS
NAME ROLE MACHINETYPE MIN MAX SUBNETS
master-eu-west-1a Master m3.medium 1 1 eu-west-1a
nodes Node t2.micro 2 2 eu-west-1a,eu-west-1b
NODE STATUS
NAME ROLE READY
ip-xxxxxxx.eu-west-1.compute.internal node True
ip-xxxxxxxxxxx.eu-west-1.compute.internal master True
ip-xxxxxxxxxxxx.eu-west-1.compute.internal node True
Your cluster k8slab.k8s.local is ready

kops also supports Terraform, generating Terraform configuration instead of directly creating/updating/deleting clusters in AWS or any other cloud. It is simple to output a Terraform configuration with kops, as shown below.

$ kops create cluster \
  --name=kubernetes.mydomain.com \
  --state=s3://mycompany.kubernetes \
  --dns-zone=kubernetes.mydomain.com \
  [... your other options ...]
  --out=. \
  --target=terraform

$ terraform plan
$ terraform apply

3. Deployments in Kubernetes

Deployments to Kubernetes are discussed in detail here. We will use the kubectl command-line tool and deployment manifest files (with the .yaml extension) to create deployments targeting the Kubernetes clusters created in one of the ways described in the previous section.

The assumption is that the sample microservice APIs are already packaged as Docker containers, available from public Docker repositories, and referenced in those manifest files; the same applies to the Kong deployment as well.
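For reference, those manifests follow the usual Deployment-plus-Service pattern. A minimal illustrative sketch (the image name, labels, and ports here are placeholders, not the actual manifests referenced in this article; API versions vary with your cluster version):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: sample-api
spec:
  replicas: 2
  selector:
    matchLabels:
      app: sample-api
  template:
    metadata:
      labels:
        app: sample-api
    spec:
      containers:
      - name: sample-api
        image: example/sample-api:latest   # placeholder image from a public Docker repository
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: sample-api-svc
spec:
  selector:
    app: sample-api
  ports:
  - port: 80
    targetPort: 8080

Running kubectl create -f on such a file creates both the Deployment and the Service in the cluster.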

3.1 Kong

Kong can be deployed into a Kubernetes cluster in two ways; one way is described here, but it involves manually integrating the services running on Kubernetes with Kong.
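With that first approach, every new Kubernetes Service has to be registered against Kong's admin API by hand, roughly along the lines below (a sketch using Kong's /services and /routes admin endpoints; the admin host, service name, cluster-internal DNS name, and host header are placeholders for illustration):

# register the Kubernetes service as a Kong service
$ curl -s -X POST http://<kong-admin-host>:8001/services \
    --data "name=my-api-svc" \
    --data "url=http://my-api-svc.default.svc.cluster.local:80"

# add a route so requests with a matching Host header are proxied to it
$ curl -s -X POST http://<kong-admin-host>:8001/services/my-api-svc/routes \
    --data "hosts[]=my-api.example.com"

The ingress controller described next does this kind of wiring automatically whenever an Ingress resource is created.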

We will look at the second way in detail: deploying the Kubernetes-friendly Kong ingress controller recently introduced by the people behind Kong. It makes Kong configure itself automatically to serve traffic as new applications are deployed and Services are created for them in Kubernetes, removing the manual integration steps for new Kubernetes Services that come with the first way of deploying Kong on Kubernetes.

Installing the Kong ingress controller is very simple and is done by running a single command, as below.

$ curl https://raw.githubusercontent.com/Kong/kubernetes-ingress-controller/master/deploy/single/all-in-one-postgres.yaml \
| kubectl create -f -

It takes about five minutes to install all the necessary components specified in the manifest file and bring up Kong. We can list all the resources associated with the Kong deployment with the following command.

$  ./kubectl get all -n kong

This lists the deployments, pods and services created in the kong namespace.

In the case of a minikube cluster, we can obtain the Kong admin API IP address and port number with the following commands.

$ export KONG_ADMIN_PORT=$(./minikube service -n kong kong-ingress-controller --url --format "{{ .Port }}")
$ export KONG_ADMIN_IP=$(./minikube service -n kong kong-ingress-controller --url --format "{{ .IP }}")
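With those variables set, a quick sanity check against the admin API is possible (a sketch; output omitted here):

$ curl -s $KONG_ADMIN_IP:$KONG_ADMIN_PORT/ | json_pp        # node information and configuration
$ curl -s $KONG_ADMIN_IP:$KONG_ADMIN_PORT/status | json_pp  # connection and datastore status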

Kong Admin API running in Kubernetes

3.2 APIs

We will deploy a couple of dummy APIs and control the traffic to them through the Kong API gateway already running in the Kubernetes cluster.

3.2.1 Echo Service API

There is an echo service API provided by the Kong team that we can use to inspect the request metadata that hits the API via Kong once it is deployed in our Kubernetes cluster.

The same deployment method used for Kong will be used to deploy this API as well. Let’s run the following command.

$  curl https://raw.githubusercontent.com/Kong/kubernetes-ingress-controller/master/deploy/manifests/dummy-application.yaml \
| kubectl create -f -

If we run the ‘get pods’ and ‘get services’ commands with kubectl, we will find the pods running the echo service containers in Kubernetes.

$ kubectl get services
$ kubectl get pods

The output shows 5 pods running the echo service and one service load-balancing across those pods.

Yes, the http-svc service was scaled up to run 5 pods with this command.

kubectl scale deployment http-svc --replicas=5

This API echoes the request details back in its response, as shown below.

Hostname: http-svc-7dd9588c5-gmbvh

Pod Information:
node name: minikube
pod name: http-svc-7dd9588c5-gmbvh
pod namespace: default
pod IP: 172.17.0.7
Server values:
server_version=nginx: 1.13.3 - lua: 10008
Request Information:
client_address=127.0.0.1
method=GET
real path=/
query=
request_version=1.1
request_uri=http://localhost:8080/
Request Headers:
accept=*/*
host=localhost:8080
user-agent=curl/7.47.0
Request Body:
-no body in request-

3.2.2 Anagram Service API

Similar to the echo service API, this API is also deployed by running a single command (shown a little further below).

More details about what this API does are available in this public GitHub repo — https://github.com/arunk16/anagram-moj

(P.S. This is one of my experimental APIs, created to play with the Heroku cloud application platform.)

Here is an interaction with the API running on the Heroku platform.

$ curl -s https://anagram-moj.herokuapp.com/pictures | json_pp
{
   "pictures" : [
      "crepitus",
      "cuprites",
      "pictures",
      "piecrust"
   ]
}

To deploy Anagram service API into Kubernetes, just run:

$ curl https://raw.githubusercontent.com/arunk16/anagram-moj/master/k8s-application.yaml | kubectl create -f -

Re-running kubectl get pods and kubectl get services will show the newly deployed service and pod for the anagram service.

We just need to create a route in the Kong ingress controller to expose this service to the outside world, which is explained in the next section.

4. Kong Ingress resource creation for APIs

We need to create an Ingress resource for each API so that Kong routes requests to the right API service when they reach the gateway.

4.1 Ingress resource for Echo Service API

It’s again a simple command using the kubectl command-line tool.

$ echo "
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: foo-bar
spec:
  rules:
  - host: foo.bar
    http:
      paths:
      - path: /
        backend:
          serviceName: http-svc
          servicePort: 80
" | kubectl create -f -

We can access the echo service by finding the Kong proxy IP address and port and sending the request header ‘Host: foo.bar’, which matches the host rule defined in the Ingress above.

$  export PROXY_IP=$(./minikube   service -n kong kong-proxy --url --format "{{ .IP }}" | head -1)
$ export HTTP_PORT=$(./minikube service -n kong kong-proxy --url --format "{{ .Port }}" | head -1)
$ curl -vvvv $PROXY_IP:$HTTP_PORT -H "Host: foo.bar"
* STATE: INIT => CONNECT handle 0x465f160; line 1404 (connection #-5000)
* Rebuilt URL to: 192.168.99.100:32130/
* Added connection 0. The cache now contains 1 members
* Trying 192.168.99.100...
* TCP_NODELAY set
* STATE: CONNECT => WAITCONNECT handle 0x465f160; line 1456 (connection #0)
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
* Connected to 192.168.99.100 (192.168.99.100) port 32130 (#0)
* STATE: WAITCONNECT => SENDPROTOCONNECT handle 0x465f160; line 1573 (connection #0)
* Marked for [keep alive]: HTTP default
* STATE: SENDPROTOCONNECT => DO handle 0x465f160; line 1591 (connection #0)
> GET / HTTP/1.1
> Host: foo.bar
> User-Agent: curl/7.59.0
> Accept: */*
>
* STATE: DO => DO_DONE handle 0x465f160; line 1670 (connection #0)
* STATE: DO_DONE => WAITPERFORM handle 0x465f160; line 1795 (connection #0)
* STATE: WAITPERFORM => PERFORM handle 0x465f160; line 1811 (connection #0)
* HTTP 1.1 or later with persistent connection, pipelining supported
< HTTP/1.1 200 OK
< Content-Type: text/plain; charset=UTF-8
< Transfer-Encoding: chunked
< Connection: keep-alive
< Date: Thu, 17 May 2018 14:55:37 GMT
* Server echoserver is not blacklisted
< Server: echoserver
< X-Kong-Upstream-Latency: 3
< X-Kong-Proxy-Latency: 4
< Via: kong/0.13.1
<
{ [627 bytes data]
* STATE: PERFORM => DONE handle 0x465f160; line 1980 (connection #0)
* multi_done
100 615 0 615 0 0 41000 0 --:--:-- --:--:-- --:--:-- 41000
Hostname: http-svc-7dd9588c5-tstsr

Pod Information:
node name: minikube
pod name: http-svc-7dd9588c5-tstsr
pod namespace: default
pod IP: 172.17.0.8
Server values:
server_version=nginx: 1.13.3 - lua: 10008
Request Information:
client_address=172.17.0.6
method=GET
real path=/
query=
request_version=1.1
request_uri=http://172.17.0.8:8080/
Request Headers:
accept=*/*
connection=keep-alive
host=172.17.0.8:8080
user-agent=curl/7.59.0
x-forwarded-for=172.17.0.1
x-forwarded-host=foo.bar
x-forwarded-port=8000
x-forwarded-proto=http
x-real-ip=172.17.0.1
Request Body:
-no body in request-
* Connection #0 to host 192.168.99.100 left intact
* Expire cleared

4.2 Ingress resource for Anagram Service API

Let’s create an Ingress resource for the anagram API in a similar way to the echo service.

$ echo "
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: anagram.api
spec:
  rules:
  - host: anagram.api
    http:
      paths:
      - path: /
        backend:
          serviceName: anagram-svc
          servicePort: 80
" | kubectl create -f -

Interacting with the anagram service API running in our Kubernetes cluster:

$ curl -s 192.168.99.100:32130/castle,car,boat -H "Host: anagram.api" | json_pp
{
   "castle" : [
      "castle",
      "cleats",
      "lacets",
      "sclate"
   ],
   "boat" : [
      "boat",
      "bota"
   ],
   "car" : [
      "arc",
      "car"
   ]
}

$ curl -s 192.168.99.100:32130/pictures -H "Host: anagram.api" | json_pp
{
   "pictures" : [
      "crepitus",
      "cuprites",
      "pictures",
      "piecrust"
   ]
}

4.3 Access APIs via Kong in AWS Kubernetes

Accessing APIs via the Kong API gateway is really easy on a local minikube cluster, but on an AWS Kubernetes cluster we need to adjust some firewall rules on the ELB (Elastic Load Balancer) and the security groups that kops set up during installation.

We can access the Kubernetes API from this load balancer URL — https://api-k8slab-k8s-local-xxxxxxxxxxxxxx.eu-west-1.elb.amazonaws.com. We need credentials to access it; here is how to obtain the password for the admin user.

[ec2-user@ip-xxxxx ~]$ kubectl cluster-info
Kubernetes master is running at https://api-k8slab-k8s-local-xxxxxxxxxxxx.eu-west-1.elb.amazonaws.com
KubeDNS is running at https://api-k8slab-k8s-local-xxxxxxxxxxx.eu-west-1.elb.amazonaws.com/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

# to get admin password
[ec2-user@ip-xxxx ~]$ kops get secrets admin --type secret -oplaintext
YK8K3amTT.....

There are two more steps before the APIs can be accessed via the Kong API gateway.

1. Open up the Kong admin and proxy API ports so that they are accessible from the Kubernetes API load balancer, similar to the image below. Alternatively, we could expose the Kong admin and proxy APIs through dedicated ELB load balancers with the kubectl expose command (see the sketch after these steps), so that the URLs need no port numbers, but that is quite expensive given we don’t need any other application load balancers.

2. Allow inbound connections from the Kubernetes API security group to the node ports where the Kong admin and proxy APIs are exposed, in the master node security group, as shown below.
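For completeness, the kubectl expose alternative mentioned in step 1 would look roughly like this (a sketch; it provisions a dedicated AWS ELB, which is why it is the more expensive option, and it assumes Kong's default proxy port 8000):

# expose the Kong proxy behind its own ELB on port 80
$ kubectl expose service kong-proxy -n kong --name=kong-proxy-elb \
    --type=LoadBalancer --port=80 --target-port=8000

The same could be done for the admin service (kong-ingress-controller), though the admin API should not be exposed publicly without locking it down first.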

Now we can access the Kong admin and proxy APIs from the URLs below.

Kong Admin API — http://api-k8slab-k8s-local-xxxxxxxxxxxxxx.eu-west-1.elb.amazonaws.com

Kong Proxy API — http://api-k8slab-k8s-local-xxxxxxxxxxxxxxxx.eu-west-1.elb.amazonaws.com:8000

Let’s access the echo service and anagram service APIs via the Kong proxy.

# Anagram api response
[ec2-user@ip-172-31-46-44 ~]$ curl -s -H "Host: anagram.api" http://api-k8slab-k8s-local-xxxxxxxxxxxx.eu-west-1.elb.amazonaws.com:8000/elsa | jq
{
  "elsa": [
    "ales",
    "lase",
    "leas",
    "sale",
    "seal"
  ]
}
# Echo service api response
[ec2-user@ip-172-31-46-44 ~]$ curl -s -H "Host: foo.bar" http://api-k8slab-k8s-local-xxxxxxxxxxxxxxxxxx.eu-west-1.elb.amazonaws.com:8000
Hostname: http-svc-7dd9588c5-nmxzm

Pod Information:
node name: ip-172-31-70-53.eu-west-1.compute.internal
pod name: http-svc-7dd9588c5-nmxzm
pod namespace: default
pod IP: 100.96.1.4
Server values:
server_version=nginx: 1.13.3 - lua: 10008
Request Information:
client_address=100.96.2.5
method=GET
real path=/
query=
request_version=1.1
request_uri=http://100.96.1.4:8080/
Request Headers:
accept=*/*
connection=keep-alive
host=100.96.1.4:8080
user-agent=curl/7.53.1
x-forwarded-for=172.31.63.75
x-forwarded-host=foo.bar
x-forwarded-port=8000
x-forwarded-proto=http
x-real-ip=172.31.63.75
Request Body:
-no body in request-

5. Kong Plugins

Kong plugins can be added with the following command, and the Kubernetes services served via Kong can then be patched to take advantage of them. This is where Kong is more powerful than most other ingress controllers available for Kubernetes.

$ echo "
apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: add-ratelimiting-to-route
config:
  hour: 100
  limit_by: ip
  second: 10
" | kubectl create -f -
kongplugin.configuration.konghq.com "add-ratelimiting-to-route" created
$ kubectl get kongplugins
NAME AGE
add-ratelimiting-to-route 3m
# patch a service with Kong rate limiting
$ kubectl patch svc http-svc \
> -p '{"metadata":{"annotations":{"rate-limiting.plugin.konghq.com": "add-ratelimiting-to-route\n"}}}'
service "http-svc" patched
# Analyse the response headers of echo service now
$ curl -vvv -H "Host: foo.bar" http://api-k8slab-k8s-local-xxxxxxxxxxxxxxxxx.eu-west-1.elb.amazonaws.com:8000
* Rebuilt URL to: http://api-k8slab-k8s-local-xxxxxxxxxxxxx.eu-west-1.elb.amazonaws.com:8000/
* Trying 54.76.52.3...
* TCP_NODELAY set
* Connected to api-k8slab-k8s-local-xxxxxxxxxxxx.eu-west-1.elb.amazonaws.com (54.76.52.3) port 8000 (#0)
> GET / HTTP/1.1
> Host: foo.bar
> User-Agent: curl/7.53.1
> Accept: */*
>
< HTTP/1.1 200 OK
< Content-Type: text/plain; charset=UTF-8
< Transfer-Encoding: chunked
< Connection: keep-alive
< X-RateLimit-Limit-hour: 100
< X-RateLimit-Remaining-hour: 96
< X-RateLimit-Limit-second: 10
< X-RateLimit-Remaining-second: 9

< Date: Thu, 31 May 2018 12:54:37 GMT
< Server: echoserver
< X-Kong-Upstream-Latency: 1
< X-Kong-Proxy-Latency: 3
< Via: kong/0.13.1
<
Hostname: http-svc-7dd9588c5-xfbvv

Pod Information:
node name: ip-172-31-70-53.eu-west-1.compute.internal
pod name: http-svc-7dd9588c5-xfbvv
pod namespace: default
pod IP: 100.96.1.5
Server values:
server_version=nginx: 1.13.3 - lua: 10008
Request Information:
client_address=100.96.2.5
method=GET
real path=/
query=
request_version=1.1
request_uri=http://100.96.1.5:8080/
Request Headers:
accept=*/*
connection=keep-alive
host=100.96.1.5:8080
user-agent=curl/7.53.1
x-forwarded-for=172.31.63.75
x-forwarded-host=foo.bar
x-forwarded-port=8000
x-forwarded-proto=http
x-real-ip=172.31.63.75
Request Body:
-no body in request-

6. Conclusion

This article is the result of a POC I tried myself; I couldn’t find much help online when I ran into a few issues, so I am sharing it here. It may not be suitable for your production needs as-is; for example, the Kong proxy and admin APIs are accessed over plain HTTP rather than HTTPS. Please feel free to comment or ask any questions you may have. Thanks for reading this far ;-)
