Configuring Ingress Controller on IBM Cloud Private
Learn how to configure access logging for your Node.js Express applications
Overview
Skill Level: Beginner
Some basic understanding of Kubernetes and deploying resources
In this recipe, I demonstrate how to enable access logging in IBM Cloud Private using the built-in Ingress. You will learn how to deploy an Ingress that routes requests to a Node.js application built on Express.
Ingredients
To get started, you should have an elementary understanding of Kubernetes and have installed IBM Cloud Private, Docker, and Node.js into your development environment. In this tutorial, I will demonstrate some basic concepts around creating a simple Node.js Express application and pairing the resource with an Ingress Controller. I will also show how to deploy these YAML files as Kubernetes Resources.
IBM Cloud Private Installation Guide: https://www.ibm.com/support/knowledgecenter/en/SSBS6K_2.1.0/installing/install_containers_CE.html
Docker Install: https://docs.docker.com/install/
Node.js Install: https://nodejs.org/en/download/
Step-by-step
To demonstrate the power of Ingress controllers, I created a basic Express application written in Node.js that is easy to write and explain. The application defines two Express routes that return a message to the user with a user-provided payload appended to the end. These routes respond based upon a match of either `foo` or `helloworld`.
server.js
const express = require('express')
const app = express()
app.get('/helloworld/:id', function (request, response) {
response.send('Hello World! ' + request.params.id);
});
app.get('/foo/:id', function (request, response) {
response.send('Testing foo: ' + request.params.id);
});
app.listen(9080, function () {
console.log('Example app listening on port 9080!')
});
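Before packaging the application, it can help to sanity-check the route logic locally. The sketch below is my own illustration (not part of the original server.js): it pulls the two response builders out into plain functions so they can be exercised without starting the server.

```javascript
// Illustrative sketch: the same response strings server.js produces,
// extracted into plain functions for quick verification without a server.
function helloworld(id) {
  return 'Hello World! ' + id;
}

function foo(id) {
  return 'Testing foo: ' + id;
}

console.log(helloworld('todd')); // Hello World! todd
console.log(foo('bar'));         // Testing foo: bar
```

In the real application, these same strings are produced inside the Express handlers, with `id` populated from the `:id` route parameter.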
As this application has a dependency on Express, I will add that dependency to the application's package.json. This file will be used later when packaging my Docker image and deploying to IBM Cloud Private.
package.json
{
"name": "nodejs",
"version": "1.0.0",
"main": "server.js",
"dependencies": {
"express": "^4.15.3"
},
"devDependencies": {},
"scripts": {
"test": "echo \"Error: no test specified\" && exit 1",
"start": "node server.js"
},
"author": "",
"license": "ISC",
"description": ""
}
Pushing the application to Docker Hub
Now that we have created the script and done some local testing, we are ready to package the application and push the image to Docker Hub (or whichever registry you choose to leverage). To publish the image, we need to define the Dockerfile used to build and publish the artifact.
The Dockerfile I am using is a standard Dockerfile that packages the dependencies and the main entry point into the application, named server.js.
Dockerfile
FROM node:latest
# Create app directory
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
# Install app dependencies
COPY package.json /usr/src/app/
RUN npm install
# Bundle app source
COPY ./server.js /usr/src/app
EXPOSE 9080
CMD [ "npm", "start" ]
To package the container and publish the image, I wrote a simple script that refers to this Dockerfile.
deployAppDocker.sh
#!/bin/sh
set -e
export DOCKER_ID_USER="todkap"
docker login
docker build --no-cache=true -t todkap/node-ingress:v2 .
docker push todkap/node-ingress:v2
Once you set the appropriate execute permissions on this script, run deployAppDocker.sh from the same directory as your package.json, server.js, and Dockerfile. The built image will be published as todkap/node-ingress:v2 and will be used later in the article when we deploy to IBM Cloud Private. Once this script completes, we are ready to deploy the container to IBM Cloud Private, but first we need to learn how to create a Kubernetes resource.
Deploying application to IBM Cloud Private
We are now at the point where we can deploy the resource to IBM Cloud Private. Since we want to deploy this as a Kubernetes resource, we first need to define a deployment YAML file that creates both the Deployment artifact and the Service.
deployment.yaml
apiVersion: v1
kind: Service
metadata:
name: node-app
labels:
app: node-app
version: v2
spec:
ports:
- port: 9080
name: http
selector:
app: node-app
version: v2
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: node-app
spec:
replicas: 1
template:
metadata:
labels:
app: node-app
version: v2
spec:
containers:
- image: todkap/node-ingress:v2
imagePullPolicy: IfNotPresent
name: node-app
ports:
- containerPort: 9080
---
Now that we have defined the Service and Deployment for this application, we can run the command `kubectl apply -f deployment.yaml` to deploy it to IBM Cloud Private. Verify the pods are running with the command `kubectl get pods`, looking for the prefix node-app. Once the status is Running, we are ready for the next step: configuring the Ingress.
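The Service finds the Deployment's pods through label selection. As a rough illustration of this mechanism (my own simplified sketch, not the actual Kubernetes implementation), selection is a subset match of the Service's selector labels against each pod's labels:

```javascript
// Illustrative sketch: a Service selector matches any pod whose labels
// contain all of the selector's key/value pairs.
const selector = { app: 'node-app', version: 'v2' };

const pods = [
  { name: 'node-app-1', labels: { app: 'node-app', version: 'v2' } },
  { name: 'other-app-1', labels: { app: 'other-app', version: 'v1' } }
];

function matches(labels, sel) {
  return Object.keys(sel).every(key => labels[key] === sel[key]);
}

const endpoints = pods.filter(p => matches(p.labels, selector)).map(p => p.name);
console.log(endpoints); // [ 'node-app-1' ]
```

This is why the `app: node-app` and `version: v2` labels appear in both the Service's selector and the Deployment's pod template in deployment.yaml.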
Defining ingress rules
We are now ready to define our Ingress. Ingress is a great way to provide a single point of entry for your Kubernetes resources. Ingress controllers such as NGINX provide SSL termination, URL rewriting, and a host of other features found in many proxies on the market today. In this section, we will define a set of rules in our ingress.yaml that route our two Express routes from the Ingress to our application.
ingress.yaml
##################################################################################################
# Ingress Routing
##################################################################################################
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: test-ingress
spec:
rules:
- http:
paths:
- path: /foo
backend:
serviceName: node-app
servicePort: 9080
- path: /helloworld
backend:
serviceName: node-app
servicePort: 9080
---
Now that we have defined the Ingress resource, we can run the command `kubectl apply -f ingress.yaml`, which configures the Ingress to route requests to our application.
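To build intuition for how these rules behave, here is a small illustrative sketch of prefix-based path routing (my own simplification; the real NGINX controller generates proxy configuration from these rules rather than running code like this): an incoming request path is matched against the configured path prefixes and forwarded to the named service and port.

```javascript
// Illustrative sketch of prefix-based Ingress routing, mirroring the
// two rules in ingress.yaml. Unmatched paths fall through to a default.
const rules = [
  { path: '/foo', service: 'node-app', port: 9080 },
  { path: '/helloworld', service: 'node-app', port: 9080 }
];

function route(requestPath) {
  const match = rules.find(rule => requestPath.startsWith(rule.path));
  return match ? match.service + ':' + match.port : 'default-backend';
}

console.log(route('/foo/bar'));         // node-app:9080
console.log(route('/helloworld/todd')); // node-app:9080
console.log(route('/unknown'));         // default-backend
```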
To verify this is working, locate the proxy node of your deployment and find its IP address. In the example below, I set PROXY to the proxy node of my IBM Cloud Private deployment. Once you have that value, you can curl the various REST APIs as follows…
todkapmcbookpro:ingress todd$ export PROXY=https://9.42.95.159
todkapmcbookpro:ingress todd$ curl $PROXY/foo/bar; echo
Testing foo: bar
todkapmcbookpro:ingress todd$ curl $PROXY/helloworld/todd; echo
Hello World! todd
todkapmcbookpro:ingress todd$
Enabling access logging
The final step to tie all of the pieces together is to enable access logging. By default, IBM Cloud Private has disabled access logging but provides a simple way to enable it by editing one of the IBM Cloud Private ConfigMap resources.
To enable access logging, edit the ConfigMap resource nginx-load-balancer-conf. To edit this resource, go to the command line and use the following command.
kubectl edit configmap --save-config nginx-load-balancer-conf --namespace=kube-system
In this resource, change the disable-access-log parameter to false.
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: v1
data:
body-size: "0"
disable-access-log: "false"
kind: ConfigMap
metadata:
annotations:
kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"v1","data":{"body-size":"0","disable-access-log":"false"},"kind":"ConfigMap","metadata":{"annotations":{},"creationTimestamp":"2018-01-30T03:56:25Z","name":"nginx-load-balancer-conf","namespace":"kube-system","resourceVersion":"1629","selfLink":"/api/v1/namespaces/kube-system/configmaps/nginx-load-balancer-conf","uid":"8b257329-0571-11e8-bd1f-005056a0b243"}}
creationTimestamp: 2018-01-30T03:56:25Z
name: nginx-load-balancer-conf
namespace: kube-system
resourceVersion: "1068990"
selfLink: /api/v1/namespaces/kube-system/configmaps/nginx-load-balancer-conf
uid: 8b257329-0571-11e8-bd1f-005056a0b243
Once this is changed, save the file for the change to take effect.
Now that this change has been made, let’s repeat the same curl commands we ran previously.
todkapmcbookpro:ingress todd$ export PROXY=https://9.42.95.159
todkapmcbookpro:ingress todd$ curl $PROXY/foo/bar; echo
Testing foo: bar
todkapmcbookpro:ingress todd$ curl $PROXY/helloworld/todd; echo
Hello World! todd
todkapmcbookpro:ingress todd$
And now we can view the logs for the pod. In IBM Cloud Private, the pod name for the Ingress starts with nginx-ingress-lb. You can list the pods with `kubectl get pods --namespace=kube-system` to find it.
Once I located my pod, I ran the command `kubectl logs nginx-ingress-lb-amd64-2frlr --namespace=kube-system` and, at the bottom of the output, I see access log entries interspersed with the various pod events.
9.27.101.237 - [9.27.101.237] - - [05/Feb/2018:23:07:11 +0000] "GET /foo/bar HTTP/1.1" 200 16 "-" "curl/7.54.0" 82 0.016 [default-node-app-9080] 10.1.85.199:9080 16 0.016 200
9.27.101.237 - [9.27.101.237] - - [05/Feb/2018:23:07:17 +0000] "GET /helloworld/todd HTTP/1.1" 200 16 "-" "curl/7.54.0" 90 0.006 [default-node-app-9080] 10.1.85.199:9080 16 0.006 200
9.27.101.237 - [9.27.101.237] - - [05/Feb/2018:23:08:22 +0000] "GET /foo/bar HTTP/1.1" 200 16 "-" "curl/7.54.0" 82 0.002 [default-node-app-9080] 10.1.85.199:9080 16 0.002 200
9.27.101.237 - [9.27.101.237] - - [05/Feb/2018:23:10:16 +0000] "GET /foo/bar HTTP/1.1" 200 16 "-" "curl/7.54.0" 82 0.002 [default-node-app-9080] 10.1.85.199:9080 16 0.002 200
9.27.101.237 - [9.27.101.237] - - [05/Feb/2018:23:10:24 +0000] "GET /helloworld/todd HTTP/1.1" 200 16 "-" "curl/7.54.0" 90 0.002 [default-node-app-9080] 10.1.85.199:9080 16 0.002 200
2018/02/05 23:11:19 [error] 5698#5698: *4370 access forbidden by rule, client: 9.42.95.159, server: _, request: "GET /nginx_status HTTP/1.1", host: "9.42.95.159:80"
9.42.95.159 - [9.42.95.159] - - [05/Feb/2018:23:11:19 +0000] "GET /server-status?auto HTTP/1.1" 404 21 "-" "Python-urllib/2.7" 135 0.001 [upstream-default-backend] 10.1.115.2:8080 21 0.001 404
W0205 23:16:26.889488 6 controller.go:972] error obtaining service endpoints: service kube-system/sas-api-svc does not exist
W0205 23:16:29.988070 6 controller.go:972] error obtaining service endpoints: service kube-system/sas-api-svc does not exist
2018/02/05 23:21:21 [error] 5697#5697: *4374 access forbidden by rule, client: 9.42.95.159, server: _, request: "GET /nginx_status HTTP/1.1", host: "9.42.95.159:80"
9.42.95.159 - [9.42.95.159] - - [05/Feb/2018:23:21:21 +0000] "GET /server-status?auto HTTP/1.1" 404 21 "-" "Python-urllib/2.7" 135 0.000 [upstream-default-backend] 10.1.115.2:8080 21 0.000 404
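If you want to post-process these access logs, for example to extract the method, request path, and status code for monitoring, a few fields can be pulled out with a regular expression. The sketch below is my own illustration written against the log lines shown above, not an official parser for the NGINX log format:

```javascript
// Illustrative parser for the access log lines shown above.
// Captures the HTTP method, request path, and status code.
const line = '9.27.101.237 - [9.27.101.237] - - [05/Feb/2018:23:07:11 +0000] ' +
  '"GET /foo/bar HTTP/1.1" 200 16 "-" "curl/7.54.0" 82 0.016 ' +
  '[default-node-app-9080] 10.1.85.199:9080 16 0.016 200';

const pattern = /"(\w+) (\S+) HTTP\/[\d.]+" (\d{3})/;
const [, method, path, status] = line.match(pattern);

console.log(method, path, status); // GET /foo/bar 200
```

A script like this could feed the extracted fields into whatever logging and monitoring pipeline you already use.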
We have now successfully demonstrated how to deploy our application to leverage the IBM Cloud Private Ingress and also to enable access logging.
Conclusion
Successful deployments of large-scale applications require transparency and insight into applications and their environments. By enabling access logging, we can quickly track the health of our deployments' HTTP endpoints and easily integrate access logging with our overall logging and monitoring strategy. In this recipe, we showed how to take a Node.js application, configure the Ingress to route to it, and ultimately begin tracking the HTTP traffic the application handles. This simple but powerful scenario can easily be scaled up to the many enterprise deployments we are seeing on IBM Cloud Private.
Bonus content
Shortly after this article was published, several users asked about the role of the rewrite-target annotation that appears in many NGINX Ingress examples. Here is a quick example of its purpose and how it works.
ingress.yaml
##################################################################################################
# Ingress Routing
##################################################################################################
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: test-ingress
annotations:
ingress.kubernetes.io/rewrite-target: /
spec:
rules:
- http:
paths:
- path: /banking
backend:
serviceName: node-app
servicePort: 9080
---
In the example above, I defined a rewrite target of `/` and a route path of `/banking`. This is very useful when you want specific URL patterns to route to existing applications without modifying the original application deployment and its REST APIs.
I would then still include the `/banking` prefix in my request, while the remainder of the path elements stay the same as demonstrated above.
todkapmcbookpro:ingress todd$ curl https://9.42.95.159/banking/foo/todd
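To make the rewrite behavior concrete, the sketch below (my own illustration of the rule's effect, not the controller's actual implementation) shows what the upstream application receives when rewrite-target is `/`: the matched `/banking` prefix is replaced by the target before the request is proxied, so the application still sees its original `/foo/:id` route.

```javascript
// Illustrative sketch of rewrite-target: / combined with path: /banking.
// The matched path prefix is replaced by the rewrite target before
// the request reaches the backend service.
function rewrite(requestPath, matchedPath, target) {
  if (!requestPath.startsWith(matchedPath)) return requestPath;
  // Replace the matched prefix with the target, collapsing any '//'.
  return (target + requestPath.slice(matchedPath.length)).replace(/\/\/+/g, '/');
}

console.log(rewrite('/banking/foo/todd', '/banking', '/')); // /foo/todd
```

So the curl request to `/banking/foo/todd` above arrives at the Node.js application as `/foo/todd`, matching the `/foo/:id` Express route without any change to the application.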