Implementing WAF and mTLS on Kubernetes with Nginx ModSecurity

Ohad Senior
Published in CloudZone
Dec 1, 2020

Hey there,

In a recent project that included deploying microservices into AKS, our client had a number of specific requirements:

  1. Use Azure Kubernetes Service (AKS) as the platform for the application microservices.
  2. Integrate mTLS capabilities to authenticate clients approaching our client’s APIs.
  3. Use the client certificate to validate the client’s origin (in our case, a hospital).
  4. Protect the public-facing microservices with a web application firewall (WAF).

Challenge

Azure does provide WAF services, like Application Gateway and Front Door, but neither of them has mTLS capabilities.

Solution

To meet our client’s requirements, we had to search for a third-party solution that provides it all. We settled on the tried-and-tested Kubernetes Ingress-NGINX Controller for the following reasons: as well as being an Ingress controller with all the advantages of the NGINX engine, it supports mTLS (client requirement 2); and with its open-source ModSecurity WAF add-on it covers the WAF requirement (client requirement 4), giving us OWASP Core Rule Set (CRS) support plus the ability to add our own rules.

For the validation requirement (client requirement 3), we configured NGINX to forward the client certificate fingerprint to the backend app.
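
One way to do this with Ingress-NGINX is a configuration-snippet annotation on the Ingress that passes NGINX’s built-in $ssl_client_fingerprint variable upstream, which is what produces the x-client-fingerprint header you will see in the echo responses later in this post. A minimal sketch, assuming that annotation-based approach:

nginx.ingress.kubernetes.io/configuration-snippet: |
  # Forward the SHA1 fingerprint of the verified client certificate to the backend
  proxy_set_header X-Client-Fingerprint $ssl_client_fingerprint;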

Solution Demo:

If you want to follow this demo, you will need the following:

  1. Azure account (https://azure.microsoft.com/en-us/free/).
  2. Your own domain.
  3. DNS to manage your DNS records (I manage mine with Azure’s DNS service).

Let’s start by deploying AKS (client requirement 1) with Pulumi, a rising star in the IaC world that lets you write infrastructure in familiar programming languages such as Python, TypeScript, Go, and more.

  • Install Pulumi (in my case, on macOS)
$ brew install pulumi
  • Install Python 3.6 or above and verify pip is installed
  • Next, we will create a new Pulumi project
$ mkdir quickstart && cd quickstart 
$ pulumi new azure-python
  • The command launches a short interactive configuration dialog for the new Pulumi project:
Enter a value or leave blank to accept the (default), and press <ENTER>.
Press ^C at any time to quit.
project name: (pulum) aks-nginx-waf
project description: (A minimal Azure Python Pulumi program)
Created project ‘aks-nginx-waf’
Please enter your desired stack name.
To create a stack in an organization, use the format <org-name>/<stack-name> (e.g. `acmecorp/dev`).
stack name: (dev)
Created stack ‘dev’
azure:environment: The Azure environment to use (`public`, `usgovernment`, `german`, `china`): (public)
azure:location: The Azure location to use: (WestUS) westeurope
Saved config
  • Now we have three main files:
  1. Pulumi.yaml defines the project.
  2. Pulumi.dev.yaml contains the configuration values for the stack we initialized.
  3. __main__.py is the Pulumi program that defines our stack resources (a minimal sketch follows).
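For reference, a __main__.py that provisions a basic AKS cluster might look roughly like the sketch below. It assumes the classic pulumi-azure provider; the resource names, node count, and VM size are illustrative rather than taken from the original project.

import pulumi
import pulumi_azure as azure

# Resource group for the cluster (name and location are illustrative)
rg = azure.core.ResourceGroup("aks-rg", location="westeurope")

# A minimal AKS cluster with a single default node pool
aks = azure.containerservice.KubernetesCluster(
    "aks-nginx-waf",
    resource_group_name=rg.name,
    location=rg.location,
    dns_prefix="aksnginxwaf",
    default_node_pool={
        "name": "default",
        "node_count": 2,
        "vm_size": "Standard_D2_v2",
    },
    identity={"type": "SystemAssigned"},
)

# Export the values we need for `az aks get-credentials` later on
pulumi.export("resource_group_name", rg.name)
pulumi.export("cluster_name", aks.name)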
  • Next, we will deploy AKS
$ pulumi up
  • The next step is to connect to our AKS
$ az aks get-credentials -g <your resource group name> -n <AKS cluster name> --admin
  • Now we create two namespaces: one for the application and one for the Ingress-NGINX controller
$ kubectl create ns cldze
$ kubectl create ns ingress
  • Next, we have to create self-signed certificates for client verification, i.e. mTLS (client requirements 2 and 3). (See this blog for more information about how to do this)
$ openssl req -x509 -sha256 -newkey rsa:4096 -keyout ca.key -out ca.crt -days 365 -nodes -subj '/CN=cldze'
$ openssl req -new -newkey rsa:4096 -keyout server.key -out server.csr -nodes -subj '/CN=cldze.info'
$ openssl x509 -req -sha256 -days 365 -in server.csr -CA ca.crt -CAkey ca.key -set_serial 01 -out server.crt
$ openssl req -new -newkey rsa:4096 -keyout client.key -out client.csr -nodes -subj '/CN=client'
$ openssl x509 -req -sha256 -days 365 -in client.csr -CA ca.crt -CAkey ca.key -set_serial 02 -out client.crt
  • For the server authentication, I created a Let’s Encrypt certificate with Certbot. (See this blog for more information about how to do this)
$ sudo certbot certonly \
  --manual \
  --preferred-challenges=dns \
  --email user@cldze.info \
  --server https://acme-v02.api.letsencrypt.org/directory \
  --agree-tos \
  -d "*.cldze.info"  # Replace cldze.info with your own domain

This command produces two PEM files: privkey.pem and fullchain.pem.

  • Now, after completing all the certificate creation steps, we can deploy the secrets into the AKS app namespace, ready for the Ingress deployment.
# Server authentication (using the Let's Encrypt certificate)
$ sudo kubectl create secret tls ingress --key privkey.pem --cert fullchain.pem -n cldze
# Client authentication (using the self-signed CA)
$ kubectl create secret generic ca-secret --from-file=ca.crt=ca.crt -n cldze
  • Now for the Ingress-NGINX deployment. I use Helm 3 to deploy Ingress-NGINX into the cluster with the following values file (the ModSecurity keys go under controller.config, which becomes the controller's ConfigMap); the install commands are sketched after the note below.
# To enable ModSecurity and the OWASP Core Rule Set
controller:
  config:
    enable-modsecurity: "true"
    enable-owasp-modsecurity-crs: "true"

(See here for an explanation of the different ModSecurity configuration options.)
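
The deployment itself presumably uses the upstream ingress-nginx chart. A minimal sketch, assuming the values above are saved as values.yaml and that the Helm release is simply named ingress (both names are my own choice):

$ helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
$ helm repo update
$ helm install ingress ingress-nginx/ingress-nginx -n ingress -f values.yaml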

  • Now that we have the Ingress-NGINX Controller up and running, let's deploy our app into AKS. The app simply echoes back the properties of the HTTP requests it receives. We deploy the Deployment, Service, and Ingress in the same YAML stack; each of the ModSecurity and configuration annotation snippets demonstrates one of the capabilities our use case needs. A sketch of such a manifest is shown below.
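
The original manifest isn't embedded in this post, so the following is a hedged reconstruction: it assumes the mendhak/http-https-echo image (which matches the echo responses shown later), the default nginx ingress class, and the secret names created above. Adjust names, hosts, and images to your environment.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: http-https-echo-deployment
  namespace: cldze
spec:
  replicas: 1
  selector:
    matchLabels:
      app: echo
  template:
    metadata:
      labels:
        app: echo
    spec:
      containers:
        - name: echo
          image: mendhak/http-https-echo
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: echo
  namespace: cldze
spec:
  selector:
    app: echo
  ports:
    - port: 80
      targetPort: 8080
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: echo
  namespace: cldze
  annotations:
    kubernetes.io/ingress.class: "nginx"
    # mTLS: verify client certificates against the CA stored in ca-secret
    nginx.ingress.kubernetes.io/auth-tls-verify-client: "on"
    nginx.ingress.kubernetes.io/auth-tls-secret: "cldze/ca-secret"
    nginx.ingress.kubernetes.io/auth-tls-verify-depth: "1"
    # Forward the client certificate fingerprint to the backend (as shown earlier)
    nginx.ingress.kubernetes.io/configuration-snippet: |
      proxy_set_header X-Client-Fingerprint $ssl_client_fingerprint;
    # ModSecurity is enabled globally via the Helm values; run it in prevention
    # mode here, and add the custom SecRules discussed later in this post
    nginx.ingress.kubernetes.io/modsecurity-snippet: |
      SecRuleEngine On
spec:
  tls:
    - hosts:
        - echo.cldze.info
      secretName: ingress
  rules:
    - host: echo.cldze.info
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: echo
                port:
                  number: 80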

Now we can check whether our Ingress configuration meets the client's requirements.

Client requirement 1: We are using AKS as our application platform.

Client requirements 2 and 3: We are enforcing mTLS and forwarding the client certificate fingerprint to the backend microservice for an additional validation step.

Let's try to curl the website without a client certificate and key; we get an error message: "400 Bad Request".

$ curl --http2 https://echo.cldze.info
<html>
<head><title>400 No required SSL certificate was sent</title></head>
<body>
<center><h1>400 Bad Request</h1></center>
<center>No required SSL certificate was sent</center>
<hr><center>nginx</center>
</body>
</html>

And if we retry with the client certificate and key, we get a "200".

$ curl --http2 https://echo.cldze.info --cert client.crt --key client.key
{
  "path": "/",
  "headers": {
    "host": "echo.cldze.info",
    "ssl-client-verify": "SUCCESS",
    "ssl-client-subject-dn": "CN=client",
    "ssl-client-issuer-dn": "CN=cldze",
    "x-request-id": "82513020001a3ebd584c7b5a0c2767df",
    "x-real-ip": "34.242.209.15",
    "x-forwarded-for": "34.242.209.15",
    "x-forwarded-host": "echo.cldze.info",
    "x-forwarded-port": "443",
    "x-forwarded-proto": "https",
    "x-scheme": "https",
    "x-client-fingerprint": "2e36447d771d246111d52337b2fbcd5a4a3568aa",
    "user-agent": "curl/7.61.1",
    "accept": "*/*"
  },
  "method": "GET",
  "body": "",
  "fresh": false,
  "hostname": "echo.cldze.info",
  "ip": "34.242.209.15",
  "ips": [
    "34.242.209.15"
  ],
  "protocol": "https",
  "query": {},
  "subdomains": [
    "echo"
  ],
  "xhr": false,
  "os": {
    "hostname": "http-https-echo-deployment-5c969b8dcf-nvmmt"
  },
  "connection": {}
}

We can also see that the client certificate fingerprint has been forwarded, and that it matches the certificate's SHA1 fingerprint:

"x-client-fingerprint": "2e36447d771d246111d52337b2fbcd5a4a3568aa"

$ openssl x509 -noout -fingerprint -in client.crt
SHA1 Fingerprint=2E:36:44:7D:77:1D:24:61:11:D5:23:37:B2:FB:CD:5A:4A:35:68:AA

Now, let’s turn to client requirement 4: ModSecurity as WAF.

We have created two custom rules, and SecRuleEngine is set to 'On', which means ModSecurity runs in prevention (blocking) mode rather than detection-only mode.

Let's check the IP rule:

SecRule REMOTE_ADDR "@ipMatch 34.242.209.15" "log,deny,id:161,status:403,msg:'Non IP address'"

I get "403 Forbidden" when curling from IP 34.242.209.15:

$ curl --http2 https://echo.cldze.info --cert client.crt --key client.key
<html>
<head><title>403 Forbidden</title></head>
<body>
<center><h1>403 Forbidden</h1></center>
<hr><center>nginx</center>
</body>
</html>
$ curl checkip.amazonaws.com
34.242.209.15

Let's check the logs to see the error:

$ kubectl logs po/<ingress pod name> -n ingress

We get the ModSecurity log:

2020/11/01 15:03:35 [error] 15446#15446: *980749 [client 34.242.209.15] ModSecurity: Access denied with code 403 (phase 1). Matched "Operator `IpMatch' with parameter `34.242.209.15' against variable `REMOTE_ADDR' (Value: `34.242.209.15') [file "<<reference missing or not informed>>"] [line "4"] [id "161"] [rev ""] [msg "Non IP address"] [data ""] [severity "0"] [ver ""] [maturity "0"] [accuracy "0"] [hostname "10.0.0.210"] [uri "/"] [unique_id "160424301560.131541"] [ref "v0,13"], client: 34.242.209.15, server: echo.cldze.info, request: "GET / HTTP/2.0", host: "echo.cldze.info"

Now, let's check the request-header rule: with user-agent: admin I get blocked; changing to user-agent: user, I pass.
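
The post doesn't show this second rule, but it was presumably something along the lines of the sketch below, which blocks any request whose user-agent header equals admin (the rule ID 162 is my own placeholder):

SecRule REQUEST_HEADERS:user-agent "@streq admin" "log,deny,id:162,status:403,msg:'Blocked user-agent'"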

$ curl --http2 https://echo.cldze.info -H "user-agent: admin" --cert client.crt --key client.key
<html>
<head><title>403 Forbidden</title></head>
<body>
<center><h1>403 Forbidden</h1></center>
<hr><center>nginx</center>
</body>
</html>
$ curl --http2 https://echo.cldze.info -H "user-agent: user" --cert client.crt --key client.key
{
  "path": "/",
  "headers": {
    "host": "echo.cldze.info",
    "ssl-client-verify": "SUCCESS",
    "ssl-client-subject-dn": "CN=client",
    "ssl-client-issuer-dn": "CN=cldze",
    "x-request-id": "37cb4f2120e416ca317ffe674d53a0dd",
    "x-real-ip": "34.242.209.15",
    "x-forwarded-for": "34.242.209.15",
    "x-forwarded-host": "echo.cldze.info",
    "x-forwarded-port": "443",
    "x-forwarded-proto": "https",
    "x-scheme": "https",
    "x-client-fingerprint": "2e36447d771d246111d52337b2fbcd5a4a3568aa",
    "accept": "*/*",
    "user-agent": "user"

Conclusion:

Using ModSecurity and Ingress-NGINX, we managed to meet all of our client's requirements that couldn't be satisfied by the native Azure services alone. Ingress-NGINX and ModSecurity are two powerful tools that give us the agility and freedom to control ingress traffic and protect our environment, with the extra benefit of minimizing vendor lock-in.
