Nginx ingress and Cloud Armor

Ahmed Khalil Jerbi
3 min read · Dec 13, 2022


Over the last few days our web application was hit by a dictionary attack, a type of brute-force attack in which the attacker runs through a pre-built list of likely passwords until one matches and grants access to confidential information. In a short amount of time the attacker managed to break into several accounts, so it was clear that our web application was not secure enough.

Blocking IP addresses was our first response, but it turned out to be ineffective: the bot changed its IP address on every attempt. So how do we stop this attack?

Multiple fixes were discussed in our team meetings, such as locking user accounts after three failed attempts (which would slow down or even stop the attack) and adding the reCAPTCHA API. But these fixes would have to be implemented in our application and tested, which would take several days.

Since our web application is deployed on GKE (Google Kubernetes Engine) and we needed to protect it against this kind of attack quickly, we dug into Google Cloud Platform services, and Google Cloud Armor seemed to be the most appropriate one for this situation.

Google Cloud Armor is an enterprise-grade DDoS defense service and web application firewall. It also provides predefined rules to help defend against attacks such as cross-site scripting (XSS) and SQL injection (SQLi).
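To give an idea of what such a policy looks like, here is a minimal gcloud sketch that creates a Cloud Armor security policy and adds the predefined SQLi and XSS rules; the policy name and rule priorities are placeholders for illustration, not values from our setup:

# Create an empty Cloud Armor security policy (name is a placeholder)
gcloud compute security-policies create my-armor-policy \
    --description "WAF policy for our web application"

# Deny requests matching the preconfigured SQL injection signatures
gcloud compute security-policies rules create 1000 \
    --security-policy my-armor-policy \
    --expression "evaluatePreconfiguredExpr('sqli-stable')" \
    --action deny-403

# Deny requests matching the preconfigured cross-site scripting signatures
gcloud compute security-policies rules create 1001 \
    --security-policy my-armor-policy \
    --expression "evaluatePreconfiguredExpr('xss-stable')" \
    --action deny-403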

Cloud Armor itself is easy to configure, but at first we could not use its predefined rules for our web application. The website was exposed via an Ingress, which seemed fine, except that it was an NGINX ingress controller, which sits behind a Google Cloud TCP (L4) load balancer. Google Cloud Armor can only be attached to a Google Cloud HTTP(S) load balancer (L7). So we had to replace our NGINX-ingress-based exposure with an HTTP load balancer.

Fortunately, the NGINX ingress annotations we relied on, such as custom request headers, custom response headers and path rewrites, can also be configured on the HTTP load balancer.

Our GKE pods will be the backends of this load balancer, and to wire that up we have to use network endpoint groups (NEGs).

To create the network endpoint group, we add the cloud.google.com/neg annotation to the Kubernetes Service, specifying the NEG name and the port to expose:

apiVersion: v1
kind: Service
metadata:
  name: my-service
  annotations:
    cloud.google.com/neg: '{"exposed_ports": {"80": {"name": "my-neg"}}}'
spec:
  selector:
    k8s-app: my-app
  ports:
  - name: http-port
    port: 80
    targetPort: 8080
    protocol: TCP
  • After adding the annotation to the Service YAML, the network endpoint group is created automatically, as the quick check below confirms.
Network endpoint group (NEG) generated automatically after adding the annotation to the Service
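If you want to confirm the NEG really exists before building the load balancer, one way is to list it with gcloud; the NEG name my-neg comes from the manifest above and the zone is a placeholder:

# List network endpoint groups in the project; my-neg should appear in every zone that runs matching pods
gcloud compute network-endpoint-groups list

# Inspect the endpoints (pod IP:port pairs) attached to the NEG in a given zone
gcloud compute network-endpoint-groups list-network-endpoints my-neg \
    --zone europe-west1-b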
  • After that we configure the HTTP load balancer with the network endpoint group as its backend: for GKE backends, we have to pick Zonal network endpoint group as the backend type.
Google Cloud HTTP load balancer backend type

Finally, we pick the network endpoint group we created earlier as the backend.

Google Cloud HTTP load balancer backend service
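For reference, the console steps above correspond roughly to the following gcloud commands; the health check, backend service and zone names are placeholders, and the balancing mode and rate limit would need to be tuned to the real workload:

# Health check hitting the serving port of the pods behind the NEG
gcloud compute health-checks create http my-health-check --use-serving-port

# Global backend service for the HTTP(S) load balancer
gcloud compute backend-services create my-backend-service \
    --protocol HTTP \
    --health-checks my-health-check \
    --global

# Attach the zonal NEG created by GKE as the backend (zone is a placeholder)
gcloud compute backend-services add-backend my-backend-service \
    --network-endpoint-group my-neg \
    --network-endpoint-group-zone europe-west1-b \
    --balancing-mode RATE \
    --max-rate-per-endpoint 100 \
    --global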

Creating the HTTP load balancer with zonal network endpoint groups as its backend service finally makes it possible to attach Google Cloud Armor, with its predefined rules, to our application.
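Attaching the policy is then a single update on the load balancer's backend service; the names below are the placeholders used in the earlier sketches:

# Attach the Cloud Armor security policy to the backend service
gcloud compute backend-services update my-backend-service \
    --security-policy my-armor-policy \
    --global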
