Network Policy to Secure Workloads on Kubernetes Cluster

Secure your single/multi-tenant cluster using network policies

Md Shamim
Geek Culture
Oct 7, 2022



In a default Kubernetes cluster configuration, any service in any namespace is reachable from anywhere in the cluster. As a result, pods are open to all traffic by default.

We can define network policies based on namespaces or pods to secure the workloads of the cluster. In a multi-tenant cluster, for example, network policies let us separate workloads that belong to different projects, teams, or organizations.

Scenario

Suppose we have to build a 3-tier architecture using Kubernetes namespaces. Our application will be deployed across three layers: the frontend-tier, the backend-tier, and the database-tier.

● Frontend-tier will be public-facing. The application within the frontend-tier will be exposed using a LoadBalancer service, so the frontend-tier will be reached via the DNS name or IP address of an external load balancer.

● Backend-tier will contain all of the application logic.

● Database-tier will contain workloads related to the database.

As we know, by default every namespace can send traffic to and receive traffic from every other namespace. So, without any network policies applied, our 3-tier architecture will look similar to the following image:

Let’s configure our 3-tier architecture as demonstrated above. We will create three new namespaces and then deploy Deployments and Services within them.

For the sake of simplicity, we will use the nginx image for the pods under each deployment.

1. Configure new namespaces

Create new namespaces and add labels to each namespace, so that network policies can be applied to the desired namespaces.
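A minimal sketch of what this could look like; the `tier` label key and its values here are illustrative choices that the network policies later in this article will select on:

```yaml
# Three namespaces, each labelled so that network policies can
# target them with a namespaceSelector.
apiVersion: v1
kind: Namespace
metadata:
  name: frontend-tier
  labels:
    tier: frontend
---
apiVersion: v1
kind: Namespace
metadata:
  name: backend-tier
  labels:
    tier: backend
---
apiVersion: v1
kind: Namespace
metadata:
  name: database-tier
  labels:
    tier: database
```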

2. Deploy deployments and services

2.1 Database-tier
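A minimal sketch for the database-tier, assuming an nginx Deployment named `database` exposed by a ClusterIP Service named `database-svc` (both names are illustrative):

```yaml
# nginx stand-in for the database workload.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: database
  namespace: database-tier
spec:
  replicas: 1
  selector:
    matchLabels:
      app: database
  template:
    metadata:
      labels:
        app: database
    spec:
      containers:
        - name: nginx
          image: nginx
          ports:
            - containerPort: 80
---
# ClusterIP Service so other tiers can reach the database pods by name.
apiVersion: v1
kind: Service
metadata:
  name: database-svc
  namespace: database-tier
spec:
  selector:
    app: database
  ports:
    - port: 80
      targetPort: 80
```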

2.2 Backend-tier
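A similar sketch for the backend-tier, again with illustrative names (`backend`, `backend-svc`):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend
  namespace: backend-tier
spec:
  replicas: 1
  selector:
    matchLabels:
      app: backend
  template:
    metadata:
      labels:
        app: backend
    spec:
      containers:
        - name: nginx
          image: nginx
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: backend-svc
  namespace: backend-tier
spec:
  selector:
    app: backend
  ports:
    - port: 80
      targetPort: 80
```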

2.3 Frontend-tier
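A sketch for the frontend-tier, with an illustrative `frontend` Deployment exposed through a Service of type LoadBalancer named `frontend-svc`:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
  namespace: frontend-tier
spec:
  replicas: 1
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
        - name: nginx
          image: nginx
          ports:
            - containerPort: 80
---
# LoadBalancer Service: the cloud provider assigns an EXTERNAL-IP.
apiVersion: v1
kind: Service
metadata:
  name: frontend-svc
  namespace: frontend-tier
spec:
  type: LoadBalancer
  selector:
    app: frontend
  ports:
    - port: 80
      targetPort: 80
```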

We have exposed the frontend-tier using a load balancer service. Using the EXTERNAL-IP of the load balancer service, users will be able to access the application residing in the frontend-tier.

3. Security Risk

3.1 Issues

Currently, no network policies are applied to any pods or namespaces, so all pods within the cluster can communicate with each other. But our Kubernetes cluster might be a multi-tenant cluster or might run other sensitive workloads. As a result, an attacker who gains access to the frontend-tier can directly reach the database-tier or any other namespace within the cluster, which is a huge security risk.

3.2 Remedy

To overcome this security risk, we can use network policies. With the appropriate network policies, we can isolate the frontend-tier, backend-tier, and database-tier from the other namespaces within the cluster. In addition, we can restrict the ingress traffic of the database-tier to the backend-tier only, and the ingress traffic of the backend-tier to the frontend-tier only. As a result, even if an attacker gains access to the frontend-tier, they cannot directly reach the database-tier or any other namespace. See the following image for a better understanding:

4. Apply Network Policies

4.1 database-tier

● Default deny all ingress and all egress traffic
We can create a “default” policy for a namespace that prevents all ingress and egress traffic by creating the following Network Policy in that namespace.
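A minimal sketch of such a default-deny policy for the database-tier (the policy name is arbitrary):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: database-tier
spec:
  podSelector: {}        # selects every pod in the namespace
  policyTypes:           # deny both directions by declaring them with no rules
    - Ingress
    - Egress
```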

After applying the above policy, the database-tier is now isolated from other namespaces within the cluster.

● Allow ingress from backend-tier
Now, we will apply another policy to the database-tier so that it only allows ingress traffic from the backend-tier.
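A sketch of this policy, assuming the backend-tier namespace carries the illustrative `tier: backend` label from step 1:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-ingress-from-backend
  namespace: database-tier
spec:
  podSelector: {}                # applies to all pods in database-tier
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:     # any pod in a namespace labelled tier=backend
            matchLabels:
              tier: backend
```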

4.2 backend-tier

● Default deny all ingress and all egress traffic
Similar to the database-tier, we will apply a network policy to isolate the backend-tier from the other namespaces.
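The same default-deny sketch, this time created in the backend-tier namespace:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: backend-tier
spec:
  podSelector: {}
  policyTypes:
    - Ingress
    - Egress
```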

● Allow egress traffic to database-tier
In the database-tier, we have allowed ingress traffic from the backend-tier only. But in the previous step, we denied all traffic to/from the backend-tier. Because of that, although the database-tier allows ingress traffic from the backend-tier, the backend-tier is unable to send egress traffic. To establish communication between the backend-tier and the database-tier, we have to allow egress traffic from the backend-tier to the database-tier.

In addition, since we are using services to access pods, we need another egress rule that lets the backend-tier resolve the DNS names of those services. In a Kubernetes cluster, the DNS server runs as a set of pods in the kube-system namespace, so we have to allow egress traffic to the kube-system namespace. But rather than allowing egress to the whole kube-system namespace, we will allow only the pods carrying the “kube-dns” label. With that, pods within the backend-tier can resolve the DNS names of the services.

Use the following network policy to allow egress traffic from the backend-tier to the database-tier, as well as egress on port 53 to the DNS pods in the kube-system namespace:
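A sketch of such a policy. It assumes the illustrative `tier: database` namespace label from step 1, the standard `k8s-app: kube-dns` label on the cluster DNS pods, and the `kubernetes.io/metadata.name` label that recent Kubernetes versions add to every namespace automatically:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-egress-to-database-and-dns
  namespace: backend-tier
spec:
  podSelector: {}
  policyTypes:
    - Egress
  egress:
    # Egress to any pod in the database-tier namespace.
    - to:
        - namespaceSelector:
            matchLabels:
              tier: database
    # Egress to the DNS pods in kube-system on port 53 for name resolution.
    - to:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: kube-system
          podSelector:
            matchLabels:
              k8s-app: kube-dns
      ports:
        - protocol: UDP
          port: 53
        - protocol: TCP
          port: 53
```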

After completing the above step, we are at the following stage; see the image:

As we can see from the above image, the backend-tier can send egress traffic to the database-tier. But currently, the backend-tier does not allow any ingress traffic.

● Allow ingress traffic from frontend-tier
We can apply a network policy to the backend-tier so that it can only allow ingress traffic from the frontend-tier.
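A sketch of this policy, assuming the frontend-tier namespace carries the illustrative `tier: frontend` label:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-ingress-from-frontend
  namespace: backend-tier
spec:
  podSelector: {}
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              tier: frontend
```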

With that, we are in a position where the backend-tier allows ingress traffic from the frontend-tier and egress traffic to the database-tier.

4.3 frontend-tier

● Default deny all ingress and all egress traffic
Similar to the previous two tiers, we will apply a network policy that denies all ingress and egress traffic.
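The same default-deny sketch, applied to the frontend-tier namespace:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: frontend-tier
spec:
  podSelector: {}
  policyTypes:
    - Ingress
    - Egress
```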

● Allow egress traffic to backend-tier
Since we have denied all ingress and egress traffic to/from the frontend-tier, to send traffic to the backend-tier we have to allow egress traffic to the backend-tier, and also to the kube-system namespace so that the DNS names of the services can be resolved, as discussed earlier.
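A sketch of this egress policy, mirroring the one used for the backend-tier (same label assumptions as before):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-egress-to-backend-and-dns
  namespace: frontend-tier
spec:
  podSelector: {}
  policyTypes:
    - Egress
  egress:
    # Egress to any pod in the backend-tier namespace.
    - to:
        - namespaceSelector:
            matchLabels:
              tier: backend
    # Egress to the DNS pods in kube-system for service-name resolution.
    - to:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: kube-system
          podSelector:
            matchLabels:
              k8s-app: kube-dns
      ports:
        - protocol: UDP
          port: 53
        - protocol: TCP
          port: 53
```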

After completing the above steps, we are at the following stage; see the image:

● Allow ingress traffic from the Internet
The application in the frontend-tier is exposed using the LoadBalancer service, so requests will arrive via the external load balancer. Since we are currently denying all ingress traffic, we cannot reach the application residing in the frontend-tier through the external IP of the load balancer. To solve this, we apply a network policy that allows ingress traffic from anywhere except the private IP ranges used by the pods. By doing that, we still block traffic from pods residing in the other namespaces.
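A sketch of such a policy using an ipBlock rule. The excluded CIDRs below are the common private ranges and are only an assumption; substitute the pod and node CIDRs actually used by your cluster:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-ingress-from-external
  namespace: frontend-tier
spec:
  podSelector: {}
  policyTypes:
    - Ingress
  ingress:
    - from:
        - ipBlock:
            cidr: 0.0.0.0/0          # allow traffic from anywhere...
            except:                  # ...except private in-cluster ranges (assumed)
              - 10.0.0.0/8
              - 172.16.0.0/12
              - 192.168.0.0/16
```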

With that, we have applied all the network policies needed to secure and isolate the Kubernetes workloads. See the following image for a better understanding of what we have done so far:

5. Test and Verification

Now let's test and verify whether the network policies we have applied so far are working properly.

● First of all, let's try to access our application using the external IP of the load balancer service.
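For example (the EXTERNAL-IP below is just a placeholder; use the address reported by your cluster):

```sh
kubectl get svc frontend-svc -n frontend-tier   # read the EXTERNAL-IP column
curl http://203.0.113.10                        # placeholder external IP
```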

In the above image, we can see that the application residing in the frontend-tier can be accessed via the external IP of the load balancer service.

● Let’s dive into a pod residing in the frontend-tier and try to access services residing in the backend-tier and database-tier:
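A sketch of this check, using the illustrative deployment and service names from step 2 and assuming curl is available inside the pod image; the frontend-tier should reach the backend-tier but not the database-tier:

```sh
kubectl exec -n frontend-tier deploy/frontend -- \
  curl -s --max-time 5 http://backend-svc.backend-tier     # allowed: should succeed
kubectl exec -n frontend-tier deploy/frontend -- \
  curl -s --max-time 5 http://database-svc.database-tier   # blocked: should time out
```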

● Similarly, we can also dive into pods residing in the backend-tier to observe whether the network policies are functioning or not.
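A similar sketch for the backend-tier, which should reach the database-tier but not the frontend-tier:

```sh
kubectl exec -n backend-tier deploy/backend -- \
  curl -s --max-time 5 http://database-svc.database-tier   # allowed: should succeed
kubectl exec -n backend-tier deploy/backend -- \
  curl -s --max-time 5 http://frontend-svc.frontend-tier   # blocked: should time out
```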

● And finally, dive into a pod residing in the database-tier and observe how the network policies are functioning:
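A sketch for the database-tier, where all egress (including DNS) is denied, so any outbound request should fail:

```sh
kubectl exec -n database-tier deploy/database -- \
  curl -s --max-time 5 http://backend-svc.backend-tier     # blocked: egress is denied
```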

In the above demonstration, we can see that the network policies applied to the corresponding namespaces are functioning properly.

If you found this article helpful, please don’t forget to hit the Clap and Follow buttons to help me write more articles like this.
Thank You 🖤

👉 All Articles on Kubernetes

