How to Secure Kubernetes Using Network Policies

An illustrated guide to Kubernetes network policies

Gaurav Agarwal
Apr 22 · 9 min read
Photo by Kushagra Kevat on Unsplash

With more and more organisations adopting Kubernetes and running it in production, it is necessary to understand its core concepts and secure it appropriately. Kubernetes replaces the traditional model of running a separate virtual machine for every application: you can forget about the underlying infrastructure and simply deploy pods in a cluster of generic nodes. That not only simplifies the architecture but also makes managing the infrastructure easier.

Kubernetes is an open-source container orchestration platform, and the code base is available online on GitHub. While this enables contribution from the community, it also provides hackers with an opportunity to find loopholes and prepare for an attack. Most automation tools that set up Kubernetes cater to a variety of users, and therefore they do not enforce all security by default. You will have to make conscious attempts to apply the appropriate security policy in your Kubernetes cluster, and the creation of network policies is one of them.

Most organisations run a tiered architecture where they group applications according to the function they perform. The most common is the three-tier architecture. There is a web tier responsible for hosting the user interface and experience applications. The business tier hosts business APIs to perform functions (also known as the middleware), and the data tier runs back-end applications, like databases.

Kubernetes, by default, allows all pods to communicate amongst themselves, for simplicity. However, you can use network security policies and ingresses to enforce a tiered architecture within Kubernetes. In simple terms, network security policies in Kubernetes are analogous to firewalls. Organisations running a traditional architecture will often have firewalls to allow communication only between the required tiers or servers.

However, unlike modern firewalls, network policies work on the Layer 3/Layer 4 segmentation model rather than at Layer 7, the more advanced model used by most modern firewalls and threat-detection software. But having some control is better than having none at all, which makes network policies a good starting point for security.

Separate Architecture Tiers into Multiple Namespaces

Let us assume that your organisation is running a three-tier architecture and you have web-based, middleware, and database applications running within your stack. You have containerised these applications and have decided to move to Kubernetes. The security team is not comfortable with the default settings, and they want you to apply a layered approach to replicate their existing architecture within Kubernetes.

The existing architecture separates servers into three logical zones, and a firewall defines communication between them. Below is how it works:

  • You don’t want a web application to interact with the database directly, although it can have outbound connectivity to the internet. That is the layer exposed to the external world.
  • The middleware should not have any outbound connectivity apart from the database layer, and it should only listen to requests from the web layer.
  • The database should not connect to the internet or any other application outside its layer and should allow connections only from the middleware layer.

In the Kubernetes world, you can zone your applications through namespaces. Below is how you can solve the problem:

  • Create a web namespace for web applications. There are no restrictions on this layer as this will be exposed to the external world and should interact with the internet.
  • Create a middleware namespace for the middleware applications. Create an ingress network policy that allows connections only from the web and middleware namespaces, and an egress network policy that allows connections only to the middleware and database namespaces.
  • Create a database namespace for the database applications. Create an ingress network policy that allows connections only from the middleware and database namespaces, and an egress network policy that allows connections only to the database namespace.

You can enforce network policies at the namespace level and at the pod level. Developers are not perfect, but you don't want to review every bit of software they write, as that kills your team's productivity. A smarter balance is to apply default network policies at the namespace level rather than relying on developers to create them at the pod level. Of course, depending on your company's security principles, you might want more granular access control. That is possible as well with Kubernetes network policies.

You need to confirm that you authenticate and authorise users through RBAC and that developers can access only their team’s namespace. Think twice before granting any cluster-level roles to team members. You should reserve cluster-level roles only for cluster admins, and network and security teams.
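As a sketch of the RBAC side (the group name middleware-devs and the exact resource list are illustrative and should be adapted to your setup), a namespaced Role and RoleBinding like the following confine a team to its own namespace. Note that networkpolicies is deliberately absent from the resource list, so developers cannot create or modify policies:

```yaml
# Illustrative only: grants the "middleware-devs" group access to common
# workload resources in the middleware namespace, and nothing cluster-wide.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: middleware-developer
  namespace: middleware
rules:
- apiGroups: ["", "apps"]
  resources: ["pods", "services", "deployments", "replicasets"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: middleware-developer-binding
  namespace: middleware
subjects:
- kind: Group
  name: middleware-devs
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: middleware-developer
  apiGroup: rbac.authorization.k8s.io
```

Because the Role is namespaced, members of the group get no access to the web or database namespaces at all.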

Apply Network Policies

Applying network policies is simple in Kubernetes. You just need to create network policy manifests and apply them to the cluster using kubectl. Ensure that you have the appropriate cluster-level permissions. Also, check that your cluster's network plugin supports network policies and that network policy enforcement is enabled. Check the official Kubernetes documentation for more details.

We will apply default network policies on the namespace level and implement the required rules to deliver the requirements. Let’s make a start.

Create namespaces

Create the web, middleware, and database namespaces and label them with tiers “web,” “middleware,” and “database,” respectively.
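One way to do this declaratively (a sketch; you could equally run kubectl create namespace followed by kubectl label) is:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: web
  labels:
    tier: web
---
apiVersion: v1
kind: Namespace
metadata:
  name: middleware
  labels:
    tier: middleware
---
apiVersion: v1
kind: Namespace
metadata:
  name: database
  labels:
    tier: database
```

The tier labels are what the network policies below match on, so they must be consistent across namespaces and policies.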

Create default network policies

The web namespace can allow all connections and does not need an egress rule either, because some components of the web applications need to communicate with applications on the internet. Therefore, we don't apply any network policy to the web namespace.

We will start by creating a policy for the middleware namespace. For testing, I have configured port 80. You need to change that to your middleware port.
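The manifest itself was embedded as a gist in the original post; reconstructed from the description that follows (so verify it against your environment), it looks like this:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: middleware-network-policy
  namespace: middleware
spec:
  # An empty podSelector applies the policy to every pod in the namespace.
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress
  ingress:
  # Allow inbound traffic only from the web and middleware namespaces on port 80.
  - from:
    - namespaceSelector:
        matchLabels:
          tier: web
    - namespaceSelector:
        matchLabels:
          tier: middleware
    ports:
    - protocol: TCP
      port: 80
  egress:
  # Allow outbound traffic only to the database and middleware namespaces on port 80.
  - to:
    - namespaceSelector:
        matchLabels:
          tier: database
    - namespaceSelector:
        matchLabels:
          tier: middleware
    ports:
    - protocol: TCP
      port: 80
  # Allow DNS resolution, without which service names cannot be looked up.
  - to:
    - namespaceSelector: {}
    ports:
    - protocol: UDP
      port: 53
```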

That becomes our default policy and therefore applies to all pods within the middleware namespace. Let us try to understand the YAML:

  • Like all Kubernetes manifests, the YAML file starts with an apiVersion. In this case, the apiVersion is networking.k8s.io/v1.
  • The Kind of Object we are creating is a NetworkPolicy.
  • The name of the network policy is middleware-network-policy in the middleware namespace.
  • The podSelector is {}, which implies all pods in the namespace.
  • The policyType signifies whether it is an Ingress or Egress policy. In our case, it is both.
  • The ingress section has a list of from declarations that define from where the traffic is allowed. In this case, it is from all namespaces that are labelled tier=web or tier=middleware on port 80.
  • The egress section has a list of to declarations that define allowed destinations of traffic. In this case, it is to all namespaces that are labelled tier=database or tier=middleware to port 80. Additionally, it also has a to declaration to allow egress traffic to UDP Port 53 for DNS resolution.

Let us now create a database network policy. For testing, I have configured port 80. You need to change that to your database port.
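Again, the original manifest was embedded as a gist; based on the rules described earlier, a matching policy would be:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: database-network-policy
  namespace: database
spec:
  # Apply to every pod in the database namespace.
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress
  ingress:
  # Allow inbound traffic only from the middleware and database namespaces on port 80.
  - from:
    - namespaceSelector:
        matchLabels:
          tier: middleware
    - namespaceSelector:
        matchLabels:
          tier: database
    ports:
    - protocol: TCP
      port: 80
  egress:
  # Allow outbound traffic only within the database namespace.
  - to:
    - namespaceSelector:
        matchLabels:
          tier: database
    ports:
    - protocol: TCP
      port: 80
```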

And that was it! You have successfully created default network policies for enforcing your organisation policy on Kubernetes.

Time for Some Testing

Let us create three NGINX deployments, one on each namespace, and see how they interact with each other.

$ kubectl create deployment nginx --image=ewoutp/docker-nginx-curl -n web
deployment.apps/nginx created
$ kubectl create deployment nginx --image=ewoutp/docker-nginx-curl -n middleware
deployment.apps/nginx created
$ kubectl create deployment nginx --image=ewoutp/docker-nginx-curl -n database
deployment.apps/nginx created
$ kubectl get deployment --all-namespaces|grep nginx
database nginx 1/1 1 1 20s
middleware nginx 1/1 1 1 33s
web nginx 1/1 1 1 65s

As we can see, we created deployments in all three namespaces.

Let's list the pods first to get their IPs.

$ kubectl get pod --all-namespaces -o wide|grep nginx
database nginx-f67f7854c-k44gg 1/1 Running 0 65s 10.52.0.3 gke-cluster-3-default-pool-48567dd4-fmlf <none> <none>
middleware nginx-f67f7854c-5l2zx 1/1 Running 0 60s 10.52.0.4 gke-cluster-3-default-pool-48567dd4-fmlf <none> <none>
web nginx-f67f7854c-ldsbb 1/1 Running 0 69s 10.52.2.5 gke-cluster-3-default-pool-48567dd4-qgng <none> <none>

Now let’s kubectl exec on the web pod to check if it can connect with the middleware pod.

And yes! We can. Let us now try connecting to the database pod from the web pod.

$ kubectl exec -it nginx-f67f7854c-ldsbb -n web -- curl 10.52.0.3
^Ccommand terminated with exit code 130

When we try to connect to the database pod from the web pod, the connection hangs until we interrupt it. We expected this, as pods in the web namespace should not communicate directly with pods in the database namespace.

What would happen if we try to connect to the database pod from the middleware pod?

We get a reply as expected!

How about middleware to the web pod?

$ kubectl exec -it nginx-f67f7854c-5l2zx -n middleware -- curl 10.52.2.5
^Ccommand terminated with exit code 130

And as we expected, it times out as well.

Let us now try database to middleware.

$ kubectl exec -it nginx-f67f7854c-k44gg -n database -- curl 10.52.0.4
^Ccommand terminated with exit code 130

And as we expected, it fails. What about the database to the web pod?

$ kubectl exec -it nginx-f67f7854c-k44gg -n database -- curl 10.52.2.5
^Ccommand terminated with exit code 130

A timeout again. We expected that, as the database could only communicate with pods within its namespace.

Let’s check whether intra-namespace communication works. For that, we will create another set of NGINX deployments called nginx-1.

$ kubectl create deployment nginx-1 --image=ewoutp/docker-nginx-curl -n web
deployment.apps/nginx-1 created
$ kubectl create deployment nginx-1 --image=ewoutp/docker-nginx-curl -n middleware
deployment.apps/nginx-1 created
$ kubectl create deployment nginx-1 --image=ewoutp/docker-nginx-curl -n database
deployment.apps/nginx-1 created
$ kubectl get deployment --all-namespaces|grep nginx-1
database nginx-1 1/1 1 1 21s
middleware nginx-1 1/1 1 1 25s
web nginx-1 1/1 1 1 32s
$ kubectl get pod --all-namespaces -o wide|grep nginx-1
database nginx-1-cd6cf6cc7-xz8lf 1/1 Running 0 64s 10.52.0.6 gke-cluster-3-default-pool-48567dd4-fmlf <none> <none>
middleware nginx-1-cd6cf6cc7-27ztk 1/1 Running 0 68s 10.52.2.6 gke-cluster-3-default-pool-48567dd4-qgng <none> <none>
web nginx-1-cd6cf6cc7-r6nj4 1/1 Running 0 75s 10.52.0.5 gke-cluster-3-default-pool-48567dd4-fmlf <none> <none>

Cool, now let’s try NGINX to nginx-1 in the web namespace.

And it works! What about middleware to middleware?

That works as well. And database to the database pod?

And as we expected, this works as well.

Let’s now expose the applications through services and see how they behave.

$ kubectl expose deployment nginx --port 80 -n web
service/nginx exposed
$ kubectl expose deployment nginx --port 80 -n middleware
service/nginx exposed
$ kubectl expose deployment nginx --port 80 -n database
service/nginx exposed
$ kubectl get svc --all-namespaces|grep nginx
database nginx ClusterIP 10.0.10.126 80/TCP 2m15s
middleware nginx ClusterIP 10.0.2.63 80/TCP 2m20s
web nginx ClusterIP 10.0.12.220 80/TCP 2m29s

Now let’s do a curl from web to middleware and check what happens.

From middleware to the database pod?

And it works as expected. What about curl from the web to the database pod with services?

$ kubectl exec -it nginx-f67f7854c-ldsbb -n web -- curl nginx.database
^Ccommand terminated with exit code 130

And as we expected, it doesn’t connect. At this point, we have successfully configured network policies to enforce network isolation between the application tiers. Unless explicitly allowed, no developer can violate the application architecture by default, and you can use RBAC to make sure developers do not have permission to create network policies.

Going the Extra Mile

So far, we have created default network policies to ensure traffic moves only in the right direction and network isolation is in place. However, we have only isolated the network based on namespaces. We can go further by applying network policies at the pod level to make communication even more granular: only the right application pods can access the right middleware components on the right ports, and only the right middleware components can connect to the right database pods on the right ports.

Kubernetes network policies are flexible enough to provide this facility, and I will briefly cover the possibilities. Let us look at the network policy manifest structure for it.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: test-network-policy
  namespace: default
spec:
  podSelector:
    matchLabels:
      role: db
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:
    - ipBlock:
        cidr: 172.17.0.0/16
        except:
        - 172.17.1.0/24
    - namespaceSelector:
        matchLabels:
          project: myproject
    - podSelector:
        matchLabels:
          role: frontend
    ports:
    - protocol: TCP
      port: 6379
  egress:
  - to:
    - ipBlock:
        cidr: 10.0.0.0/24
    ports:
    - protocol: TCP
      port: 5978
  • You can have multiple from blocks in your ingress and multiple to blocks in your egress.
  • You can match on an ipBlock, namespaceSelector, or a podSelector.
  • You can allow traffic only on specific protocols and ports. In the above example, inbound traffic is allowed only on TCP port 6379, and outbound traffic only to TCP port 5978.
  • You can also exclude a particular part of the IP range using the except declaration. In the above example, traffic from 172.17.0.0/16 is allowed, except from the subnet 172.17.1.0/24, which is a part of the 172.17.0.0/16 range.

You are free to use any permutations and combinations to make the traffic as restrictive as possible.

Conclusion

Thanks for reading! I hope you enjoyed the article. These are just some guidelines on how to enforce network policies within your Kubernetes cluster. While you should implement them, adapt them to your team’s needs and security policies.

Better Programming

Advice for programmers.

Thanks to Zack Shapiro

Gaurav Agarwal

Written by

Certified Kubernetes Administrator | Cloud Architect | DevOps Enthusiast | Connect @ https://gauravdevops.com | https://freedevtools.net
