Kubernetes Network Policies: Are They Really Useful?

Senthil Raja Chermapandian
8 min read · Sep 13, 2021


Both male and female toucans possess large, colorful bills. Their exact purpose isn't clear, though they're believed to play a role in the courtship ritual and in self-defense. While its size may deter predators, it is of little use in fighting them. The toco toucan can also regulate the flow of blood to its bill, allowing the bird to use it as a way to distribute heat away from its body. (Source: www.nationalgeographic.com)

The NSA and CISA recently published a Cybersecurity Technical Report for Kubernetes Security Hardening. It has a separate section on Network Separation and Hardening, which recommends using Kubernetes Network Policies to restrict network traffic between Pods.

Given that developing applications as distributed microservices has more or less become the norm these days, and that networking plays a foundational role in how these microservices communicate with each other, I spent some time understanding the recommendation in the report and how exactly Kubernetes Network Policies work. I've documented my learnings and findings in this blog, along with my point of view on Kubernetes Network Policies.

Benefits of Network Isolation

By default, a Pod running in a Kubernetes cluster is free to communicate with any other Pod in the cluster. This model reduces friction and overhead in developing and deploying applications to the cluster. From a security perspective, however, every Pod being able to communicate with every other Pod can pose problems. If an attacker gains access to a cluster and takes control of a Pod, the attacker is in a position to extend the attack to other running Pods. To prevent such intrusions from spreading to other Pods, it is recommended to whitelist Pod communication, i.e. implement a solution that by default prevents Pods from communicating with each other; if a Pod legitimately needs to communicate with another Pod, the traffic has to be explicitly allowed.

Kubernetes NetworkPolicy API

Kubernetes provides the NetworkPolicy API to implement a network isolation solution in the cluster. The NetworkPolicy API was first introduced as an alpha feature in v1.2 in March 2016 and transitioned to beta in v1.3 in July 2016. It took a long time for the API to stabilise, and it graduated to the stable v1 version in v1.7 in June 2017. The NetworkPolicy API is a powerful and well-thought-out solution for implementing network isolation in Kubernetes. It gives developers the ability to declaratively specify isolation policies, without the need to learn or implement complex networking configurations that often vary from one platform to another. The network plugin abstracts the complex implementation details from developers, who only need to specify the desired rules; the rest is handled by the Kubelet and the network plugin.

The API allows developers to specify policies that select a Pod or a group of Pods and configure ingress and egress rules for the selected Pod(s). The ingress and egress rules determine the sources from which the selected Pod(s) can receive traffic and the destinations to which the selected Pod(s) can send traffic. The NetworkPolicy API works at Layer 3 and Layer 4 of the Pod network. The policies administered using the API are evaluated and enforced by the CNI network plugin used in the cluster. Not all network plugins support the NetworkPolicy API; only specific ones such as Weave Net and Calico support it. Flannel, for instance, doesn't support the NetworkPolicy API.
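
To make the Layer 3/Layer 4 point concrete, here is a minimal sketch of a policy whose ingress rule combines an ipBlock (Layer 3) with a TCP port (Layer 4). The policy name, Pod label and CIDR below are made up for illustration:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-office-cidr        # hypothetical name
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: web                   # hypothetical label on the protected Pods
  policyTypes:
  - Ingress
  ingress:
  - from:
    - ipBlock:
        cidr: 192.168.10.0/24    # Layer 3: only this source range
    ports:
    - protocol: TCP
      port: 8080                 # Layer 4: only this TCP port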

By default, no network policies are applied to Pods or namespaces, resulting in unrestricted ingress and egress traffic within the Pod network. Pods become isolated through a network policy that applies to the Pod or the Pod’s namespace. Once a Pod is selected in a network policy, it rejects any connections that are not specifically allowed by any applicable policy object. Pods are selected using the podSelector and/or the namespaceSelector options. Network policy formatting may differ depending on the container network interface (CNI) plugin used for the cluster. Administrators should use a default policy selecting all Pods to deny all ingress and egress traffic and ensure any unselected Pods are isolated. Additional policies could then relax these restrictions for permissible connections. External IP addresses can be used in ingress and egress policies using ipBlock, but different CNI plugins, cloud providers, or service implementations may affect the order of NetworkPolicy processing and the rewriting of addresses within the cluster.
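
To illustrate combining the podSelector and namespaceSelector options, here is a hedged sketch of an ingress rule that admits traffic only from Pods labelled app: api running in namespaces labelled team: payments; the labels and the policy name are hypothetical:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-payments-api  # hypothetical name
  namespace: default
spec:
  podSelector: {}                # selects all Pods in the default namespace
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:         # ANDed with the podSelector below because
        matchLabels:             # both sit in the same 'from' element
          team: payments
      podSelector:
        matchLabels:
          app: api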

Example Network Policy

Let me illustrate how the NetworkPolicy API works with a very simple example. Let's assume the default namespace in a K8s cluster has two Pods: frontend and backend. Our goal is to allow only traffic from the frontend Pod to the backend Pod. The backend Pod should not be able to send traffic to the frontend Pod.
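
The policies that follow select these Pods by their run label, which is the label you get when the Pods are created with kubectl run. As a rough sketch, equivalent Pod manifests (the nginx image is only a placeholder) could look like this:

apiVersion: v1
kind: Pod
metadata:
  name: frontend
  namespace: default
  labels:
    run: frontend        # label used by the policies below
spec:
  containers:
  - name: frontend
    image: nginx         # placeholder image
---
apiVersion: v1
kind: Pod
metadata:
  name: backend
  namespace: default
  labels:
    run: backend         # label used by the policies below
spec:
  containers:
  - name: backend
    image: nginx         # placeholder image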

Create a default deny policy

The first step is to block all traffic in the default namespace. This is achieved by creating a default deny policy that selects all the Pods in the default namespace (i.e. an empty podSelector) and blocks all ingress and egress traffic. Note that the policy below still allows DNS traffic from the Pods, so that they can query the DNS service in the cluster to resolve service names. Also note that, irrespective of the policy, a Pod can always loop traffic back to itself.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny
  namespace: default
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress
  egress:
  - to:
    ports:
    - port: 53
      protocol: TCP
    - port: 53
      protocol: UDP
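
As written, the DNS exception allows egress on port 53 to any destination. A possible variant of the same policy, assuming your cluster sets the standard kubernetes.io/metadata.name label on namespaces (see "Targeting a Namespace by its name" below), scopes the DNS egress to the kube-system namespace:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny
  namespace: default
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress
  egress:
  - to:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: kube-system   # DNS egress only to kube-system
    ports:
    - port: 53
      protocol: TCP
    - port: 53
      protocol: UDP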

Allow egress traffic from Frontend Pod to Backend Pod

The next step is to selectively allow only egress traffic from the frontend Pod to the backend Pod. The configuration below selects the frontend Pod using a podSelector and specifies that only egress traffic to the backend Pod (again selected with a podSelector) is allowed. Egress traffic to any other Pod is blocked. Note that the allowed traffic for a Pod is the union of all policies that select it, so the DNS egress permitted by the default-deny policy above still applies to the frontend Pod.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: frontend
  namespace: default
spec:
  podSelector:
    matchLabels:
      run: frontend
  policyTypes:
  - Egress
  egress:
  - to:
    - podSelector:
        matchLabels:
          run: backend

Allow ingress traffic from Frontend Pod to Backend Pod

The previous policy allows egress traffic from the frontend Pod to reach the backend Pod. However, the backend Pod is not yet configured to allow the ingress traffic coming from the frontend Pod. The following policy allows that ingress traffic on the backend Pod:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend
  namespace: default
spec:
  podSelector:
    matchLabels:
      run: backend
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          run: frontend

Network Policy Recipes

The Kubernetes documentation has some sample NetworkPolicy configurations that serve as a great starting point. These can be modified and extended to fit your needs and use cases. There's another invaluable resource when it comes to using NetworkPolicy in Kubernetes: ahmetb/kubernetes-network-policy-recipes. This GitHub repo has a wealth of off-the-shelf NetworkPolicy configurations for a wide variety of use cases, from basic ones to more advanced ones.

Are they Really Useful?

Kubernetes Network Policies are really a cool thing. Having a stable API to declare and specify Pod communication policies is highly beneficial. Developers can declaratively specify policies in YAML manifests, so the policies can be installed along with the application. The policies are applied in real time, without the need to restart the running Pods. However, there are certain caveats to keep in mind before you decide to use Kubernetes Network Policies for your applications.

A simple example of restricting communication from one Pod to another in a single namespace resulted in the creation of three network policies. Imagine doing this for all of your applications running across several namespaces, with policies more advanced than the simple one I illustrated. Things can soon get unwieldy, and that can lead to inadvertent mistakes. You need to pay special attention to clearly documenting your policies, and perhaps devise a method to keep track of which policy manifests belong to which policies. Certain network plugins offer GUI-based policy managers that help you define and visualise the policies in a cluster.

As your usage of Network Policies grows, you need to perform sufficient end-to-end tests to make sure those policies do not break legitimate Pod-to-Pod communication. When an application is composed of many microservices developed by different teams, knowing the Pod communication matrix exactly beforehand can be challenging, if not impossible. So unless your end-to-end tests extensively cover all permutations and combinations, you run the risk of leaking bugs into production.

Having many Network Policies in a cluster has a performance impact on Pod networking, since the network plugin needs to evaluate and enforce the policies before routing packets to their destination. Depending on the plugin you use and the number and complexity of the policies, this could introduce a slight lag in Pod-to-Pod communication. Certain use cases and applications cannot tolerate this lag.

Service mesh solutions like Istio and Linkerd offer functionality similar to Network Policies, together with other features such as encrypting the traffic between Pods, load balancing, rate limiting, etc. Such service mesh solutions might be more appealing and suitable for certain use cases.

Recent enhancements to Network Policy

The Special Interest Group (SIG) Network created a separate NetworkPolicy subproject to unify the way users think about Network Policies in Kubernetes and to avoid API fragmentation and unnecessary complexity. The NetworkPolicy subproject aims to provide a community where people can learn about, and contribute to, the Kubernetes NetworkPolicy API and the surrounding ecosystem. The team also created a validation framework for Network Policies.

Cyclonus

In order to solve the broader problem of testing all possible Network Policy implementations, the team evolved its approach to Network Policy validation by creating Cyclonus. Cyclonus is a comprehensive Network Policy fuzzing tool that verifies a CNI provider against hundreds of different Network Policy scenarios. Cyclonus is easy to run on a cluster and determines whether your specific CNI configuration fully conforms to the many different Kubernetes NetworkPolicy API constructs.

Port Range Policies

When writing a NetworkPolicy, you can target a range of ports instead of a single port. This is achieved with the endPort field, as in the following example:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: multi-port-egress
  namespace: default
spec:
  podSelector:
    matchLabels:
      role: db
  policyTypes:
  - Egress
  egress:
  - to:
    - ipBlock:
        cidr: 10.0.0.0/24
    ports:
    - protocol: TCP
      port: 32000
      endPort: 32768

The above rule allows any Pod with the label role=db in the default namespace to communicate with any IP within the range 10.0.0.0/24 over TCP, provided that the target port is in the range 32000 to 32768.

Targeting a Namespace by its name

The Kubernetes control plane sets an immutable label kubernetes.io/metadata.name on all namespaces, provided that the NamespaceDefaultLabelName feature gate is enabled. The value of the label is the namespace name. While NetworkPolicy cannot target a namespace by its name through a dedicated field, you can use this standardized label to target a specific namespace.
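
For instance, here is a rough sketch of an ingress rule that admits traffic only from Pods in a namespace named monitoring; the namespace name and the label selecting the target Pods are hypothetical:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-monitoring    # hypothetical name
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: metrics-endpoint      # hypothetical label on the target Pods
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: monitoring   # match the namespace by its name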

Conclusion

Kubernetes Network Policy can be a useful tool for restricting the communication between Pods in a cluster. However, we need to evaluate the wider security context of the application and carefully weigh the benefits and overheads that come with using it.

👉 I tweet & blog regularly on Kubernetes and Cloud-Native Tech. Follow me on Twitter and Medium

👉 Check out kube-fledged, a kubernetes operator for creating and managing a cache of container images directly on the cluster worker nodes, so application pods start almost instantly


Senthil Raja Chermapandian

Principal SW Engineer @Ericsson | Architecting AI/ML Platforms | Cloud-Native Enthusiast | Maintainer: kube-fledged | Organizer: KCD Chennai