Building Secure Kubernetes Environments: A Practical Guide to Network Policies

Jonathan De Jesus
6 min read · Jun 23, 2024


Requirements

  • Running Kubernetes cluster with CNI plugin installed
  • Kubectl installed to interact with a cluster
  • Understanding of basic Kubernetes concepts

Introduction

In this article we will explore Network Policies and how we can use them to control traffic flow and create isolation between services. We will briefly review a couple of layers of the OSI model and see how we can use the power of a CNI (Container Network Interface) plugin to enforce policies and rules. For the practical component of this article, I have configured a local cluster using K3s with Calico as the CNI plugin.

Diagram of what we are building in this article

For this practice we will create a production namespace containing two pods and a service that will serve as the entry point for one of them. The first pod runs nginx, a popular open-source server that can be used for many purposes; in our case it will have a basic configuration and act as the target of our requests. The second pod runs a curl image. Curl (or cURL) is a command-line tool we can use to transfer data to and from servers, and we will use it to send requests to the nginx service. We will also create a develop namespace containing another curl pod, again running a curl image, which will be used to send requests to the nginx service from a different namespace.

Important Note: The ‘develop’ and ‘production’ namespaces used in the practical example are for illustrative purposes only. Be cautious, especially in production environments: misconfigurations or improperly applied policies can disrupt services and impact business operations.

Kubernetes Network Policies

In Kubernetes, all ingress and egress traffic to and from pods is allowed by default. Unless we have policies to control the traffic, all of our pods can communicate with each other across the different namespaces we might have. There are cases, however, in which we want to isolate all pods, or some of them, so that only specific services can reach them: to prevent traffic that may affect the integrity of production data, to support regulatory compliance, or to protect production from unexpected traffic caused by a bug in another environment, ensuring only intended clients, users, or authorized microservices can interact with the protected environment.

In Kubernetes we can add Network Policies to help us control the traffic that flows between our resources. They play a vital role by enforcing access control rules to create more secure environments where only approved connections are allowed.

Network Policies

If you want to control traffic flow at the IP address or port level (OSI layer 3 or 4), NetworkPolicies allow you to specify rules for traffic flow within your cluster, and also between Pods and the outside world. Your cluster must use a network plugin that supports NetworkPolicy enforcement. (https://kubernetes.io/docs/concepts/services-networking)

A network plugin allows Kubernetes to work with different network topologies; it helps create a working pod network and supports other aspects of the Kubernetes network model.

Overview of the OSI Model

OSI Model, Layers 3 and 4

Note: For network policies to work, a CNI plugin that supports NetworkPolicy enforcement must be installed in the Kubernetes cluster; otherwise the NetworkPolicy will have no effect. Popular CNI plugins that enforce network policies include Calico and Cilium (note that Flannel on its own does not enforce them).

Let’s see with a practical example how adding a NetworkPolicy to a namespace works.

Demo

To continue with our demo, let’s create two namespaces to emulate the environments we will be working with:

# Create the 'production' namespace
kubectl create ns production

----------- OUTPUT -----------
namespace/production created

# Create the 'develop' namespace
kubectl create ns develop

----------- OUTPUT -----------
namespace/develop created

Now that our two namespaces are created, let’s start adding the components needed to illustrate how Network Policies work. First, let’s create a file containing the deployment for an nginx server; we will be sending requests to this server to test how traffic flows across the pods in our namespaces.
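The article applies `nginx-deployment.yaml` without showing its contents. A minimal manifest consistent with the outputs later in the demo (deployment named `nginx-deployment`, image version matching the `Server: nginx/1.14.2` response header, and an `app: nginx` label the service can select) might look like this — the label name is an assumption:

```yaml
# nginx-deployment.yaml (illustrative sketch)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx        # assumed label; must match the service selector
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.14.2
          ports:
            - containerPort: 80
```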

Now let’s create a service to reach our nginx server. A Kubernetes Service is an abstraction that defines a set of pods and lets us communicate with them even when pods are restarted and receive new IP addresses. A Service provides a reliable way to establish communication with pods, and it also acts as a load balancer we can use to access them.
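The `nginx-service.yaml` manifest is not shown either. Given the `service/nginx created` output below, a plausible version would be a ClusterIP service named `nginx` selecting the deployment’s pods (again assuming the `app: nginx` label):

```yaml
# nginx-service.yaml (illustrative sketch)
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  selector:
    app: nginx          # assumed label on the nginx pods
  ports:
    - port: 80          # port exposed by the service
      targetPort: 80    # port the nginx container listens on
```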

# Create nginx deployment in production namespace
kubectl apply -f nginx-deployment.yaml -n production

----------- OUTPUT -----------
deployment.apps/nginx-deployment created

# Create nginx service in production namespace
kubectl apply -f nginx-service.yaml -n production

----------- OUTPUT -----------
service/nginx created

Now let’s create a deployment that will run a pod with a curl image, which we will use to send requests to our newly created nginx server.
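As with the other manifests, `curl-deployment.yaml` is applied but not shown. A common pattern is to run a curl image with a long sleep so the pod stays alive for interactive `kubectl exec` sessions; the image and command here are assumptions:

```yaml
# curl-deployment.yaml (illustrative sketch)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: curl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: curl
  template:
    metadata:
      labels:
        app: curl
    spec:
      containers:
        - name: curl
          image: curlimages/curl:latest   # assumed image; ships curl (and busybox nc)
          command: ["sleep", "infinity"]  # keep the pod running so we can exec into it
```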

# Create curl deployment in production namespace
kubectl apply -f curl-deployment.yaml -n production

----------- OUTPUT -----------
deployment.apps/curl created

# Create curl deployment in develop namespace
kubectl apply -f curl-deployment.yaml -n develop

----------- OUTPUT -----------
deployment.apps/curl created

After our deployments are created we can start testing the communication between the pods. We will use curl and netcat to do this (both tools are already included in the curl image running in the pods we just created).

Using Curl

# Send request from pod in production to another pod in production
kubectl exec -n production deploy/curl -- curl -I http://nginx

----------- OUTPUT -----------
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 612 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
HTTP/1.1 200 OK
Server: nginx/1.14.2
...

# Send request from pod in develop to a pod in production
kubectl exec -n develop deploy/curl -- curl -I http://nginx.production

----------- OUTPUT -----------
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 612 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
HTTP/1.1 200 OK
Server: nginx/1.14.2
...

Using Netcat

# Get nginx service ip
export NGINX_SERVICE_IP=$(kubectl get svc nginx -n production -o jsonpath='{.spec.clusterIP}')
# Verify the value of the NGINX_SERVICE_IP variable
echo $NGINX_SERVICE_IP
# This is an example of the IP of the service
----------- OUTPUT -----------
10.43.22.33


# Perform requests from production curl pod to production nginx pod
kubectl exec -n production deploy/curl -- nc -vz $NGINX_SERVICE_IP 80

----------- OUTPUT -----------
10.43.22.33 (10.43.22.33:80) open

# Perform requests from develop pod to production pod
kubectl exec -n develop deploy/curl -- nc -vz $NGINX_SERVICE_IP 80

----------- OUTPUT -----------
10.43.22.33 (10.43.22.33:80) open

Now let’s create a network policy that blocks all incoming traffic from other namespaces. It will be applied to all pods in the production namespace for the sake of this demo, but policies can also select pods by label and specify which ports or addresses should be blocked or allowed (https://kubernetes.io/docs/concepts/services-networking/network-policies).
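The `production-networkpolicy.yaml` manifest is not reproduced in the article. A policy consistent with the resource name in the output (`default-deny-ingress`) and the observed behavior — production-to-production requests still succeed while develop-to-production requests fail — would deny all ingress except from pods in the same namespace; this reconstruction is an assumption:

```yaml
# production-networkpolicy.yaml (illustrative sketch)
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: production
spec:
  podSelector: {}          # empty selector: applies to every pod in the namespace
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector: {}  # allow ingress only from pods in this same namespace
```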

# Create network policy for production environment
kubectl apply -f production-networkpolicy.yaml

----------- OUTPUT -----------
networkpolicy.networking.k8s.io/default-deny-ingress created

After the network policy has been applied to the production namespace, we can run the same checks again to verify whether traffic from the develop namespace is now being blocked:

# Send request from pod in production to another pod in production
kubectl exec -n production deploy/curl -- curl -I http://nginx

----------- OUTPUT -----------
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 612 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
HTTP/1.1 200 OK
Server: nginx/1.14.2
...

# Send request from pod in develop to a pod in production
kubectl exec -n develop deploy/curl -- curl -I http://nginx.production

----------- OUTPUT -----------
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
curl: (7) Failed to connect to nginx.production port 80 after 2 ms: Couldn't connect to server
command terminated with exit code 7

Thanks for reading :)
