Introduction to Kubernetes Network Policy and Calico-Based Network Policy
Introduction
Kubernetes Network Policies are a feature in Kubernetes that controls the flow of network traffic to and from pods in a cluster. Like any security layer, this one has its own significance. This level of network traffic control is an application-centric construct which allows you to specify how a pod is allowed to communicate with various network entities (including other pods). Network Policies apply to a connection with a pod on one or both ends, and are not relevant to other connections.
The entities that a Pod can communicate with are identified through a combination of the following three identifiers:
- Other pods that are running inside the Kubernetes cluster (exception: a pod cannot block access to itself)
- Namespaces
- IP blocks (exception: traffic to and from the node where a Pod is running is always allowed, regardless of the IP address of the Pod or the node)
When defining a pod- or namespace-based Network Policy, you use a selector to specify what traffic is allowed to and from the Pod(s) that match the selector. For IP-based Network Policies, rules are defined using IP blocks (CIDR ranges).
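To make this concrete, here is a minimal sketch of a policy (the policy name, labels, and CIDR below are all illustrative, not part of the test application) that combines all three identifier types in a single ingress rule:

```yaml
# Hypothetical example: allow ingress to pods labelled 'app: web' from
# (a) pods labelled 'role: frontend', (b) any pod in a namespace labelled
# 'team: payments', and (c) the 10.0.0.0/16 IP block.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-selected-sources   # illustrative name
spec:
  podSelector:
    matchLabels:
      app: web
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              role: frontend
        - namespaceSelector:
            matchLabels:
              team: payments
        - ipBlock:
            cidr: 10.0.0.0/16
```

Note that listing the three entries as siblings under a single 'from' means traffic matching any one of them is allowed; nesting a namespaceSelector and podSelector inside the same list element would instead require both to match.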
Here, we are going to discuss the default Kubernetes network policy and how we can leverage its capabilities in a running Kubernetes cluster with an actual multi-container web application, making it much more secure in terms of network flow. We are also going to take a deep dive into Calico, which is a CNI (Container Networking Interface) plugin, and learn how to write network policies based specifically on Calico.
So, what is Calico?
Calico is an open source networking and network security solution for containers (CNI), virtual machines, and native host-based workloads. Calico supports a broad range of platforms including Kubernetes, OpenShift, Mirantis Kubernetes Engine (MKE), OpenStack, and bare metal services. Container Network Interface (CNI) is a framework for dynamically configuring networking resources. It uses a group of libraries and specifications written in Go. The plugin specification defines an interface for configuring the network, provisioning IP addresses, and maintaining connectivity with multiple hosts.
In a Calico network policy, ingress and egress rules are defined independently, and you specify whether the policy applies to ingress, egress, or both using the types field.
Prerequisites
For the implementation of network policies, we need to make use of "network plugins", which are basically CNIs. You will also need a Kubernetes cluster created using kubeadm.
If you have trouble setting up a Kubernetes cluster with kubeadm and configuring a CNI (the preferred CNI is Calico, since that is what we use in this article), refer to this article, which clearly explains how to do so.
In this article, we first set up a working Kubernetes cluster (refer to the article above for instructions), then install Helm, download a Helm artifact, and run our test application in the cluster. After that, we go in depth into what network policies are and how we can make use of them.
Test Application Setup
After creating a Kubernetes cluster using kubeadm on a Linux machine, we need to set up Helm on the master node server. Follow the instructions at the link below to install Helm: https://helm.sh/docs/intro/install/
After installing Helm, we need to download the test application's Helm artifact from the GitHub repository mentioned below.
Run the following commands.
$ git clone https://github.com/arjunbnair97/trojanwall-helm-artifact.git
$ cd trojanwall-helm-artifact/
$ tar -xvf trojanwall-v2.tgz
After running the above steps, we need to run the test application in our Kubernetes cluster. But before that, we need to do some pre-work.
Run the following command to add a node selector label to the worker node.
$ kubectl label node <worker node name> kubernetes.io/nodeType=CloudNode
Next, run the Helm command to spin up the application and database containers.
$ helm upgrade --install trojanwall-v2 -n trojanwall trojanwall-v2 --create-namespace
This should set up the test application. If nothing happens, delete the release by running:
$ helm uninstall trojanwall-v2 -n trojanwall
Then re-run the Helm upgrade command.
To see the test application, go to the URL: http://<public-ip-of-master-node>:31000
Now, let's get on with the network policy tutorials.
Network policies in Kubernetes (Default)
In a Kubernetes cluster, all pods are non-isolated by default, meaning all ingress and egress traffic is allowed. Once a network policy with a matching selector is applied, the pod becomes isolated: it will reject any traffic that is not permitted by the union of the network policies that select it.
In a Kubernetes cluster, native network policies are written using the 'networking.k8s.io/v1' API version with kind 'NetworkPolicy'. We can also write network policies using CNI-based operators and controllers, which we will cover later in this article.
Each NetworkPolicy includes a policyTypes list, which may include Ingress, Egress, or both. The policyTypes field indicates whether the policy applies to ingress traffic to the selected pods, egress traffic from the selected pods, or both. If no policyTypes are specified, Ingress is always set by default, and Egress is set if the NetworkPolicy has any egress rules.
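As a sketch of that defaulting behaviour (the policy name is illustrative, reusing the test application's labels): a policy that defines only ingress rules and omits policyTypes defaults to policyTypes: [Ingress], so egress from the selected pods remains unrestricted.

```yaml
# policyTypes is omitted: since there are no egress rules, it defaults
# to [Ingress], and egress traffic from the selected pods is untouched.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: ingress-only-example   # illustrative name
  namespace: trojanwall
spec:
  podSelector:
    matchLabels:
      app: django
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: postgres-db
```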
ingress: Each NetworkPolicy may include a list of allowed ingress rules. Each rule allows traffic that matches both the 'from' and 'ports' sections.
egress: Each NetworkPolicy may include a list of allowed egress rules. Each rule allows traffic that matches both the 'to' and 'ports' sections.
Deny Traffic to a Particular Network
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-custom-policy-app
  namespace: trojanwall
spec:
  podSelector:
    matchLabels:
      app: django
  policyTypes:
    - Egress
    - Ingress
  egress:
    - to:
        - ipBlock:
            cidr: 0.0.0.0/0
            except:
              - 192.168.163.0/24
  ingress:
    - from:
        - ipBlock:
            cidr: 0.0.0.0/0
            except:
              - 192.168.163.0/24
In the above example, the policy allows the pod to communicate with the external world (both ingress and egress), but restricts traffic (both ingress and egress) to and from the 192.168.163.0/24 network range.
A Scenario Based Network Policy
Within the test application that we set up in the Kubernetes cluster, we have two pods. One is a Python application pod and the other is a PostgreSQL DB pod. The network traffic requirement is that the DB pod should allow ingress and egress traffic from/to the app pod only on port 5432, and the app pod should allow ingress and egress traffic from/to the DB pod on port 8000.
We also need to allow HTTP access to the app pod from anywhere through port 8000. The examples below illustrate this requirement. There are three network policies defined below.
App Container Traffic Flow
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: trojanwall-app-traffic-network-policy
  namespace: trojanwall
spec:
  podSelector:
    matchLabels:
      app: django
  policyTypes:
    - Egress
    - Ingress
  egress:
    - to:
        - podSelector:
            matchLabels:
              app: postgres-db
      ports:
        - protocol: TCP
          port: 5432
    - to:
        - ipBlock:
            cidr: 0.0.0.0/0
      ports:
        - protocol: TCP
          port: 8000
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: postgres-db
      ports:
        - protocol: TCP
          port: 5432
    - from:
        - ipBlock:
            cidr: 0.0.0.0/0
      ports:
        - protocol: TCP
          port: 8000
Here, we specify that the network policy applies to any pod with the label 'app: django'.
Egress traffic is allowed to any pod with the label 'app: postgres-db' on port '5432', and to any external network on port '8000'.
For ingress, traffic from any pod with the label 'app: postgres-db' on port '5432' is allowed, as is traffic from the external world on port '8000'.
DB Container Traffic Flow
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: trojanwall-db-traffic-network-policy
  namespace: trojanwall
spec:
  podSelector:
    matchLabels:
      app: postgres-db
  policyTypes:
    - Egress
    - Ingress
  egress:
    - to:
        - podSelector:
            matchLabels:
              app: django
      ports:
        - protocol: TCP
          port: 8000
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: django
      ports:
        - protocol: TCP
          port: 5432
Here, we specify that the network policy applies to any pod with the label 'app: postgres-db'.
Egress traffic is allowed only to pods with the label 'app: django' on port '8000'.
For ingress, traffic from any pod with the label 'app: django' on port '5432' is allowed.
Deny All Network Policy
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all-network-policy
  namespace: trojanwall
spec:
  podSelector: {}
  policyTypes:
    - Egress
    - Ingress
To block every other traffic flow, we also need to add a 'deny all' policy for extra security. To see the list of created network policies, run the command below.
$ kubectl get networkpolicy -A
Network policies using Calico
As mentioned above, by using Calico as the CNI, we can leverage its power to write network policies in a Kubernetes cluster to manage traffic flow. Calico-based network policies use the 'projectcalico.org/v3' apiVersion with kind 'NetworkPolicy'.
The Calico NetworkPolicy supports the following features:
- Policies can be applied to any kind of endpoint: pods/containers, VMs, and/or to host interfaces
- Policies can define rules that apply to ingress, egress, or both
- Policy rules support:
- Actions: allow, deny, log, pass
- Source and destination match criteria:
* Ports: numbered, ports in a range, and Kubernetes named ports
* Protocols: TCP, UDP, ICMP, SCTP, UDPlite, ICMPv6, protocol numbers (1–255)
* HTTP attributes (if using Istio service mesh)
* ICMP attributes
* IP version (IPv4, IPv6)
* IP or CIDR
* Endpoint selectors (using label expression to select pods, VMs, host interfaces, and/or network sets)
* Namespace selectors
* Service account selectors
- Optional packet handling controls: disable connection tracking, apply before DNAT, apply to forwarded traffic and/or locally terminated traffic
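For example, the Deny and Log actions listed above can be combined. The following sketch (the policy name is illustrative, reusing the test application's label) logs and then denies ICMP traffic to the app pods:

```yaml
# Hypothetical sketch: log, then deny, ICMP traffic to pods labelled
# 'app: django'.
apiVersion: projectcalico.org/v3
kind: NetworkPolicy
metadata:
  name: log-and-deny-icmp   # illustrative name
  namespace: trojanwall
spec:
  selector: app == 'django'
  types:
    - Ingress
  ingress:
    - action: Log
      protocol: ICMP
    - action: Deny
      protocol: ICMP
```

Within a policy, rules are evaluated in order, so the Log rule records the packet before the Deny rule drops it.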
Just like the older example with the Python app container and PostgreSQL DB container, we are going to do the same here with the Calico based network policies.
App Container Traffic Flow
---
apiVersion: projectcalico.org/v3
kind: NetworkPolicy
metadata:
  name: trojanwall-app-traffic-network-policy
  namespace: trojanwall
spec:
  order: 10
  selector: app == 'django'
  types:
    - Egress
    - Ingress
  egress:
    - action: Allow
      protocol: TCP
      source:
        selector: app == 'django'
      destination:
        selector: app == 'postgres-db'
    - action: Allow
      protocol: TCP
      source:
        selector: app == 'django'
        ports:
          - 8000
      destination:
        nets:
          - 0.0.0.0/0
  ingress:
    - action: Allow
      protocol: TCP
      source:
        selector: app == 'postgres-db'
      destination:
        selector: app == 'django'
    - action: Allow
      protocol: TCP
      source:
        nets:
          - 0.0.0.0/0
      destination:
        ports:
          - 8000
Here, we specify that the network policy applies to any pod matching the selector app == 'django'. We also set the policy order to '10' (in Calico, policies with lower order values take precedence).
Egress TCP traffic from pods matching app == 'django' to pods matching app == 'postgres-db' is allowed. A second rule allows TCP connections from app == 'django' pods to anywhere on the web through port '8000'.
For ingress, TCP traffic from pods matching app == 'postgres-db' to pods matching app == 'django' is allowed, and a second rule allows TCP connections from anywhere on the web to port '8000'.
DB Container Traffic Flow
---
apiVersion: projectcalico.org/v3
kind: NetworkPolicy
metadata:
  name: trojanwall-db-traffic-network-policy
  namespace: trojanwall
spec:
  order: 20
  selector: app == 'postgres-db'
  types:
    - Egress
    - Ingress
  egress:
    - action: Allow
      protocol: TCP
      source:
        ports:
          - 5432
      destination:
        selector: app == 'django'
  ingress:
    - action: Allow
      protocol: TCP
      source:
        selector: app == 'django'
      destination:
        ports:
          - 5432
Here, we specify that the network policy applies to any pod matching the selector app == 'postgres-db'. We also set the policy order to '20'.
Egress TCP traffic from source port '5432' is allowed only to pods matching the selector app == 'django'.
For ingress, TCP traffic from pods matching app == 'django' to port '5432' is allowed.
Deny All Network Policy
---
apiVersion: projectcalico.org/v3
kind: NetworkPolicy
metadata:
  name: default-deny
  namespace: trojanwall
spec:
  order: 30
  selector: all()
  types:
    - Ingress
    - Egress
To block every other traffic flow, we also add a deny-all policy for extra security.
Hint: To view the network policies created using Calico, run the following command:
$ kubectl get networkpolicy.projectcalico.org -A
Conclusion
As proper, streamlined security measures, we mostly use proxy configurations, firewalls, ACLs, etc. As an additional layer of security at the network level, Kubernetes offers finer-grained rules via Network Policies.
Since, by default, all pods are non-isolated and able to communicate with one another without restriction, a gap is left for security risks. Network Policies are used to limit intra-cluster communication, which means that in a well-defined, large micro-services infrastructure, network policies enable security at a pod-to-pod, microscopic level. Network Policies are rules applied at OSI layer 3 or 4 to control the traffic flow between pods. Just as network ACLs, cloud network security groups, proxy servers, and iptables do, this Kubernetes feature adds protection to your micro-services infrastructure.
Utilizing this capability of Kubernetes is therefore highly beneficial. We hope this article proves useful if you have had difficulty understanding what network policies in Kubernetes are.
