Understanding Network Policies in Kubernetes

Kirill Goltsman
Published in Supergiant.io
May 8, 2019

As you remember from an earlier Supergiant tutorial, the Kubernetes networking model allows Pods and containers running on different nodes to easily communicate with each other.

Containers in the same Pod can access each other via localhost, and Pods can access other Pods using a Service name or, if they live in different namespaces, the Service's Fully Qualified Domain Name (FQDN), for example my-service.my-namespace.svc.cluster.local with the default cluster.local domain. In both cases, kube-dns or any other DNS service deployed to your cluster will resolve the name so that Pods can reach each other.

This flat networking model is great when you want all Pods to access all other Pods. However, there are scenarios where you want to limit access to certain Pods: for example, to make some Pods “isolated” and forbid any access to them, or to block traffic from Pods or Services that are not expected to interact with a selected group of Pods. Kubernetes helps you achieve this with the NetworkPolicy resource. In what follows, we’ll show you how to define a NetworkPolicy to create “isolated” Pods or limit access to a certain group of Pods. Let’s get started!

Tutorial

To complete the examples used below, you’ll need the following prerequisites:

  • A running Kubernetes cluster. See Supergiant documentation for more information about deploying a Kubernetes cluster with Supergiant. As an alternative, you can install a single-node Kubernetes cluster on a local system using Minikube >=0.33.1.
  • The kubectl command-line tool, installed and configured to communicate with the cluster. See how to install kubectl here.

NetworkPolicy is just an API resource that defines a set of rules for Pod access. However, to enforce a network policy, we need a network plugin that supports it, such as Calico, Cilium, or Weave Net.

If you are running Minikube, Cilium is the simplest solution to test network policies. Let’s go ahead and deploy it to our local cluster.

Step 1: Deploy Cilium to Minikube

To deploy Cilium, you should have Minikube >=0.33.1 started with the following arguments:

minikube start --network-plugin=cni --memory=4096

After Minikube is started, we need to deploy the Cilium DaemonSet, the Cilium RBAC resources, and the configuration needed to connect to the etcd instance deployed to Minikube.

First, find your Kubernetes version. It’s displayed in the console when you start Minikube:

minikube start --network-plugin=cni --memory=4096
Starting local Kubernetes 1.13.3 cluster...

If you want to run Minikube with a specific Kubernetes version, pass the --kubernetes-version flag with your preferred version.
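
For example, to pin Minikube to the release used in this tutorial (the exact version value is illustrative), you could start it like this:

minikube start --network-plugin=cni --memory=4096 --kubernetes-version=v1.13.3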

Note: there are some issues when deploying Cilium with Kubernetes 1.8 and 1.9. See the details here.

Next, find the YAML file with the Cilium manifests for your Kubernetes version in the official Cilium Getting Started guide here. Finally, deploy Cilium to Minikube using the manifests for your version (e.g., we run Kubernetes 1.13.3, so we use the 1.13 manifests):

kubectl create -f https://raw.githubusercontent.com/cilium/cilium/1.4.0/examples/kubernetes/1.13/cilium-minikube.yaml
configmap/cilium-config created
daemonset.apps/cilium created
clusterrolebinding.rbac.authorization.k8s.io/cilium created
clusterrole.rbac.authorization.k8s.io/cilium created
serviceaccount/cilium created

This should deploy Cilium to the kube-system namespace. To see the list of Cilium Pods, you can run:

kubectl get pods --namespace=kube-system
NAME           READY   STATUS    RESTARTS   AGE
cilium-jf7f8   0/1     Running   0          65s

In a production multi-node environment, the Cilium DaemonSet places one Pod on each node. Each Pod then enforces network policies on the traffic using the Berkeley Packet Filter (BPF). Also note that for production use of Cilium you’ll need a key-value store (e.g., etcd). See the Cilium Kubernetes Integration Guide to learn more.
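
Before moving on, it’s worth waiting until the Cilium Pod reports Ready. A quick way to do that (assuming the DaemonSet is named cilium, as in the manifest above) is:

kubectl -n kube-system rollout status daemonset/cilium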

Step 2: Deploy Apache Web Server

Next, we need to deploy an app that we want to be managed by a Network Policy. We’re going to create a simple Apache HTTPD Deployment with two replicas. The manifest for this Deployment is pretty straightforward:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: httpd-deployment
  labels:
    app: httpd
spec:
  replicas: 2
  selector:
    matchLabels:
      app: httpd
  template:
    metadata:
      labels:
        app: httpd
    spec:
      containers:
      - name: httpd
        image: httpd
        ports:
        - containerPort: 80

Save this manifest to httpd.yaml and create the Deployment:

kubectl create -f httpd.yaml
deployment.apps/httpd-deployment created

The best way to access Pods in the Deployment is to expose them using a Service. Let’s do it with a simple one-liner like this:

kubectl expose deployment httpd-deployment --port=80
service/httpd-deployment exposed

Now let’s check if everything worked as we expected:

kubectl get svc,pod
NAME                       TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
service/httpd-deployment   ClusterIP   10.101.89.201   <none>        80/TCP    2m51s
service/kubernetes         ClusterIP   10.96.0.1       <none>        443/TCP   18m

NAME                                   READY   STATUS    RESTARTS   AGE
pod/httpd-deployment-84d45fbd4-cxg7d   1/1     Running   0          6m39s
pod/httpd-deployment-84d45fbd4-vh8lz   1/1     Running   0          6m39s

Great! The Apache Deployment and the Service are ready to go, and we can apply a NetworkPolicy to them now.
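
Optionally, before applying any policy, you can confirm that the Service is reachable from an arbitrary Pod. With no NetworkPolicy in place, this request should succeed (the Pod name pre-check is ours; give the Pod a few seconds to finish before checking its logs):

kubectl run pre-check --image=busybox --restart=Never -- wget --spider --timeout=1 httpd-deployment
kubectl logs pre-check
kubectl delete pod pre-check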

Step 3: Define Network Policy

As you remember, all Pods in your cluster are non-isolated by default, which means they can be accessed by any other Pod. However, once we apply a NetworkPolicy to a particular Pod, that Pod rejects all connections that are not allowed by that NetworkPolicy. We can define a Network Policy using the following spec:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: test-np
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: httpd
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          project: dev
    - podSelector:
        matchLabels:
          role: frontend
    ports:
    - protocol: TCP
      port: 80
  egress:
  - to:
    - ipBlock:
        cidr: 10.0.0.0/24
    ports:
    - protocol: TCP
      port: 5978

This NetworkPolicy manages the group of Pods specified in the spec.podSelector field. Thus, all Pods with the label app=httpd (i.e., our Apache endpoints) will be selected by this NetworkPolicy. Please note that if this field is left empty, the Network Policy selects all Pods in its namespace.
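
To illustrate the empty-selector case, here is a minimal sketch of a “deny all ingress” policy (the name default-deny-ingress is our own): it selects every Pod in the default namespace and, because it allows no Ingress sources, isolates all of them for incoming traffic:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: default
spec:
  podSelector: {}
  policyTypes:
  - Ingress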

The next step is to specify the policy types applied to the selected Pods. We can apply an Ingress policy to control incoming traffic, an Egress policy to control outgoing traffic from the selected Pods, or both.

Ingress rules list the traffic sources allowed to access the group of Pods specified in spec.podSelector. These sources can be specified by Pod selector, IP range, or namespace selector. For example, the podSelector under ingress matches, by label, the Pods that may access our Apache HTTPD Deployment. These Pods must run in the same namespace where the NetworkPolicy is created.

Also, we can allow access from all Pods living in a particular namespace by using the namespaceSelector field. If you specify both namespaceSelector and podSelector in a single array entry as in the example below, you will select particular Pods within particular namespaces. Please note that to enable this behavior, both namespaceSelector and podSelector should belong to a single array element. This works as if we are ANDing two source types.

...
ingress:
- from:
  - namespaceSelector:
      matchLabels:
        user: john
    podSelector:
      matchLabels:
        role: client
...

This example is different from the manifest above where namespaceSelector and podSelector are two separate match rules independent of each other. This works as ORing two traffic source types.

We can also use the ipBlock field to allow Ingress or Egress traffic from or to particular IP CIDR ranges. Pod IPs are ephemeral, so these ranges should be cluster-external IPs. For more details about using IP ranges for Ingress and Egress, please consult the Kubernetes network policies documentation.
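
For instance, an ipBlock rule can also carve exceptions out of an allowed range with an except list (the CIDR values below are purely illustrative):

...
ingress:
- from:
  - ipBlock:
      cidr: 172.17.0.0/16
      except:
      - 172.17.1.0/24
...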

Finally, we can specify the ports on which connections to our Apache Pods are allowed. In the manifest above, we allow connections on TCP port 80, the port on which our HTTPD server listens for connections.

Egress rules are very similar to Ingress rules except that they define the destination of the traffic. As in the case of Ingress, the Egress rules may be based on podSelector, namespaceSelector, and ipBlock.
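
As a sketch, an Egress rule that only lets the selected Pods reach a hypothetical database backend in the same namespace (the role=db label and port 5432 are assumptions for illustration) would look like this:

...
egress:
- to:
  - podSelector:
      matchLabels:
        role: db
  ports:
  - protocol: TCP
    port: 5432
...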

So, let’s summarize what the NetworkPolicy above does. It allows connections to all Pods with the label app=httpd (our Apache web server) on TCP port 80 in the default namespace from:

  • Any Pod that has the label role=frontend.
  • Any Pod in a namespace with the label project=dev.

Our Egress rule allows connections from any Pod in the default namespace with the label app=httpd to the CIDR range 10.0.0.0/24 on TCP port 5978.

Now that you understand how the NetworkPolicy works, save the manifest to npolicy.yml and create it:

kubectl create -f npolicy.yml
networkpolicy.networking.k8s.io/test-np created
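
If you want to double-check what was created, you can list and describe the policy; describe prints the selectors and rules in a human-readable form:

kubectl get networkpolicy test-np
kubectl describe networkpolicy test-np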

Step 4: Testing the Network Policy

Because we’ve already deployed Cilium, we can expect our NetworkPolicy to work. Let’s test it by creating a Pod with the label app=busybox, which does not match the podSelector in the Ingress rule. We’ll try to connect to the Apache server from within the Busybox container using the wget command:

apiVersion: v1
kind: Pod
metadata:
  name: pod1
  labels:
    app: busybox
spec:
  containers:
  - name: busybox
    image: busybox
    command: ["/bin/sh","-c"]
    args: ["wget --spider --timeout=1 httpd-deployment; sleep 3m"]

Save this manifest to busybox.yaml and create the Pod:

kubectl create -f busybox.yaml
pod/pod1 created

Let’s stream the Busybox logs to see what happens:

kubectl logs -f pod1
Connecting to httpd-deployment (10.101.89.201:80)
wget: download timed out

After some time, the request to the Service httpd-deployment timed out. That’s because the Busybox Pod’s label does not match the podSelector label in the Ingress rule of the Network Policy. Thus, the Pod is not allowed to access Pods in your Apache Web Server deployment.
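
As an aside, you can reproduce the same negative test without writing a manifest by launching a throwaway Pod (the name test-client is ours; the label app=busybox again falls outside the allowed sources):

kubectl run test-client --image=busybox --restart=Never --labels="app=busybox" -- wget --spider --timeout=1 httpd-deployment
kubectl logs test-client
kubectl delete pod test-client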

Hmm! Let’s create another Pod with a different label:

apiVersion: v1
kind: Pod
metadata:
  name: pod2
  labels:
    role: frontend
spec:
  containers:
  - name: busybox
    image: busybox
    command: ["/bin/sh","-c"]
    args: ["wget --spider --timeout=1 httpd-deployment; sleep 3m"]

As you see, this Pod has the label role=frontend, which is allowed by our Ingress rules. Save the manifest to pod2.yaml and create the Pod:

kubectl create -f pod2.yaml

Now, if you check the Pod’s logs, you’ll find that the wget command returned no errors. This means that the Pod has successfully connected to your Apache Service:

kubectl logs -f pod2
Connecting to httpd-deployment (10.111.90.217:80)
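
When you are done experimenting, you can clean everything up; these commands assume the resource names used throughout this tutorial:

kubectl delete networkpolicy test-np
kubectl delete pod pod1 pod2
kubectl delete service,deployment httpd-deployment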

Conclusion

That’s it! In this tutorial, you learned how to use the NetworkPolicy resource to control traffic and access to your Deployments and Pods. This feature is very useful when you want to limit access to sensitive Pods from inside and outside of the cluster.

Here we used Cilium as the network plugin enforcing the Network Policy, but you can try out other options such as Calico or Weave Net. Also, check out our latest tutorial to learn how service meshes can be used for more advanced use cases of service-to-service communication in Kubernetes.

Originally published at https://supergiant.io.
