K8s Network Policies Demystified and Simplified
A very important part of Kubernetes is how we can apply network rules dynamically through labels and selectors. Today I want to share how to easily implement these policies in your deployment.
It's easy to be lackadaisical about hardening our clusters, but it's crucial to the survivability of the application. A NetworkPolicy is an amazing tool, and it's easy to use if you follow these tips ;)
When I think of a Firewall rule I think of three things:
Who, What, and Why.
Who:
Let's define the resources we are trying to protect or expose.
What:
Decide what needs to be done. This can be blocking or allowing traffic on a port.
Why:
What is the reason for this rule? Understanding the purpose of a rule is more important than implementing it mindlessly.
Scenario
Our team wants to allow and deny traffic from parts of our corporate network and from our Pod resources inside our cluster. We have a collection of pods that require some ingress and egress traffic rules:
- Isolate pods in the default namespace with the label role=db.
- We want to accept traffic on TCP port 6379 from CIDR 172.17.0.0/16, with an exception of 172.17.1.0/24.
- We want to accept traffic from all pods belonging to any namespace labeled project=myproject.
- For additional granularity, we will also allow traffic from pods with the label role=frontend.
- We want to allow egress connections from pods in the default namespace with the label role=db to CIDR 10.0.0.0/24 on TCP port 5978.
Did you get all that? It's complicated when you read a word problem like this. If we break it down by who, what, and why, this becomes an easier problem to deal with.
Who
Pods in the default namespace with the label role=db, for both ingress and egress traffic.
What
Ingress — Any pod in a namespace with the label project=myproject
Ingress — Any pod in the default namespace with the label role=frontend
Ingress — IP addresses in the ranges 172.17.0.0–172.17.0.255 and 172.17.2.0–172.17.255.255 (i.e., all of 172.17.0.0/16 except 172.17.1.0/24)
Egress — Allow connections from any pod in the default namespace with the label role=db to CIDR 10.0.0.0/24 on TCP port 5978
Why
- We need to allow pods labeled role=frontend to make connections with our backend database.
- We need pods in namespaces labeled project=myproject to have the ability to connect to our database.
- We want to allow connections from specific CIDR blocks while making an exception for one block of addresses.
Boom!
Now we have a better understanding of what we need to define in a manifest.
Let's get into the nitty-gritty!
Let us familiarize ourselves with our API specifications.
kubectl explain networkpolicies
kubectl explain networkpolicies.spec --recursive
kubectl explain networkpolicies.spec.ingress
kubectl explain networkpolicies.spec.egress
Making a habit of checking these will make you a better Kubernetes developer ;)
Let's work from the top to the bottom.
Who: Define what pods we want this rule to apply to:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: test-network-policy
  namespace: default
spec:
  podSelector:
    matchLabels:
      role: db
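For this selector to match anything, pods must actually carry the role=db label. As a minimal sketch (the pod name and image here are illustrative, not from the scenario), a matching pod would look like:

```yaml
# Hypothetical pod that the podSelector above would match:
# it runs in the default namespace and carries the role=db label.
apiVersion: v1
kind: Pod
metadata:
  name: redis-db          # illustrative name
  namespace: default
  labels:
    role: db              # matched by spec.podSelector.matchLabels
spec:
  containers:
  - name: redis
    image: redis:7        # illustrative image
    ports:
    - containerPort: 6379
```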
What: Define what types of policies we will use:
  policyTypes:
  - Ingress
  - Egress
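Note that listing a type in policyTypes without defining any rules for it denies that traffic entirely for the selected pods. A common pattern built on this behavior is a "default deny all ingress" policy for a namespace; as a sketch:

```yaml
# Default-deny-all-ingress sketch: the empty podSelector ({}) selects
# every pod in the namespace, and with Ingress listed in policyTypes
# but no ingress rules defined, all inbound traffic is blocked.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: default
spec:
  podSelector: {}
  policyTypes:
  - Ingress
```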
Define our ingress rules
  ingress:
  - from:
    - ipBlock:
        cidr: 172.17.0.0/16
        except:
        - 172.17.1.0/24
    - namespaceSelector:
        matchLabels:
          project: myproject
    - podSelector:
        matchLabels:
          role: frontend
    ports:
    - protocol: TCP
      port: 6379
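One subtlety worth knowing: the three entries under from are OR'd together, so traffic matching any one of them is allowed. If you instead want to require that traffic come from pods that are both labeled role=frontend and in a namespace labeled project=myproject, put both selectors in a single from element:

```yaml
# AND-semantics sketch: a single `from` element containing both a
# namespaceSelector and a podSelector matches only pods satisfying
# both conditions, unlike separate list entries, which are OR'd.
ingress:
- from:
  - namespaceSelector:
      matchLabels:
        project: myproject
    podSelector:          # same list element as the namespaceSelector
      matchLabels:
        role: frontend
```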
How does this manifest piece match up with our API specs?
kubectl explain networkpolicies.spec.ingress --recursive
***
FIELDS:
   from <[]Object>
      ipBlock <Object>
         cidr <string>
         except <[]string>
      namespaceSelector <Object>
         matchExpressions <[]Object>
            key <string>
            operator <string>
            values <[]string>
         matchLabels <map[string]string>
      podSelector <Object>
         matchExpressions <[]Object>
            key <string>
            operator <string>
            values <[]string>
         matchLabels <map[string]string>
   ports <[]Object>
      port <string>
      protocol <string>
Notice that we define a from object and a ports object. Within these objects lies the opportunity to declare the specific resources you want to target.
A similar output is shown when you review our egress API spec. Notice that here we need to and ports objects as well.
kubectl explain networkpolicies.spec.egress --recursive
***
FIELDS:
   ports <[]Object>
      port <string>
      protocol <string>
   to <[]Object>
      ipBlock <Object>
         cidr <string>
         except <[]string>
      namespaceSelector <Object>
         matchExpressions <[]Object>
            key <string>
            operator <string>
            values <[]string>
         matchLabels <map[string]string>
      podSelector <Object>
         matchExpressions <[]Object>
            key <string>
            operator <string>
            values <[]string>
         matchLabels <map[string]string>
Now we can tie it all together.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: test-network-policy
  namespace: default
spec:
  podSelector:
    matchLabels:
      role: db
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:
    - ipBlock:
        cidr: 172.17.0.0/16
        except:
        - 172.17.1.0/24
    - namespaceSelector:
        matchLabels:
          project: myproject
    - podSelector:
        matchLabels:
          role: frontend
    ports:
    - protocol: TCP
      port: 6379
  egress:
  - to:
    - ipBlock:
        cidr: 10.0.0.0/24
    ports:
    - protocol: TCP
      port: 5978
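One practical caveat before applying a policy like this: once Egress appears in policyTypes, the selected pods can only send traffic that an egress rule explicitly allows, and that includes DNS lookups. A sketch of an extra egress rule permitting DNS on port 53 (adjust the ports or add a `to` selector if your cluster's DNS setup differs):

```yaml
# Sketch of an additional egress rule allowing DNS resolution.
# With no `to` clause, this rule allows port 53 traffic to any
# destination; you may want to scope it to your cluster DNS pods.
egress:
- ports:
  - protocol: UDP
    port: 53
  - protocol: TCP
    port: 53
```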
Now we can create NetworkPolicies with confidence!