The second part of my CKS series will cover how to enable audit policy logging for a Kubernetes cluster deployed with kubeadm. The reference article is here.
Audit logging (also known as an audit trail) refers to the practice of recording events and changes to a system. In the context of Kubernetes security, it's a feature of the API server that records every action and interaction users have with the cluster (or a subset of your choice). Since the API server is the only way to interact with the cluster, it is a primary target for attackers and naturally the most important service to monitor and secure.
You can configure what you’re going to log by creating an object of type Policy:
cat > /etc/kubernetes/audit-policy.yaml <<EOF
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
  # Log all requests at the Metadata level.
  - level: Metadata
EOF
There are four available levels, letting you control granularly how much of each event/resource gets logged:
None — don’t log events that match this rule.
Metadata — log request metadata (requesting user, timestamp, resource, verb, etc.) but not request or response body.
Request — log event metadata and request body but not response body. This does not apply for non-resource requests.
RequestResponse — log event metadata, request and response bodies. This does not apply for non-resource requests.
It's easiest to refer to the docs to understand how to specify the level per audited resource; during the CKS exam, you'll be able to access this documentation (just search for "Audit" on https://kubernetes.io/docs).
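As a sketch of what per-resource rules look like (the resource choices below are just illustrative assumptions, not a recommended policy):

```yaml
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
  # Don't log read-only requests on endpoints (example exclusion).
  - level: None
    verbs: ["get", "list", "watch"]
    resources:
      - group: ""            # core API group
        resources: ["endpoints"]
  # Log secrets at Metadata level only: request bodies contain sensitive data.
  - level: Metadata
    resources:
      - group: ""
        resources: ["secrets", "configmaps"]
  # Log everything else with the request body included.
  - level: Request
```

Rules are evaluated top to bottom, and the first matching rule determines the audit level for a request.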
The file by itself is not going to do much until we instruct the API server to use it. In most clusters the API server is configured as a static pod whose configuration lives in
/etc/kubernetes/manifests/kube-apiserver.yaml ; not only will we need to modify the flags of the actual command that starts the API server, we also need to make sure that the server itself can find the policy file we just created, using a volume/volumeMount pair.
Note also the options to configure the audit log file path, retention (in days) and the maximum size of the audit log (in MB).
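Put together, the relevant additions to /etc/kubernetes/manifests/kube-apiserver.yaml look roughly like this (log path and retention values are illustrative; adjust to your environment):

```yaml
spec:
  containers:
  - command:
    - kube-apiserver
    # ... existing flags ...
    - --audit-policy-file=/etc/kubernetes/audit-policy.yaml
    - --audit-log-path=/var/log/kubernetes/audit/audit.log
    - --audit-log-maxage=30    # days to retain old audit log files
    - --audit-log-maxsize=100  # max size in MB before the log is rotated
    volumeMounts:
    - mountPath: /etc/kubernetes/audit-policy.yaml
      name: audit-policy
      readOnly: true
    - mountPath: /var/log/kubernetes/audit/
      name: audit-log
  volumes:
  - name: audit-policy
    hostPath:
      path: /etc/kubernetes/audit-policy.yaml
      type: File
  - name: audit-log
    hostPath:
      path: /var/log/kubernetes/audit/
      type: DirectoryOrCreate
```

The hostPath volume is what lets the API server container actually read the policy file we created on the node; the kubelet restarts the static pod automatically once the manifest is saved.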
Bonus: troubleshoot misconfigured API-server static pod
It might happen that, due to a typo in the static pod definition, the API server won't come back up and you'll be cut off from the cluster. Don't panic! It happens and it's not difficult to recover. First: make a backup of the file before editing. This is really important! Second: since it's the
kubelet process that monitors the folder
/etc/kubernetes/manifests for changes, look into its logs:
journalctl -u kubelet -f
I often restart the kubelet process before looking at the logs, to avoid sifting through thousands of lines:
systemctl restart kubelet
Remember that since 1.19,
containerd is the standard container runtime for Kubernetes, so you won't have your familiar
docker command line; instead, you can use
ctr -n k8s.io c list | grep api
where k8s.io is the namespace in which all the Kubernetes containers run.
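Once the API server is back up with auditing enabled, each event is appended to the audit log as one JSON object per line. A quick way to see what the Metadata level captures is to parse an event; the sample below is an illustrative, abridged event (real entries in /var/log/kubernetes/audit/audit.log have the same shape but more fields):

```python
import json

# Illustrative Metadata-level audit event (abridged, not from a real cluster).
sample = '''{"kind": "Event", "apiVersion": "audit.k8s.io/v1",
"level": "Metadata", "stage": "ResponseComplete", "verb": "get",
"user": {"username": "system:node:node01"},
"objectRef": {"resource": "pods", "namespace": "default", "name": "nginx"},
"requestReceivedTimestamp": "2023-01-01T10:00:00.000000Z"}'''

event = json.loads(sample)

# Metadata level records who did what, on which resource, and when --
# but no request or response body.
print(event["user"]["username"], event["verb"],
      event["objectRef"]["resource"], event["level"])
# → system:node:node01 get pods Metadata
```

The same one-object-per-line format means you can tail the log and pipe each line through a JSON tool of your choice when investigating suspicious activity.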
I hope it helps! Check out other articles in the CKS series here