Kubernetes “x509: certificate has expired or is not yet valid” error
This is going to be a short article, and I want to write it because of the pain it took me to fix this error.
Background
On a beautiful spring morning, you wake up and try to use kubectl to check the status of your Kubernetes (K8s) pods. The thing is: you can’t run the command, because you get the following error:
x509: certificate has expired or is not yet valid
So, as usual, you say: “WTF!??! It’s been working normally for one whole year and I didn’t change anything. Why is this happening now??”
The problem is that some certificates used by K8s last only one year and, usually, you don’t set up automatic renewal because 🤷🏻♂️…maybe something could break when a new update comes, or for any other reason.
And then you spend hours searching different forums for an answer but nothing works. You get a cup of coffee. You go for a walk. You pray. Nothing.
This is usually how my days go by when I have to use K8s. It’s probably because of my lack of deep knowledge of it, but hey! There’s only so much time in life to be spent with K8s.
Enough venting 😅, let’s get straight to how to fix it. First, let me describe the current state of this application and how I’m using K8s with it.
Environment
I’m running a simple application using a local instance of Kind (Kubernetes In Docker), which facilitates the process of spinning up a K8s cluster, and I’m also using kubectl as the CLI for sending commands to the cluster.
So, the first problem to solve is finding out which certificates are expired and how to renew them. For that, you can use kubeadm. It is as simple as:
kubeadm certs check-expiration
To check for the expiration, and:
kubeadm certs renew all
To renew all of them.
You can run kubeadm certs renew --help for more options.
“But where do I run this command?” You may ask. And yes, that’s a good question.
In the environment of this article, since K8s is running inside a Docker container (because of, well, kind), you have to actually go inside the container and run the commands there, because that is where the cluster’s control plane lives.
Steps to fix
First, get the container_id:
docker ps -a
Then exec interactively in it with bash:
docker exec -it <container_id> bash
Now, run the kubeadm certs renew all command and, finally, restart kubelet for the changes to take effect:
systemctl restart kubelet
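For kind specifically, the whole renewal can also be run from the host in one go with docker exec. This is a minimal sketch, assuming the default kind cluster name (so the node container is called kind-control-plane — adjust it to whatever docker ps shows), and guarded so it does nothing on a machine without docker:

```shell
# Assumption: default kind cluster name, so the control-plane node
# container is called "kind-control-plane". Check `docker ps -a` if unsure.
node="kind-control-plane"

if command -v docker >/dev/null 2>&1; then
  # See which certificates are expired (or close to expiring).
  docker exec "$node" kubeadm certs check-expiration

  # Renew every certificate kubeadm manages...
  docker exec "$node" kubeadm certs renew all

  # ...and restart kubelet inside the node so the new certs take effect.
  docker exec "$node" systemctl restart kubelet
else
  echo "docker not found; run these commands on the machine hosting the kind node"
fi
```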
This should do the job. Right? Well, in my case it didn’t. You will probably see an error like this when trying to run kubectl outside of the container:
error: You must be logged in to the server (the server has asked for the client to provide credentials)
As outside the docker instance I only have kubectl installed, and not K8s, I don’t have the ~/.kube/config file defined (and even if I did, the certificates would not be updated automatically).
In this case, you have to copy the new certificates that were generated by the kubeadm command. Inside the docker instance, go to /etc/kubernetes and copy the client-certificate-data and client-key-data values from the admin.conf file.
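To avoid copying the values by hand, a small helper can grep them out of admin.conf. This is just a sketch (extract_value is my own helper name, not a standard tool), assuming the usual one-value-per-line kubeconfig layout:

```shell
# extract_value FILE KEY
# Prints the value of a "KEY: <value>" line from a kubeconfig-style file.
extract_value() {
  grep "${2}:" "$1" | awk '{print $2}'
}

# Usage sketch, after pulling the file out of the kind node container:
#   docker exec <container_id> cat /etc/kubernetes/admin.conf > admin.conf
#   cert=$(extract_value admin.conf client-certificate-data)
#   key=$(extract_value admin.conf client-key-data)
```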
Now, outside the docker instance, replace these values in your ~/.kube/config. This file looks something like this:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: <some-base64encoded-value>
    server: https://127.0.0.1:40739
  name: cluster-name
contexts:
- context:
    cluster: cluster-name
    user: ubuntu
  name: kind-cluster-name
current-context: kind-cluster-name
kind: Config
preferences: {}
users:
- name: ubuntu
  user:
    client-certificate-data: <update-the-value-here>
    client-key-data: <and-here>
With this configuration you should now have a cluster with updated certificates and be able to run kubectl commands outside of the docker instance.
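If you do this often, the replacement itself can be scripted too. Here is a minimal sketch with GNU sed (set_kubeconfig_value is my own helper name, not a standard command), again assuming the one-value-per-line layout shown above:

```shell
# set_kubeconfig_value FILE KEY VALUE
# Rewrites the "KEY: ..." line in FILE with the new VALUE, keeping indentation.
# Uses GNU sed's -i flag (on macOS/BSD sed you'd need `sed -i ''`).
set_kubeconfig_value() {
  local file="$1" key="$2" value="$3"
  sed -i "s|^\( *${key}:\).*|\1 ${value}|" "$file"
}

# Usage sketch, with the values copied from admin.conf:
#   set_kubeconfig_value ~/.kube/config client-certificate-data "$cert"
#   set_kubeconfig_value ~/.kube/config client-key-data "$key"
```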
Observation:
Check, in the file above, that the server is pointing to your actual docker kind instance host:port. To know which port it is running on (it probably won’t change, and in this setup it is 40739), do:
docker ps -a
And then check the port mapping (something like 127.0.0.1:40739->6443/tcp).
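Alternatively, docker port prints just that mapping, given the API server’s internal port 6443. A sketch, guarded for machines without docker, and assuming the default kind node container name:

```shell
# Assumption: default kind cluster, node container "kind-control-plane".
container="kind-control-plane"

if command -v docker >/dev/null 2>&1; then
  # Prints the host binding for internal port 6443, e.g. 127.0.0.1:40739.
  docker port "$container" 6443
else
  echo "docker not found on this machine"
fi
```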
Final words
When using K8s, run! 🏃🏻💨😅