How to Debug Kubernetes App Errors Like a Pro 3/3

JJotah
5 min read · May 1, 2023


Continuing our debugging journey, let’s move on to the service and ingress layers!

As I explained in my first blog post, I like to picture Kubernetes as a layered onion. We have already verified the lower layers, and we know Kubernetes uses services to expose applications. Let’s first collect the service IP.

kubectl get svc

Then we reconnect to the Alpine pod and check if the service responds.
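This check can be sketched as follows (assuming the debug pod is named `alpine`, and substituting the ClusterIP collected above):

```shell
# Reconnect to the Alpine debug pod
kubectl exec -it alpine -- sh

# From inside the pod, probe the service by its ClusterIP
curl SVC-IP
```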

In this case, it isn’t responding, so the service may have a problem. The first thing we should do is check the service definition.
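One way to inspect the definition (assuming the service is named `nginx`) is:

```shell
# Dump the live service definition as YAML
kubectl get svc nginx -o yaml

# Or check the selector and endpoints at a glance
kubectl describe svc nginx
```

If the Endpoints field in the describe output is empty, the selector is matching no pods.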

With this service definition in front of us, more experienced readers may have already spotted the problem we introduced when we created a deployment instead of a pod in the previous lesson. For newcomers, let me quickly explain how a service works.

A service points to a deployment or pod by matching its selector against the pods’ labels. In this case, the service selector is “run: nginx-p,” while the deployment we created is labeled “app: nginx.”
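As a sketch, the mismatch looks like this (names and ports are assumptions based on the example):

```yaml
# Service: its selector matches no pods
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  selector:
    run: nginx-p     # WRONG: no pod carries this label
  ports:
    - port: 80
      targetPort: 80
---
# Deployment: the pods are labeled app: nginx
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx   # the service selector above should be app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx
          ports:
            - containerPort: 80
```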

To confirm this theory, we will execute:

kubectl get deploy -l run=nginx-p
kubectl get deploy -l app=nginx

Knowing what the problem is, we can fix it quickly by editing the service in place. If you prefer not to edit live objects, you always have the option of changing the service definition file and applying it again.

kubectl edit svc SVC-NAME

Once saved or applied, we reconnect to the Alpine pod and test the connection via the svc IP.

kubectl get svc
kubectl exec -it alpine -- sh
curl SVC-IP

Now that our service is working correctly, we will focus on the ingress part pointing to the service.

We check the ingress definition.

Here we see the same issue as with the service: the ingress references the service name “nginx-p,” while our service is called “nginx.” The port is 80, which matches the port the service exposes, so we only need to change the service name to the correct one and verify it.
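A minimal sketch of the corrected ingress, assuming the host `test.jjotah.com` from the example and an nginx ingress class:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx
spec:
  ingressClassName: nginx
  rules:
    - host: test.jjotah.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: nginx   # was nginx-p; must match the service name
                port:
                  number: 80  # matches the service port
```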

Once the ingress is changed and saved, we check that everything works correctly from outside the cluster (from our local machine).

curl test.jjotah.com

To show that everything works end to end, we will change the Nginx welcome message and display it in the browser and the console.
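One quick way to do this (assuming the deployment is named `nginx` and uses the default nginx image layout; the message text is just an example) is to overwrite the default index page:

```shell
# Replace the default welcome page inside the running container
kubectl exec deploy/nginx -- sh -c \
  'echo "Hello from the debugging series!" > /usr/share/nginx/html/index.html'

# Verify from outside the cluster
curl test.jjotah.com
```

Note that this change lives only in the container’s filesystem and will be lost when the pod restarts.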

CONCLUSIONS:

  1. To debug, follow the top-down and bottom-up methods described in the first part of this series.
  2. You need to understand how the application works, the code, what requirements it needs, and more.
  3. Just because the application is running doesn’t mean it works correctly. A running application may lack error handling, may not expose its port correctly, or may fail for several other reasons. Never assume the application works until you have reviewed everything within your scope.
  4. Factors external to Kubernetes are essential. CDNs, firewalls, networks, storage, and other components outside the cluster are vital to its operation, so we have to review everything that may affect the application.
  5. The load balancer and DNS are also important for the application’s operation, so we need to check that the ingress-controller created the load balancer and target groups correctly, as well as the ports exposed in the load balancer and the ingress.
  6. Basic components. Although it is rare, keep in mind that etcd, the apiserver, the controller-manager, the scheduler, and the nodes can also fail. If you see failures but no errors in the ingress-controller or elsewhere, it may be something cluster-wide, and I always recommend checking the apiserver and the other control-plane components to see the cluster’s behavior.
  7. Network plugin. Like the basic components, it rarely fails, especially when the network is managed by a public cloud. However, based on my on-premises experience, it is a risk factor we must always consider.

The best thing about this technique is that we covered all the basic points, and in the future, I will work to create new blogs by examining each resource individually. The bottom-up method can even go further down by connecting to the nodes and executing commands at the container level using tools such as crictl or docker to see the container’s real-time performance instead of the pod. By going layer by layer, we can debug the application more effectively and understand where Kubernetes might be failing.
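The node-level step mentioned above can be sketched like this (assuming SSH access to the node and a containerd-based runtime with `crictl` installed; `CONTAINER-ID` is a placeholder):

```shell
# On the node: list containers managed by the runtime
crictl ps

# Inspect a specific container's logs and live resource usage
crictl logs CONTAINER-ID
crictl stats CONTAINER-ID
```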

If you have any suggestions, improvements, or ideas for future blog posts, please feel free to leave your comments. And if you need urgent help, you can always contact me on the official Kubernetes Slack channel at kubernetes-users, mentioning me (@JJotah), and I’ll try to respond and assist you as soon as possible.

Thank you very much for reading these posts, and I hope they were helpful.


JJotah

Cloud Architect | Blogger | Kubernetes Enthusiast | CKS | CKA | CKAD | Terraform | Golang | LinkedIn: Juan José Ruiz | GitHub: JJotah