K8s: How to Test Deployed Services

Martín Saporiti · Flux IT Thoughts · Jul 8, 2020


Kubernetes is a world in itself and when we deploy something, the first thing we want to do is “hit it” and test it. In this article, I’ll describe some simple ways to validate or test the services we deploy on a Kubernetes cluster. The aim is to be able to request a service deployed in K8s and get a response.

But first, some concepts…

According to Kubernetes official documentation, we can define four types of services¹:

  • ClusterIP: Exposes the Service on a cluster-internal IP. Choosing this value makes the Service only reachable from within the cluster. This is the default ServiceType.
  • NodePort: Exposes the Service on each Node’s IP at a static port (the NodePort). A ClusterIP Service, to which the NodePort Service routes, is automatically created. You’ll be able to contact the NodePort Service, from outside the cluster, by requesting <NodeIP>:<NodePort>.
  • LoadBalancer: Exposes the Service externally using a cloud provider’s load balancer. NodePort and ClusterIP Services, to which the external load balancer routes, are automatically created.
  • ExternalName: Maps the Service to the contents of the externalName field (e.g. foo.bar.example.com), by returning a CNAME record.

In this article, we'll work with a service deployed as ClusterIP, so that it is only reachable from within the cluster. That said, for various reasons we may momentarily need to call the service from outside the cluster, and we'll learn how to do that. We'll also learn how to “get inside” the cluster and call the service from another pod. Depending on the context and the urgency, all of these options are really useful.

Before we move on to these alternatives and their examples, I’ll explain the architecture of the super simple application we will test.

Example Architecture

The example we'll use to test the options we've mentioned consists of a simple application, a sort of API, that returns famous phrases or quotes. The quotes are stored in a Mongo database consumed by an application written in Go, which exposes a REST service. It's all quite basic, but it's enough for what we want to demonstrate.

The idea is that, after we've created a deployment in the K8s cluster, we request the service (the API) with a curl command, pretending to be a frontend app or another microservice. We'll call the service both from within and from outside the cluster.

The service deployment file (ClusterIP) looks like this:
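Here is a minimal sketch of that Service manifest; the name quotes-app-service and the ports match what we use later, while the app: quotes-app selector label is an assumption:

apiVersion: v1
kind: Service
metadata:
  name: quotes-app-service
spec:
  type: ClusterIP            # the default, but stated explicitly
  selector:
    app: quotes-app          # assumed label on the quotes app pods
  ports:
    - port: 80               # port the Service exposes inside the cluster
      targetPort: 3000       # port the Go app listens on in the container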

The most important thing to explain here is that, inside the container, the app listens on port 3000, while outside the container the Service exposes it on port 80. There's no mystery here.

We basically map the targetPort 3000 to port 80.

Let’s take a look at the deployed services:
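Running kubectl get services should show something along these lines (the Mongo service name and the IPs here are just illustrative placeholders):

kubectl get services

NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)     AGE
mongo-service        ClusterIP   10.96.x.x    <none>        27017/TCP   2m
quotes-app-service   ClusterIP   10.96.x.x    <none>        80/TCP      2m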

As you can see in the previous code block, there are two deployed services: the Mongo service, which handles requests to Mongo, and the service that handles requests to the quotes app. Both are ClusterIP services, so they can only be reached from within the cluster.

Now that we've explained the architecture and the main idea, let's look at some of our options.

Option 1: Port-Forward

This option consists of forwarding traffic from the port where the service (or pod) is listening to a port on our own machine. It's temporary, and precisely because of that temporary nature it isn't really insecure; it does, however, require access to the cluster through the API (a working kubectl context). The common way to do it is:
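That is, forwarding straight to a pod, where the angle-bracket parts are placeholders:

kubectl port-forward <pod-name> <local-port>:<container-port>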

In our example, it would be:
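Assuming the pod name is whatever kubectl get pods reports for the quotes app, and matching the local port 8080 and container port 3000 used below:

kubectl port-forward <quotes-app-pod-name> 8080:3000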

Then, from our terminal we execute curl localhost:8080 and we get:
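The exact JSON shape depends on the app, so take this as an illustrative response rather than the real output:

{"quote": "<a famous quote>", "author": "<its author>"}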

Great, it worked! Let’s take a look at another option:

Option 2: Bastion

A second option is to set up a (temporary) bastion inside the cluster and call the service from there. Lightweight, simple container images are usually used, since all we need is the ability to make a curl request:
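One way to do it (the image choice here is just a convenient example) is to start a throwaway Alpine pod and install curl in it:

kubectl run -it --rm bastion --image=alpine --restart=Never -- sh
# once inside the pod:
apk add --no-cache curl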

With the previous command, we get inside the container and we can execute:

curl quotes-app-service:80

and we get:

Success, it worked!

Before we move on to the next option, it's worth mentioning that in the first option we port-forwarded the container port itself; that's why we targeted port 3000. In the second option, we are calling the Service that sits in front of the pod, so we go through port 80.

Option 3: NodePort

This option consists of temporarily changing the ServiceType to NodePort. It is moderately insecure because, if we forget to revert it, we are left with a service exposed outside the cluster, which, depending on the context, may not be desirable. To do this, we have to modify the service definition in the .yml file. It will look like this:
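Something along these lines, keeping the same (assumed) selector and ports and adding a nodePort in the 30000–32767 range, 30001 here to match the call further down:

apiVersion: v1
kind: Service
metadata:
  name: quotes-app-service
spec:
  type: NodePort
  selector:
    app: quotes-app          # assumed label, same as before
  ports:
    - port: 80
      targetPort: 3000
      nodePort: 30001        # port exposed on every node of the cluster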

Having applied those changes, we execute:
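That is, we re-apply the manifest (the file name here is a placeholder) and list the services again; the output below is illustrative:

kubectl apply -f quotes-app-service.yml
kubectl get services

NAME                 TYPE       CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE
quotes-app-service   NodePort   10.96.x.x    <none>        80:30001/TCP   1m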

We should now be able to hit the EXTERNAL-IP (on port 30001) and get a response. But what happened? Why does it say <none>? Since this example runs on Minikube, a local single-node cluster, no external IP is assigned, so we have to take an extra step and get the node's IP instead:
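With Minikube that's simply:

minikube ip
192.168.64.3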

and now we can call curl http://192.168.64.3:30001/ and get:

Great, it works!

Bonus: Ingress

To conclude, let's take a look at one more option: temporarily configuring an Ingress in front of the service so we can test it. An Ingress is the entry point into the cluster, and each cluster has its own ingress controller implementation. We create it, as usual, with a .yml file:
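A sketch of such an Ingress, written against the networking.k8s.io/v1 API and routing the host quotes-app.info to our ClusterIP service (the resource name is arbitrary):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: quotes-app-ingress
spec:
  rules:
    - host: quotes-app.info
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: quotes-app-service   # the ClusterIP service from before
                port:
                  number: 80

On Minikube, remember to enable the ingress controller first with minikube addons enable ingress.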

It's important to mention that if we're testing locally, we have to modify the /etc/hosts file so that quotes-app.info maps to the cluster's IP (the Minikube IP in our case).
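With Minikube, that means adding a line like this to /etc/hosts, using the IP we got from minikube ip:

192.168.64.3  quotes-app.info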

Then, we can run curl quotes-app.info/ and get a quote.

Conclusion

This is nothing new: Kubernetes is terrific, it simplifies the application deployment through containers and allows devs (or other talents who are willing to do it) to put software into production in a super simple way. This article is my humble contribution to everyone who is learning K8s and has come across the question “how do I know if the service I deployed is running?”

You can find the source code I used here: https://github.com/fluxitsoft/quotes-k8s-sample

¹ https://kubernetes.io/docs/concepts/services-networking/service/

Know more about Flux IT: Website · Instagram · LinkedIn · Twitter · Dribbble · Breezy
