Kubernetes: Routing Internal Services Through FQDN

I remember when I was first getting into Kubernetes. Everything was new and shiny and all about scale. As I continued building Cloud Native Applications running on Kubernetes, I came across a small paragraph stating that Kubernetes has a built-in DNS server.

Of course it does, that makes so much sense.

But a built-in DNS server opens up so many opportunities: routing and masking routes in new and flexible ways, all while staying within our cluster.

In this article we are going to look at how you can create custom routes within your cluster to simplify communication between your services.

If you haven’t gone through, or even read, the first part of this series, you might be lost, or have questions about where the code is or what was done previously. Remember, this series assumes you’re using GCP and GKE. I will always provide the code and show how to test that it is working as intended.

What Does Routing Internal Services Through FQDN Look Like?

When calling external services you may be used to writing fully qualified domain names (FQDN), like the following.

// FQDN = some.url.com
// port = 80 (the default for http)
// endpoint = /service

http.get('http://some.url.com/service', (response) => {
    // handle the response...
});

However, when you are making requests within your cluster, how would you expect that to work? Pods are ephemeral, so a pod’s address changes as often as pods are created and destroyed. Not a solution.

You could use the external URL exposed by the service’s load balancer. But then you’d be making an extra hop out of the cluster and back in, wasting time and processing.

If you want to communicate with your other services without that unnecessary hop, you just need to use the internal DNS scheme built into Kubernetes. Looking at part of the service yaml file, we can pull out the values that make up the FQDN.

apiVersion: v1
kind: Service # a way for the outside world to reach the Pods
metadata:
  name: service-1 # service name
spec:
  selector: # any Pods with matching labels are included in this Service
    app: service-1
  ports: # Service ports
    - port: 80

For this service, due to the service name, the FQDN would be:

service-1.default.svc.cluster.local
The other parts? They follow the pattern:

[service name].[namespace].svc.cluster.local

Here default is the namespace of the Pods that we are targeting. Since I didn’t set a namespace, the namespace is default.

You can also shorten the FQDN by removing the svc.cluster.local, leaving you with:

service-1.default
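Since the pattern is always [service name].[namespace].svc.cluster.local, you can build these names programmatically. The helper below is a hypothetical sketch, not part of the series code, shown only to illustrate the pattern:

```javascript
// Hypothetical helper (not part of the series code): builds the
// in-cluster DNS name for a Service from its name and namespace.
function serviceFqdn(serviceName, namespace = 'default', short = false) {
    const base = `${serviceName}.${namespace}`;
    // The short form works because resolv.conf inside a Pod includes
    // svc.cluster.local in its search domains.
    return short ? base : `${base}.svc.cluster.local`;
}

console.log(serviceFqdn('service-1'));
// service-1.default.svc.cluster.local
console.log(serviceFqdn('service-1', 'default', true));
// service-1.default
```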
Why Use FQDN Routing In Your Application?

As you will see from the example provided below, it is really simple to insert parameterized routing into your application. This is extremely helpful with Kubernetes, as you might want slightly different routing based on environment or other rules.
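For instance, an environment variable could pick the target namespace, falling back to default when unset. This is a hypothetical sketch: TARGET_NAMESPACE is an assumed variable name, and FOREIGN_SERVICE simply mirrors the environment variable used later in this article:

```javascript
// Hypothetical sketch: parameterizing the routing target per environment.
// TARGET_NAMESPACE is an assumed variable name; FOREIGN_SERVICE mirrors
// the environment variable used later in this article.
function foreignServiceHost(env = process.env) {
    const namespace = env.TARGET_NAMESPACE || 'default';
    // An explicit FOREIGN_SERVICE wins; otherwise derive the FQDN.
    return env.FOREIGN_SERVICE || `service-2.${namespace}.svc.cluster.local`;
}

console.log(foreignServiceHost({}));
// service-2.default.svc.cluster.local
console.log(foreignServiceHost({ TARGET_NAMESPACE: 'staging' }));
// service-2.staging.svc.cluster.local
```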

Run An Internal FQDN Route

I’ve created an example project to highlight this feature. For this example I used Pod environment variables and a single application, injecting the necessary variables so we can see how one service can call another.

- name: FOREIGN_SERVICE
  value: service-2.default.svc.cluster.local
- name: FOREIGN_PATH
  value: /service-2

And in the application code the injected values customize the request.

router.get('/foreign', function (req, res, next) {
    // build the target URL from the injected environment values
    const url = config.get('FOREIGN_SERVICE') + config.get('FOREIGN_PATH');
    http.get(url, response => {
        let data = '';
        // collect the response body chunk by chunk
        response.on('data', chunk => {
            data += chunk;
        });
        // once complete, pass the foreign service's response through
        response.on('end', () => {
            res.send(data);
        });
    }).on('error', err => {
        throw err;
    });
});
In this code I am using service-1 to call service-2 through the /foreign endpoint. I also set up the reverse so that service-2 can call service-1. You can run the code with the following commands in Cloud Shell.

$ git clone https://github.com/jonbcampos/kubernetes-series.git
$ cd ~/kubernetes-series/communication/scripts
$ sh startup.sh
$ sh deploy.sh
$ sh check-endpoint.sh service-1

This will produce an IP address for service-1 when it is ready. You can then hit the /foreign endpoint at the following URL and see the result.

http://[service-1 IP Address]/foreign

You’ll see that service-1 calls service-2 directly, just as easily as hitting any other endpoint. This wonderful magic makes building your microservices just a little easier.


Before you leave, make sure to clean up your project so you aren’t charged for the VMs running your cluster. Return to Cloud Shell and run the teardown script. This will delete your cluster and the containers that we’ve built.

$ cd ~/kubernetes-series/communication/scripts # if necessary
$ sh teardown.sh

Jonathan Campos is an avid developer and fan of learning new things. I believe that we should always keep learning, growing, and failing. I am always a supporter of the development community and always willing to help. So if you have questions or comments on this story, please add them below. Connect with me on LinkedIn or Twitter and mention this story.