Efficiently Expose Services on Kubernetes (part 2)

In a previous blog we reviewed some of the tools and processes that we use at Stakater to efficiently expose services on Kubernetes. Let’s quickly recap.

We reviewed the service types available in Kubernetes and their drawbacks for exposing apps outside the cluster, and saw that an Ingress is the right resource for this. Ingresses require an Ingress Controller, which performs the actual exposing and routing based on the rules defined in the Ingress resources. Using the Nginx Ingress Controller gives us the added benefit of automatically creating a load balancer, such as an ELB on AWS.

The best practice we follow, however, is to run two ingress controllers, and consequently two load balancers: one for internal apps and one for external apps. This makes it easy to apply a particular set of security rules to the internal load balancer so that internal apps remain securely accessible, while deploying an app only requires specifying which ingress controller its ingress should use.

Rules in the ingress define which DNS host and path map to a particular service backend, and DNS records map the domain names to the relevant load balancer. Manually adding DNS records for every deployed service is tedious, and any such manual task does not scale across multiple services. For this we use ExternalDNS, a tool that automatically creates DNS records based on the configuration provided in the service.
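As a refresher, ExternalDNS picks the desired hostname up from the resource itself. A minimal sketch of such a service follows; the hostname and selector are illustrative:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: forecastle
  annotations:
    # ExternalDNS reads this annotation and creates the matching DNS record
    external-dns.alpha.kubernetes.io/hostname: forecastle.company.com
spec:
  type: LoadBalancer
  selector:
    app: forecastle
  ports:
    - port: 80
```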

Let us now look at some other tools that we use to enhance the automation of our process even further.

Xposer

We used ExternalDNS to automatically create DNS records for deployed services, removing that manual effort. However, the same kind of manual effort goes into creating an ingress resource for each service. Creating an ingress resource and defining the rule that maps a DNS name to a service backend is one more manual step, and we can automate it with Xposer, a tool developed by the Stakater team. We provide configuration in the annotations of our services, which Xposer reads and uses to automatically create an Ingress resource.

As an example, if we have the following annotations on our service …

apiVersion: v1
kind: Service
metadata:
  labels:
    expose: 'true'
  annotations:
    config.xposer.stakater.com/IngressNameTemplate: 'forecastle-ingress'
    config.xposer.stakater.com/IngressURLTemplate: 'forecastle.stakater.com'
  name: forecastle

… Xposer will read these in and create an Ingress with the configured values and this service as the backend. The ingress definition will be similar to the following:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: forecastle-ingress
spec:
  rules:
    - host: forecastle.stakater.com
      http:
        paths:
          - path: /
            backend:
              serviceName: forecastle
              servicePort: 80

Xposer also supports three variables that can be used in the ingress URL and name templates: the Service name, the Namespace, and the Domain. Since we would like to keep our configuration flexible and automated, these variables help reduce the number of hardcoded values in configuration.

apiVersion: v1
kind: Service
metadata:
  labels:
    expose: 'true'
  annotations:
    config.xposer.stakater.com/IngressNameTemplate: "{{`{{.Service}}-{{.Namespace}}`}}"
    config.xposer.stakater.com/IngressURLTemplate: "{{`{{.Service}}.{{.Domain}}`}}"
    config.xposer.stakater.com/Domain: company.com
  name: forecastle

At Stakater we follow the best practice of using these three variables to construct the ingress URL and name from values on the Service itself. Firstly, this reduces the chance of typographical errors introduced by manual entry. Secondly, it keeps the ingress correct if a service is renamed or moved to a different namespace. We follow either the Service.Namespace.Domain scheme, or the Service.Domain scheme when the former becomes too long or unwieldy.
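For instance, the Service.Namespace.Domain scheme only changes the URL template; the rendered hostname, shown in a comment, assumes a service named forecastle in a tools namespace:

```yaml
annotations:
  config.xposer.stakater.com/IngressNameTemplate: "{{`{{.Service}}-{{.Namespace}}`}}"
  config.xposer.stakater.com/IngressURLTemplate: "{{`{{.Service}}.{{.Namespace}}.{{.Domain}}`}}"
  config.xposer.stakater.com/Domain: company.com
  # renders to host: forecastle.tools.company.com
```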

There are of course some annotations that we would like to add to our Ingress, but if the ingress is created automatically, how do we specify them? Most notably, we would like to specify the ingress class as described in the previous blog post, to match either the external or internal ingress controller. This is done with a separate annotation on the service, which Xposer reads and processes.

annotations:
  xposer.stakater.com/annotations: |-
    kubernetes.io/ingress.class: external-ingress
    ingress.kubernetes.io/force-ssl-redirect: true
    certmanager.k8s.io/cluster-issuer: letsencrypt-production
    some.other.annotation1: some-value1
    some.other.annotation2: some-value2

Securing the connection

With an HTTPS connection, all communication is securely encrypted. A certificate enables a secure connection between the web server and the browser that connects to it. At Stakater we have used a couple of methods for handling certificates. The first is cert-manager by Jetstack, a tool that automates the issuing and even the renewal of certificates from an issuing source. With it we can use a ClusterIssuer for Let's Encrypt, which is a free, automated, and open certificate authority. The certmanager.k8s.io/cluster-issuer annotation in the snippet above indicates this. Xposer applies this annotation as-is to the Ingress it creates, where it is in turn read by cert-manager.
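A minimal sketch of such a ClusterIssuer, in the style of the cert-manager versions that used the certmanager.k8s.io API group (to match the annotation above); the email address is a placeholder:

```yaml
apiVersion: certmanager.k8s.io/v1alpha1
kind: ClusterIssuer
metadata:
  name: letsencrypt-production
spec:
  acme:
    # Let's Encrypt production ACME endpoint
    server: https://acme-v02.api.letsencrypt.org/directory
    email: admin@company.com  # hypothetical contact address
    privateKeySecretRef:
      name: letsencrypt-production
    http01: {}
```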

Another option we make use of is AWS Certificate Manager (ACM). A certificate can be issued with multiple additional names beside the root domain name. Given the conventions for ingress URLs discussed above, we can add wildcard names such as *.labs.company.com, *.tools.company.com, etc., which then cover ingresses in the labs or tools namespaces under the domain company.com. The certificate can then be installed on the load balancer.
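Attaching the ACM certificate to the ELB can be done with annotations on the ingress controller's Service. A sketch, with a hypothetical service name and certificate ARN:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: external-ingress-nginx
  annotations:
    # hypothetical ARN; replace with the ACM certificate's actual ARN
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:eu-west-1:123456789012:certificate/abcd1234
    # terminate TLS on the ELB for the HTTPS port only
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
    # the ELB forwards plain HTTP to the ingress controller pods
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
spec:
  type: LoadBalancer
  ports:
    - name: http
      port: 80
    - name: https
      port: 443
```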

Bird’s eye view: Exposing services efficiently

Monitoring

Finally, once a service is exposed, monitoring is important to make sure it is running and to know when it is not. We use the Ingress Monitor Controller, developed by the Stakater team, to automate the registration of monitors against Ingresses with an uptime checker (e.g. UptimeRobot). You can read more about monitoring with the Ingress Monitor Controller in one of our previous blogs.
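Opting an ingress in to monitoring is again annotation-driven. A sketch of the annotation we assume here; the exact annotation name may vary between Ingress Monitor Controller versions, so check the version you run:

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: forecastle-ingress
  annotations:
    # opt this ingress in to automatic monitor registration
    # (annotation name assumed; may differ across IMC versions)
    monitor.stakater.com/enabled: "true"
```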