Episode 2: Configure Linkerd for your application running in Kubernetes

Mir Shahriar Sabuj
Aug 8, 2018


This series tells a single story in four episodes: the story of Linkerd as a service mesh that can handle tens of thousands of requests per second and balance traffic across an application.

We are walking through, step by step, how Linkerd works alongside an application running in Kubernetes. If you missed the first episode, it is worth taking a moment to check it out.

These four episodes are in the following sequence:

  1. Linkerd as a Service Mesh
  2. Configure Linkerd (this episode)
  3. See Linkerd work
  4. Do not waste telemetry

Linkerd is configured via a JSON- or YAML-formatted configuration file to receive and route traffic intelligently with proper load balancing. — Episode 2

Linkerd configuration has five top-level sections (a rough skeleton follows the list below):

  1. admin
  2. routers
  3. namers
  4. telemetry
  5. usage
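
As a rough sketch, a configuration file has the following shape. The values are elided, and the comments are informal summaries rather than official descriptions.

admin: ...        # where the admin/metrics interface listens
routers: ...      # an array of router configurations (this episode)
namers: ...       # service discovery backends (this episode)
telemetry: ...    # metrics and tracing instrumentation (upcoming episode)
usage: ...        # anonymous usage reporting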

In this episode, we will learn about the routers and namers sections. In the upcoming episode, we will cover the admin and telemetry sections.

Linkerd Configuration

Linkerd’s main job is routing: accepting a request and sending that request to the correct destination.

When Service A wants to send a request to Service B, it first sends the request to the Linkerd on the same host. That Linkerd forwards the request to the next Linkerd, the one that has Service B as a neighbor. Finally, that Linkerd forwards the request to Service B on its own host. A neighborhood is formed by all services located on the same host.

This routing process is controlled by the Linkerd configuration. Let's look at some basic configuration of routers and namers.

All configurations must define a routers key, the value of which must be an array of router configurations. Routers also include servers, which define their entry points; client, which configures how clients are built; and service, which configures service-level policy.

We will learn about some basic parameters of the routers section.

protocol

Linkerd supports http, h2, thrift, or mux as the protocol of each router.
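
For example, a minimal router entry simply names its protocol. The label below is optional and purely illustrative.

routers:
- protocol: http      # or h2, thrift, mux
  label: outgoing     # optional, illustrative label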

servers

To accept requests, we need to define servers as the entry points of a router.

servers:
- port: 4041
  ip: 0.0.0.0
  maxConcurrentRequests: 250

Here, the router accepts up to 250 concurrent requests on port 4041 and processes each request in four steps.

  1. Identification
  2. Binding
  3. Resolution
  4. Load balancing

Identifier

Identifiers are responsible for creating a ‘service name’ from an incoming request; these names are then matched against the dtab.

Note: a service name is also called a logical name or logical path.

Let's see how an identifier creates logical names.

identifier:
  kind: io.l5d.header
  header: my-header

With this identifier, a service name of the form /{dstPrefix}{*headerValue} is formed from each HTTP request. If no header is provided with the request, the service name will be /{dstPrefix}, which is simply /svc, since the default value of dstPrefix is svc. This is the process of Identification.
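
For illustration (the request and header value here are hypothetical), with the default dstPrefix of /svc, identification would map:

my-header: /auth   =>  /svc/auth
(no my-header)     =>  /svc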

dtabs

Delegation tables (dtabs for short) are lists of routing rules that take a “service name” and transform it into a “client name”.

Note: a client name is also called a concrete name.

This process is known as binding. Let's see how dtab rules are used.

/svc  => /port/http;        (1)
/port => /ns/cred;          (2)
/ns   => /srv;              (3)
/srv  => /#/io.l5d.k8s;     (4)

When the service name /svc/auth is assigned to a request, the following transformations happen under the dtab rules above.

/svc/auth            => /port/http/auth               (1)
/port/http/auth      => /ns/cred/http/auth            (2)
/ns/cred/http/auth   => /srv/cred/http/auth           (3)
/srv/cred/http/auth  => /#/io.l5d.k8s/cred/http/auth  (4)

Here, the client name /#/io.l5d.k8s/cred/http/auth is generated.

namers is another top-level section of the configuration, parallel to routers. A namer binds a client name to a physical address.

Interpreter

An interpreter determines how client names are resolved. The default interpreter resolves client names via the configured namers. This process is known as Resolution.

interpreter:
  kind: default
  transformers: ...

Later in this post, we will learn about the namers configuration.

The interpreter can be paired with transformers, which are used to filter the resolved IPs for a client name. In this episode, we will learn about two of them; a configuration sketch follows their descriptions.

  1. io.l5d.k8s.daemonset

This transformer maps each destination address to a member of a given Daemonset that is on the same /24 subnet. Since each Kubernetes node is its own /24 subnet, the result is that each destination address is mapped to the member of the daemonset that is running on the same node. This can be used to redirect traffic to a reverse-proxy that runs as a daemonset.

  2. io.l5d.k8s.localnode

This transformer filters the list of addresses down to only addresses that have the same IP address as localhost. The IP of localhost is determined by doing a one-time DNS lookup of the local hostname. This transformer can be used by an incoming router to route traffic only to local destinations.
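
To make the shapes concrete, here is a sketch of how each transformer can appear under the interpreter. The namespace, port, and service values are the ones used in the example later in this post.

interpreter:
  kind: default
  transformers:
  - kind: io.l5d.k8s.daemonset    # map each address to the daemonset member on the same node
    namespace: linkerd
    port: driver-l2
    service: l5d
  # or, for an incoming router:
  # - kind: io.l5d.k8s.localnode  # keep only addresses local to this node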

Client

The client section defines how destination clients will be configured. Some important parameters live under client; a sketch follows the list below.

  • loadBalancer
  • requeueBudget (connection-level retries)
  • clientSession (behavior of established client)
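
A minimal sketch of a client section, using the ewma load balancer as an example. The requeueBudget and clientSession values are elided here, since they take their own sub-keys.

client:
  loadBalancer:
    kind: ewma          # other kinds include aperture, heap, roundRobin
  requeueBudget: ...    # budget for connection-level retries
  clientSession: ...    # behavior of established client sessions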

Namers

A namer binds a client name to a physical address. Linkerd provides support for service discovery via Kubernetes.

namers:
- kind: io.l5d.k8s
  host: localhost
  port: 8001

With this namer, all client names are resolved by Kubernetes service discovery.

The Kubernetes namer does not support TLS. Instead, you should run kubectl proxy on each host, which creates a local proxy for securely talking to the Kubernetes cluster API.

According to the above configuration, the Kubernetes cluster API is accessible at localhost:8001. In the next episode, we will learn more about this.

Example with our application

As an example, let's say Student sends a request to Auth with the appropriate header value auth. The identifier creates the service name /svc/auth for each such request under the following configuration.

identifier:
  kind: io.l5d.header.token
  header: my-header

This /svc/auth is then transformed by the dtab rules mentioned earlier, generating the client name /#/io.l5d.k8s/cred/http/auth.

We have configured namers and the interpreter as follows.

namers:
- kind: io.l5d.k8s
  host: localhost
  port: 8001

interpreter:
  kind: default
  transformers:
  - kind: io.l5d.k8s.daemonset
    namespace: linkerd
    port: driver-l2
    service: l5d

The client name /#/io.l5d.k8s/cred/http/auth is then resolved by the namer, which provides a list of endpoints of the auth service in the cred namespace. These IPs are then mapped to L2 Linkerd pod IPs by the io.l5d.k8s.daemonset transformer.

Auth: 10.40.21.16 -> L2 Linkerd: 10.40.21.15
Auth: 10.40.19.5 -> L2 Linkerd: 10.40.19.12

So, each request that L1 Linkerd receives from Student will be forwarded to one of these two resolved L2 Linkerd IPs (10.40.21.15 or 10.40.19.12). L2 Linkerd is then responsible for forwarding each request to Auth.

We label the Linkerds as L1 and L2: the Linkerd that receives calls from a neighboring service is the L1 Linkerd, and the one that finally sends requests to its neighboring service is the L2 Linkerd.

The L2 Linkerd receives the request and processes it as before. This router also uses the same dtab, so the same client name /#/io.l5d.k8s/cred/http/auth is formed. This client name is then resolved by the following configuration.

namers:
- kind: io.l5d.k8s
  host: localhost
  port: 8001

interpreter:
  kind: default
  transformers:
  - kind: io.l5d.k8s.localnode

All IPs resolved by the namer are filtered by the io.l5d.k8s.localnode transformer.

The localnode transformer filters the list of addresses down to only addresses that are on the same /24 subnet as localhost.

As a result, L2 Linkerd forwards each request to the pods of the Auth service that are on the same node.

A Glance at the Application Setup

Deploying Linkerd as a Kubernetes DaemonSet ensures that every node runs a copy of the Linkerd pod. Each service communicates with the L1 Linkerd hosted on the same node.
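
As a minimal sketch, the DaemonSet can look like the following. The names, image tags, and container port are illustrative assumptions: the Linkerd configuration discussed in this post is assumed to live in a ConfigMap named l5d-config, and a kubectl proxy sidecar gives the io.l5d.k8s namer its localhost:8001 endpoint.

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: l5d
  namespace: linkerd
spec:
  selector:
    matchLabels:
      app: l5d
  template:
    metadata:
      labels:
        app: l5d
    spec:
      volumes:
      - name: l5d-config
        configMap:
          name: l5d-config              # assumed ConfigMap holding the config discussed above
      containers:
      - name: l5d
        image: buoyantio/linkerd:1.4.6  # tag is illustrative
        args: ["/io.buoyant/linkerd/config/config.yaml"]
        ports:
        - name: driver-l2               # illustrative name/number, echoing the transformer and servers examples
          containerPort: 4041
        volumeMounts:
        - name: l5d-config
          mountPath: /io.buoyant/linkerd/config
      - name: kubectl                   # sidecar: kubectl proxy for the io.l5d.k8s namer
        image: buoyantio/kubectl:v1.8.5 # image/tag are illustrative
        args: ["proxy", "-p", "8001"]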

Upcoming Episode: See Linkerd work with your application running in Kubernetes.
