Linkerd as a Service Mesh for your application in Kubernetes Cluster

Mir Shahriar Sabuj
Aug 2, 2018 · 4 min read

This series contains four episodes of a single story. The story is about a production-ready service mesh that adds reliability, security, and visibility to cloud native applications.

It is a story of Linkerd as a service mesh that can handle tens of thousands of requests per second and balance traffic across the application. Linkerd works great in the kingdom of Kubernetes, thanks to Kubernetes' integrated service discovery and easy horizontal scaling.

We will walk through the story step by step to see how Linkerd works alongside an application running in Kubernetes.

These four episodes will come in the original sequence:

  1. Linkerd as a Service Mesh (this episode)
  2. Configure Linkerd
  3. See Linkerd work
  4. Do not waste telemetry

Linkerd can be used as a service mesh to manage, control, and monitor service-to-service communication within an application composed of multiple services. — Episode 1

What is a Service Mesh?

A service mesh is a dedicated infrastructure layer for handling service-to-service communication. It’s responsible for the reliable delivery of requests through the complex topology of services that comprise a modern, cloud native application. In practice, the service mesh is typically implemented as an array of lightweight network proxies that are deployed alongside application code, without the application needing to be aware.

Linkerd as a Service Mesh

Linkerd runs as a separate standalone proxy. As a result, it does not depend on specific languages or libraries. Applications typically use Linkerd by running instances in known locations and proxying calls through these instances — i.e., rather than connecting to destinations directly, services connect to their corresponding Linkerd instances, and treat these instances as if they were the destination services.

The real benefits of Linkerd

  1. has the ability to work with HTTP, HTTP/2, gRPC, Thrift, and Mux protocols.
  2. provides a uniform application-wide layer for adding TLS.
  3. reduces tail latency by distributing traffic intelligently using “least loaded”, “EWMA”, and “aperture” load-balancing algorithms.
  4. enables dynamic request routing and traffic shifting.
  5. provides distributed tracing.
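To make point 4 concrete: Linkerd's routing is driven by delegation tables (dtabs), and a weighted dtab entry can shift a fraction of traffic to a new service version. The sketch below uses the linkerd 1.x config format; the service names `score-v1` and `score-v2` and the 90/10 split are hypothetical:

```yaml
# Hypothetical fragment of a linkerd (1.x) router config.
# The weighted entry sends roughly 9 out of 10 requests for
# /svc/score to score-v1 and the rest to score-v2.
routers:
- protocol: http
  dtab: |
    /srv       => /#/io.l5d.k8s/default/http;
    /svc       => /srv;
    /svc/score => 9 * /srv/score-v1 & 1 * /srv/score-v2;
  servers:
  - port: 4140
    ip: 0.0.0.0
```

Changing the weights at runtime (for example, via namerd) is what enables gradual traffic shifting without redeploying services.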

In the kingdom of Kubernetes

How will Linkerd work as a service mesh alongside your application in a Kubernetes cluster?

To answer this question, we need to know the network flow within your application. We will imagine a sample application with four services.

From this diagram, we get a picture of the network flow within the application. Though we can see the routes in the diagram, let's make them clearer:

  • Student receives gRPC connections from the outside network
  • Student sends HTTP requests to the other three services
  • Number sends gRPC requests to Score

To form the service mesh, Linkerd will sit between any two services and take full responsibility for all service-to-service communication. Instead of talking to each other directly, a service relies on the mesh by passing its requests to its neighboring Linkerd. The job of that Linkerd is to forward the requests toward the destination service. But to keep traffic properly balanced, this Linkerd extends the job to another Linkerd, which forwards the original message to its own neighboring destination service.
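This two-hop relay is typically expressed in linkerd 1.x as two routers per instance: an "outgoing" router that sends requests to the Linkerd on the destination's node, and an "incoming" router that delivers them to local pods. A sketch, assuming the `default` namespace and a DaemonSet service named `l5d` (both are assumptions):

```yaml
routers:
# Outgoing: receives requests from services on this node and forwards
# them to the l5d DaemonSet pod on the destination service's node.
- protocol: http
  label: outgoing
  dtab: |
    /srv => /#/io.l5d.k8s/default/http;
    /svc => /srv;
  interpreter:
    kind: default
    transformers:
    - kind: io.l5d.k8s.daemonset
      namespace: default
      port: incoming
      service: l5d
  servers:
  - port: 4140
    ip: 0.0.0.0
# Incoming: receives requests from other Linkerds and delivers them
# only to service pods running on this same node.
- protocol: http
  label: incoming
  dtab: |
    /srv => /#/io.l5d.k8s/default/http;
    /svc => /srv;
  interpreter:
    kind: default
    transformers:
    - kind: io.l5d.k8s.localnode
  servers:
  - port: 4141
    ip: 0.0.0.0
```

The `daemonset` transformer is what turns a per-service destination into "the Linkerd running on that service's node", and the `localnode` transformer restricts the final hop to pods in the same neighborhood.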

Simple! Right?

Linkerd can be run as a sidecar, but that has a downside: deploying Linkerd per pod means resource costs scale per pod, which gets expensive when a lightweight service runs many pods. An alternative is to run Linkerd per node, so that resource costs scale per node instead.

In this service mesh setup, each node will have one Linkerd pod along with one or more instances of each of the four services. Each node forms a separate neighborhood.

There will be no service-to-service direct communication across the neighborhoods, only Linkerd is allowed to talk to another Linkerd.

From this moment, we will label the Linkerds as L1 and L2. The Linkerd that receives calls from a neighboring service is the L1 Linkerd, and the one that finally sends messages to its neighboring destination service is the L2 Linkerd.

Note: These L1 & L2 labels are imaginary

Let's go step by step to understand this service mesh better.

  1. A gRPC request comes to Student to set a number.
  2. Student calls /v1/auth/{id} to authenticate the request. With Linkerd alongside the application, Student doesn't call Auth directly but its neighbor, the L1 Linkerd hosted on the same node. The L1 Linkerd then load balances this request among the L2 Linkerds across neighborhoods.
  3. Finally, an L2 Linkerd sends this HTTP request to its neighboring Auth.
  4. When the request is authenticated, Student sends an HTTP POST request to Number via Linkerd.
  5. Student -> L1 Linkerd -> L2 Linkerd -> Number
  6. Number then sends a gRPC request to Score.
  7. This request also goes to Score through the double layer of Linkerd.

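For step 2 to work, Student must send its outbound calls to the node-local L1 Linkerd rather than to Auth directly. One common way to wire this up is the `http_proxy` environment variable, with the node's IP resolved through the Kubernetes downward API. A sketch for the Student container (the image name and port 4140 are assumptions):

```yaml
# Hypothetical fragment of the Student Deployment spec. The container
# learns its node's IP via the downward API and proxies all HTTP calls
# through the Linkerd listening on that node's hostPort 4140.
containers:
- name: student
  image: example/student:latest   # hypothetical image
  env:
  - name: NODE_NAME
    valueFrom:
      fieldRef:
        fieldPath: status.hostIP
  - name: http_proxy
    value: $(NODE_NAME):4140
```

With this in place, the application code keeps using plain service names in its URLs; the proxy setting is the only Linkerd-specific piece.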
Glance of Linkerd Setup

Deploying Linkerd as a Kubernetes DaemonSet will ensure that all nodes run a copy of the Linkerd pod.

Pods in the DaemonSet can use a hostPort, so that they are reachable via the node IPs. Clients discover their node's IP (for example, through the Kubernetes downward API) and know the port by convention.
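A minimal sketch of such a DaemonSet, with hostPorts matching the outgoing/incoming convention used above (the image tag and port numbers are illustrative, and a real deployment would also mount a linkerd config file):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: l5d
spec:
  selector:
    matchLabels:
      app: l5d
  template:
    metadata:
      labels:
        app: l5d
    spec:
      containers:
      - name: l5d
        image: buoyantio/linkerd:1.4.6   # illustrative version
        ports:
        - name: outgoing
          containerPort: 4140
          hostPort: 4140    # reachable at <node IP>:4140
        - name: incoming
          containerPort: 4141
          hostPort: 4141    # reachable at <node IP>:4141
```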


Thanks to Kazi Sadlil Rhythom
