Communication inside a Kubernetes Pod — Why do we need multi-container pods?

Approaching multi-container patterns in a pod from an “In-process” vs “Out-of-process” design philosophy.

Abhinav Kapoor
CodeX
6 min read · Jun 9, 2022

--

To appreciate the need for multi-container pods, I would like to step back a bit and start by talking about applications in general, in a container-agnostic environment.

Traditionally, an application consists of a single process which contains business logic as well as cross-cutting & infrastructure concerns (like logging, resilience, and configuration). Typically there are SDK libraries for the purpose (for example .NET Polly, which is a fault-handling library) and the application is implemented around such libraries, mostly using dependency injection; alternatively there is a custom implementation. Both approaches follow an “in-process” design.

When we put business logic and cross-cutting concerns together, we couple business logic with operational/infrastructure logic, creating a high-coupling & low-cohesion design.

But an “in-process” design has its benefits —

  1. Straightforward implementation & debugging.
  2. Simple deployment & monitoring.
  3. Fewer connections/moving parts can develop a fault.
  4. Extremely low latency. Calling a separate process instead, depending upon where it is running (same machine or a remote machine), always adds some degree of latency.

Therefore it can be a good design for simpler services, for services with a short life, or when reducing latency is a critical NFR (non-functional requirement).

On the other hand, as the service grows over time, the “in-process” design starts to show some issues —

  1. As it’s a single process, the runtime of the SDK must be compatible with the runtime of the application. This can be an issue if there is a better-suited SDK that needs a different platform/runtime, or if we want to migrate the application to a platform that the SDK does not support.
  2. Such a design demands a code change even when only one of the peripheral concerns changes. So it’s not an option when the code is unavailable or the skills needed to change the code have been lost.
  3. If an application is built over several 3rd-party SDKs, keeping up with the SDK versions can be a challenge.
  4. In the case of custom implementations, the application code is bloated with infrastructure code, which may be duplicated across all applications.

The alternative is to move the cross-cutting concerns into a separate process that supports the service, thereby creating an “out-of-process” design, where the application itself is responsible only for its business goal and talks to separate processes for cross-cutting concerns.

Example: In-process architecture vs. out-of-process architecture

As illustrated in the image above, an out-of-process architecture is extensible, flexible, scalable, and loosely coupled, which brings several benefits —

  1. The cross-cutting processes can be implemented in any technology, as long as communication with the main application is possible.
  2. Since it’s easier to mix technologies, it’s possible to bring the best-suited tool for each job rather than looking for a good generic solution.
  3. The cross-cutting process can be updated, or replaced without affecting the main application.
  4. It enables autonomous scaling.

Now, in the context of Kubernetes, a Pod is the smallest deployable unit, and each Pod has a unique IP address. A Pod is like a virtual machine for its containers, so a container can access the Pod’s network and storage.

A Pod hosting a container listening on port 4000
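A minimal manifest for such a Pod might look like the sketch below; the image name is a hypothetical placeholder, not from any real registry:

```yaml
# A single-container Pod; the container listens on port 4000.
apiVersion: v1
kind: Pod
metadata:
  name: main-app
spec:
  containers:
    - name: app
      image: example.com/main-app:1.0   # hypothetical application image
      ports:
        - containerPort: 4000           # the port the process listens on
```

Because the container shares the Pod’s network namespace, the process is reachable at the Pod’s IP on port 4000.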

And a Pod can have one or more containers. The intention of having more than one container is the same as that of an “out-of-process” design (except for scaling, as Kubernetes uses the Pod as the smallest unit of scaling).
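Since the Pod is the unit of scaling, replication happens at Pod granularity: a Deployment’s replicas field copies the whole Pod template, helper containers included. A sketch, with illustrative names:

```yaml
# Scaling happens per Pod: each replica is a full copy of the template.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: main-app
spec:
  replicas: 3                  # three identical Pods, sidecars and all
  selector:
    matchLabels:
      app: main-app
  template:
    metadata:
      labels:
        app: main-app
    spec:
      containers:
        - name: app
          image: example.com/main-app:1.0   # hypothetical image
```

This is why sidecar containers cannot be scaled independently of the main application container.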

And in a Kubernetes environment, there are all the more reasons to keep the cross-cutting/infrastructure concerns out of the application, because we need state-of-the-art tools to support observability, apply security & compliance policies, and deal with traffic management. This implies that it should be possible to switch infrastructure technologies without changing application code.

The common pattern to implement such an out-of-process design is to package the main application as an independent container and the helper services as separate containers.

Example — Multiple containers within the same pod, sharing network namespace and storage.

For example, in the image above, the application container talks to the outside world through a proxy, and since both containers are in the same network namespace, the proxy is accessible as localhost. And the logs can be collected by a log agent (a data puller) from a shared storage volume, from which the log agent streams them to a log-collection server.
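A Pod along those lines could be declared as below. The container names, images, and paths are assumptions for illustration; Envoy and Fluent Bit are merely plausible choices for the proxy and log agent:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: main-app
spec:
  containers:
    - name: app
      image: example.com/main-app:1.0    # hypothetical main application
      volumeMounts:
        - name: logs
          mountPath: /var/log/app        # the app writes its logs here
    - name: proxy
      image: envoyproxy/envoy:v1.22.0    # reachable from "app" as localhost
      ports:
        - containerPort: 8080
    - name: log-agent
      image: fluent/fluent-bit:1.9       # pulls logs from the shared volume
      volumeMounts:
        - name: logs
          mountPath: /var/log/app
          readOnly: true
  volumes:
    - name: logs
      emptyDir: {}                       # shared storage with the Pod's lifetime
```

Note the two sharing mechanisms at work: the common network namespace (the proxy is just localhost:8080 to the app) and the emptyDir volume mounted into both the app and the log agent.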

In essence, the main application process and the helper processes are co-located, co-managed, and have the same lifetime. They share the same fate in a pod & are managed by Kubernetes as a single unit. Also, note that the helper applications have no purpose other than supporting the main application. Such a pattern is called the sidecar pattern.

A motorcycle with a sidecar. The sidecar exists to enhance the motorcycle’s capabilities, and it has no significance without the motorcycle. Photo by Philipp Potocnik on Unsplash

Now, we address the ambassador-sidecar pattern —

An ambassador, whether of a country or of a piece of software, is an entity that talks to the outside world on behalf of the country/software, representing it.

While the Ambassador pattern is an independent pattern, when it’s implemented as a helper container inside the same pod as the main application it becomes a specialisation of the sidecar pattern, because of its purpose and lifetime.

Let’s take a couple of minutes to appreciate such a proxy running alongside the main application as a sidecar in each pod, because this is a distributed proxy (unlike a central proxy). It does not add the latency of a hop to a central or remote proxy, and it’s a good place to monitor traffic and apply network-management rules.

If all the communication within all the pods is taking place via such proxies, then all the communication is observable and actionable.

An ambassador sidecar running along with the main application.
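As a sketch, the wiring could look like this: the application sends all outbound traffic to localhost, where the ambassador listens. The names, images, environment variable, and port below are hypothetical:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-ambassador
spec:
  containers:
    - name: app
      image: example.com/main-app:1.0          # hypothetical application
      env:
        - name: UPSTREAM_URL                   # the app talks only to its ambassador
          value: "http://localhost:9000"
    - name: ambassador
      image: example.com/ambassador-proxy:1.0  # hypothetical proxy image
      ports:
        - containerPort: 9000                  # same network namespace as "app"
```

Swapping the ambassador image for a different proxy technology would then require no change to the application container at all.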

Summary

In this write-up I took a container-agnostic approach to cover how an “in-process” design simplifies the task at hand but creates a high-coupling, low-cohesion design, whereas an “out-of-process” design promotes a high-cohesion, low-coupling design at the cost of complexity. So it’s essential to consider the tradeoff to decide what is appropriate and what is overkill.

The sidecar is a pattern to implement an out-of-process architecture. Containers in a pod work well as sidecars because they share network and storage, and they are co-located & co-managed. They are all the more desirable because we want to keep state-of-the-art infrastructure without changing applications.

Using a sidecar proxy inside a pod gives rise to a distributed proxy at a cluster level, a concept exploited by service mesh.

I hope you found the write-up useful, please let me know your thoughts.
