Tushar Sappal
Aug 6, 2018 · 6 min read

The post is a brief extract of the Container Design Patterns that I went through while studying Docker Containers and Kubernetes.

Introduction

In the late 1980s and early 1990s, object-oriented programming revolutionised software development, popularising the approach of building applications as collections of modular components. Today we are seeing a similar revolution in distributed systems development, with the increasing popularity of microservice architectures built from containerised software components.

Containers are particularly well-suited as the fundamental “object” in distributed systems by virtue of the walls they erect at the container boundary.

This post describes three types of design patterns that have emerged in container-based distributed systems: single-container patterns for container management, single-node patterns of closely cooperating containers, and multi-node patterns for distributed algorithms.

Fortunately, the last few years have seen a dramatic rise in the adoption of Linux container technologies such as Docker. The container and the container image are exactly the abstractions needed for developing distributed systems patterns. Because containers are hermetically sealed, carry their required dependencies with them, and can be deployed atomically, they dramatically improve the process of deploying software in the datacenter or the cloud.

Single-Container Management Patterns

The container provides a natural boundary for defining an interface. Through this interface, a container can expose not only application-specific functionality but also hooks for management systems.

The traditional container management interface is extremely limited. A container effectively exports three verbs: run(), pause(), and stop(). Though this interface is useful, a richer one can provide even more utility to system developers and operators. Given the ubiquitous support for HTTP web servers in nearly every modern programming language, and the widespread support for data formats like JSON, it is easy to define an HTTP-based management API that is “implemented” by having the container host a web server at specific endpoints, in addition to its main functionality.

In the “upward” direction, the container can expose a rich set of application information, including application-specific monitoring metrics (QPS, application health, etc.), profiling information of interest to developers (threads, stack, lock contention, network message statistics, etc.), component configuration information, and component logs. For example, the Kubernetes container management system allows users to define health checks via specified HTTP endpoints.
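To make the “upward” interface concrete, here is a minimal sketch of a container serving hypothetical /healthz and /metrics endpoints alongside its main work. The endpoint names, port, and metric fields are illustrative assumptions, not any particular system’s API; Kubernetes, for instance, only requires that the container answer HTTP on whatever path and port you configure for its probes.

```python
# Minimal sketch of an HTTP management interface hosted inside a container.
# /healthz and /metrics are illustrative endpoint names.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

REQUEST_COUNT = 0  # stand-in for an application-specific metric (e.g. a QPS counter)

class ManagementHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/healthz":
            self._reply(200, b"ok")                                              # health-check hook
        elif self.path == "/metrics":
            self._reply(200, json.dumps({"requests": REQUEST_COUNT}).encode())   # monitoring hook
        else:
            self._reply(404, b"not found")

    def _reply(self, status: int, body: bytes) -> None:
        self.send_response(status)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # The management endpoints live alongside the container's main functionality.
    HTTPServer(("0.0.0.0", 8080), ManagementHandler).serve_forever()
```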

In the “downward” direction, the container interface provides a natural place to define a lifecycle that makes it easier to write software components that are controlled by a management system. For example, a cluster management system will typically assign “priorities” to tasks, with high-priority tasks guaranteed to run even when the cluster is oversubscribed. This guarantee is enforced by evicting already-running lower-priority tasks, which then have to wait until resources become available.
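One concrete “downward” lifecycle hook is graceful termination: when the management system evicts a task it typically sends SIGTERM first, giving the process a grace period to finish in-flight work before a later SIGKILL (this is how Docker and Kubernetes commonly behave, though the exact timings are configurable). A minimal sketch of a process cooperating with that lifecycle:

```python
# Sketch of a container process handling eviction gracefully: on SIGTERM it
# stops accepting new work, drains what it has, and exits before the platform
# escalates to SIGKILL.
import signal
import time

shutting_down = False

def handle_sigterm(signum, frame):
    global shutting_down
    shutting_down = True          # flag checked by the main work loop

signal.signal(signal.SIGTERM, handle_sigterm)

while not shutting_down:
    # ... do one unit of the container's real work here ...
    time.sleep(1)

# Drain and flush state here (close connections, upload buffered logs, etc.)
print("received SIGTERM, exiting cleanly")
```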

Single-Node, Multi-Container Application Patterns

Beyond the interface of a single container, there are design patterns that span multiple containers.

Sidecar Pattern

The most common multi-container pattern is the sidecar pattern. A sidecar container extends and enhances the main container. For example, the main container might be a web server, paired with a “log saver” sidecar container that collects the web server’s logs from local disk and pushes them to Datadog or a Splunk Forwarder.

Sidecar Pattern

While it is always possible to build the functionality of a sidecar container into the main container, there are benefits to using separate containers. First, the container is the unit of resource accounting and allocation, so the web server container can be configured to provide consistently low-latency responses to incoming requests, while the “log saver” container is configured to scavenge spare CPU cycles when the web server is not busy. Second, the container is the unit of packaging, so separating request serving and log saving into different containers makes it easy to divide responsibility for developing them between different teams. Third, the container is the unit of reuse, so sidecar containers can be paired with numerous different “main” containers. Fourth, the container provides a failure containment boundary, making it possible for the overall system to degrade gracefully (for example, the web server can continue serving even if the log saver has failed). Lastly, the container is the unit of deployment, which allows each piece of functionality to be upgraded and, when necessary, rolled back independently.
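As a rough illustration of the log-saver half of this pattern, the sketch below tails a log file that the web server writes to a volume shared by both containers and forwards new lines to a collector. The file path, collector URL, and the idea of POSTing raw lines are assumptions for illustration, not the API of Datadog or Splunk.

```python
# Hypothetical log-saver sidecar: tail the main container's log file from a
# shared volume and push new lines to a log collector. The path and URL are
# placeholders; a real deployment would use the vendor's agent or API.
import time
import urllib.request

LOG_PATH = "/var/log/shared/access.log"                          # volume shared with the web server
COLLECTOR_URL = "http://log-collector.example.internal/ingest"   # hypothetical endpoint

def ship(line: str) -> None:
    req = urllib.request.Request(COLLECTOR_URL, data=line.encode(), method="POST")
    urllib.request.urlopen(req, timeout=5)

with open(LOG_PATH, "r") as f:
    f.seek(0, 2)                    # start at end of file, like `tail -f`
    while True:
        line = f.readline()
        if line:
            ship(line)
        else:
            time.sleep(0.5)         # wait for the web server to write more
```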

Ambassador Pattern

Ambassador containers proxy communication to and from a main container. For example, a developer might pair an application that is speaking the memcache protocol with a twemproxy ambassador. The application believes that it is simply talking to a single memcache on localhost, but in reality twemproxy is sharding the requests across a distributed installation of multiple memcache nodes elsewhere in the cluster. This container pattern simplifies the programmer’s life in three ways: they only have to think and program in terms of their application connecting to a single server on localhost, they can test their application standalone by running a real memcache instance on their local machine instead of the ambassador, and they can reuse the twemproxy ambassador with other applications that might even be coded in different languages. Ambassadors are possible because containers on the same machine share the same localhost network interface.
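Here is a minimal sketch of the application’s side of this arrangement, assuming the third-party pymemcache client library: the code always connects to localhost and has no idea whether that port is served by a single local memcached (for standalone testing) or by a twemproxy ambassador sharding across the cluster.

```python
# The application's view under the ambassador pattern: it always talks to a
# "single memcache" on localhost. Whether that port is a local memcached
# instance or a twemproxy ambassador container sharding requests across many
# memcache nodes is invisible to this code.
# Assumes the third-party pymemcache client library is installed.
from pymemcache.client.base import Client

cache = Client(("localhost", 11211))      # never a remote or sharded address
cache.set("user:42:name", "Alice")
print(cache.get("user:42:name"))          # b'Alice'
```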

Ambassador Pattern

Adapter Pattern

In contrast to the ambassador pattern, which presents an application with a simplified view of the outside world, adapters present the outside world with a simplified, homogenised view of an application. They do this by standardising output and interfaces across multiple containers. A concrete example of the adapter pattern is adapters that ensure all containers in a system have the same monitoring interface. Applications today use a wide variety of methods to export their metrics (e.g. JMX, statsd, etc.). But it is easier for a single monitoring tool to collect, aggregate, and present metrics from a heterogeneous set of applications if all the applications present a consistent monitoring interface.
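A rough sketch of such a monitoring adapter is shown below: it polls the main container’s app-specific stats endpoint and re-exposes the numbers in one uniform text format on /metrics, so a single monitoring tool can scrape every node the same way. The upstream URL, field names, ports, and output format are all illustrative assumptions.

```python
# Hypothetical monitoring adapter container: translate an application's own,
# non-standard stats endpoint into a uniform key/value text format on /metrics.
import json
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

APP_STATS_URL = "http://localhost:9000/internal-stats"   # the app's own format (assumed)

class MetricsAdapter(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path != "/metrics":
            self.send_response(404)
            self.end_headers()
            return
        with urllib.request.urlopen(APP_STATS_URL, timeout=5) as resp:
            stats = json.load(resp)
        # Translate app-specific fields into flat, uniformly named metrics.
        lines = [f"app_requests_total {stats.get('reqs', 0)}",
                 f"app_errors_total {stats.get('errs', 0)}"]
        body = ("\n".join(lines) + "\n").encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 9100), MetricsAdapter).serve_forever()
```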

Adapter Pattern

Multi-Node Application Patterns

Let's move beyond cooperating containers on a single machine/node to patterns for coordinating multi-node distributed architectures.

Leader Election Pattern

One of the most common problems in distributed systems is leader election. While replication is commonly used to share load among multiple identical instances of a component, another, more complex use of replication is in applications that need to distinguish one replica from a set as the “leader.” The other replicas are available to quickly take the place of the leader if it fails. There are many libraries that implement this leader-election mechanism inside the application itself.

An alternative to linking a leader election library into the application is to use a leader election container. A set of leader-election containers, each one co-scheduled with an instance of the application that requires leader election, can perform election amongst themselves, and they can present a simplified HTTP API over localhost to each application container that requires leader election (e.g. becomeLeader, renewLeadership, etc.). These leader election containers can be built once, by experts in this complicated area, and then the subsequent simplified interface can be re-used by application developers regardless of their choice of implementation language.
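To make the shape of that localhost API concrete, here is a rough sketch of how an application container might poll a leader-election sidecar. The port, path, and JSON response are invented for illustration; the original description only suggests endpoint names such as becomeLeader and renewLeadership.

```python
# Hypothetical application-side use of a leader-election sidecar.
# The sidecar (co-scheduled on the same node) runs the actual election protocol
# and exposes a simple HTTP API on localhost; port, path, and JSON shape here
# are illustrative assumptions, not a standard interface.
import json
import time
import urllib.request

ELECTOR_URL = "http://localhost:4040"     # hypothetical sidecar address

def am_i_leader() -> bool:
    with urllib.request.urlopen(ELECTOR_URL + "/leader", timeout=2) as resp:
        return json.load(resp).get("is_leader", False)

while True:
    if am_i_leader():
        pass  # do leader-only work (e.g. schedule jobs, write checkpoints)
    else:
        pass  # follower: stay ready to take over
    time.sleep(5)
```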

Scatter Gather Pattern

In a scatter/gather system, an external client sends an initial request to a “root” or “parent” node. This root fans the request out to a large number of servers to perform computations in parallel. Each shard/server returns partial data, and the root gathers this data into a single response to the original request. This pattern is common in search engines. Developing such a distributed system involves a great deal of boilerplate code: fanning out the requests, gathering the responses, interacting with the client, and so on.
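A minimal sketch of the root node’s fan-out/merge step, using Python threads to query shards in parallel; the shard URLs and the JSON response shape are assumptions for illustration.

```python
# Sketch of the "root" node in a scatter/gather system: fan a query out to all
# shards in parallel, gather the partial results, and merge them into a single
# ranked response. Shard addresses and response format are hypothetical.
import json
import urllib.request
from concurrent.futures import ThreadPoolExecutor
from urllib.parse import quote

SHARDS = [f"http://shard-{i}.search.internal/query" for i in range(8)]  # hypothetical

def query_shard(url: str, q: str) -> list:
    with urllib.request.urlopen(f"{url}?q={quote(q)}", timeout=2) as resp:
        return json.load(resp)["results"]          # partial results from one shard

def scatter_gather(q: str) -> list:
    with ThreadPoolExecutor(max_workers=len(SHARDS)) as pool:
        partials = list(pool.map(lambda url: query_shard(url, q), SHARDS))  # scatter
    merged = [hit for partial in partials for hit in partial]               # gather
    return sorted(merged, key=lambda hit: hit.get("score", 0), reverse=True)
```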

Scatter Gather Pattern

References

[1] Docker Engine: http://www.docker.com

[2] Kubernetes: http://kubernetes.io

[3] Linux Containers: https://linuxcontainers.org/

[4] Docker Swarm: https://docker.com/swarm
