Structural container patterns in Kubernetes


If you have a cloud-native microservices application, you will probably consider a containerized approach together with Kubernetes for orchestration. To use containers properly, you should take container design patterns into account.

So, the first question is probably: “What are design patterns?” Design patterns help us solve particular problems with containers. In Kubernetes, there are three categories of design patterns worth mentioning:

  • Foundational: Health Probe, Predictable Demands, Automated Placements
  • Behavioral: Batch Job, Daemon Service, Stateful Service
  • Structural: Init Containers, Sidecar, Adapter, Ambassador

In this blog, we are going to discuss structural container patterns, one of the most important groups of patterns to understand when running cloud-native applications. If you are trying to find the best way to organize your containers in a pod, this is the right place for you.

Let’s dive in and learn about structural container patterns. We will provide examples along the way for better understanding.

Structural container patterns

The main idea behind structural container patterns is to avoid executing completely unrelated jobs in a single container. For example, if we have two different tasks to perform, we should create a separate container for each of them.

There are four different types of structural container patterns:

  • Init Containers
  • Sidecar
  • Adapter
  • Ambassador

Init Containers

Init containers run before the other containers in a pod. Their main purpose is to separate the life cycle of initialization-related jobs from the main application. A pod can have any number of init containers; they are executed sequentially, and each must terminate successfully before the next one starts and before the main containers run.

Here are some real-world examples of init container usage:

  • If you need to seed a PostgreSQL database before starting it, an init container is a natural fit. The steps are:
  1. The init container downloads the SQL script that is going to be executed.
  2. The script is placed in /docker-entrypoint-initdb.d, which is backed by an emptyDir volume.
  3. The main container mounts the same emptyDir volume at /docker-entrypoint-initdb.d.

After the SQL script is downloaded, the main container starts and runs everything from the /docker-entrypoint-initdb.d folder. As per the official PostgreSQL Docker image documentation, all *.sql, *.sql.gz, and *.sh files under /docker-entrypoint-initdb.d are executed when the container initializes the database.

apiVersion: v1
kind: Pod
metadata:
  name: psql
  labels:
    app: psql   # label value assumed; it was missing in the original
spec:
  initContainers:
  - name: fetch
    image: mwendler/wget
    command: ["wget", "--no-check-certificate", "https://sample-sql-queries.com/init_database.sql", "-O", "/docker-entrypoint-initdb.d/dump.sql"]
    volumeMounts:
    - mountPath: /docker-entrypoint-initdb.d
      name: dump
  containers:
  - name: psql
    image: postgres:latest
    env:
    - name: POSTGRES_PASSWORD
      value: "example"
    volumeMounts:
    - mountPath: /docker-entrypoint-initdb.d
      name: dump
  volumes:
  - emptyDir: {}
    name: dump
  • Let’s say you want to ensure that database migrations will pass on the production database before deploying the application to production. You can create an init container that dumps the production database, restores it to a lower-environment database, and then finishes. After the init container completes, the actual application containers start and run migrations against the restored copy of the production database. In our specific case, the script is mounted into the /scripts folder via a volumeMount and is named dump_and_restore_database.sh (a sketch of a ConfigMap that could hold such a script follows the manifest below).
apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
  labels:
    app: myapp
spec:
  initContainers:
  - name: database-dump-restore
    image: postgres:11
    volumeMounts:
    - name: dump-restore-postgres
      mountPath: /scripts
    command: ['sh', '-c', '/scripts/dump_and_restore_database.sh']
  containers:
  - name: myapp
    image: registry.example.com/myapp:latest   # assumed application image; runs the migrations
  volumes:
  - name: dump-restore-postgres
    configMap:
      name: dump-restore-postgres   # assumed ConfigMap holding the script (sketch below)
      defaultMode: 0755             # make the script executable
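
The original post does not show where dump_and_restore_database.sh comes from; one way to provide it is a ConfigMap mounted as the dump-restore-postgres volume. A minimal sketch, with the connection URLs and the pg_dump/psql invocation as assumptions:

apiVersion: v1
kind: ConfigMap
metadata:
  name: dump-restore-postgres
data:
  dump_and_restore_database.sh: |
    #!/bin/sh
    set -e
    # Dump production and restore the dump into the lower-environment database.
    # PROD_DATABASE_URL and LOWER_ENV_DATABASE_URL are assumed environment variables.
    pg_dump "$PROD_DATABASE_URL" > /tmp/prod.sql
    psql "$LOWER_ENV_DATABASE_URL" -f /tmp/prod.sql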
  • If you are running a database container along with an application container in your Kubernetes cluster, you do not want the application to start before the database is initialized. Again, an init container comes to the rescue: in our specific example, it runs pg_isready in a loop, causing the application containers to wait until the database is ready.
apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
  labels:
    app: myapp
spec:
  initContainers:
  - name: check-pg-ready
    image: postgres:11
    command: ['sh', '-c',
      'until pg_isready -h {{ include "tempmonitoring.databaseHost" . }} -p {{ .Values.database.port }};
      do echo waiting for database; sleep 2; done;']
  containers:
  - name: myapp
    image: registry.example.com/myapp:latest   # assumed application image; starts only once the database is ready
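
Note that the host and port passed to pg_isready above are Helm template expressions, not literal values; they are filled in when the chart is rendered. The port is read directly from the chart's values, while the host is produced by a named helper template. A values.yaml fragment along these lines (structure assumed) would supply the port:

database:
  port: 5432   # read by {{ .Values.database.port }}; the host comes from the
               # chart's "tempmonitoring.databaseHost" helper template instead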

Sidecar

If you want to extend the functionality of an existing container without changing it, the sidecar container pattern is probably what you want. We place the sidecar container in the same pod as the main container because in most cases they need to share resources, such as volumes. For example, say you have a NodeJS backend application that writes logs to a file. It would be nice to ship these logs to a log management tool so that developers can read them when debugging. In the following example, we ship a NodeJS application to the Kubernetes cluster together with a Datadog Agent sidecar container.

In this setup, we have two containers (the NodeJS API and the Datadog Agent) that share the /usr/src/app/logs folder. This folder holds the app.log file, which the Datadog Agent monitors; its content is shipped to app.datadoghq.com.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nodejs-datadog-logging
  labels:
    app.kubernetes.io/name: nodejs-logging
    app.kubernetes.io/instance: nodejs-logging
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: nodejs-datadog-logging
      app.kubernetes.io/instance: nodejs-logging
  template:
    metadata:
      labels:
        app.kubernetes.io/name: nodejs-datadog-logging
        app.kubernetes.io/instance: nodejs-logging
    spec:
      serviceAccountName: default
      volumes:
      - name: logs
        emptyDir: {}
      - name: nodejs-datadog-agent-config    # added: mounted below but missing from the original manifest
        configMap:
          name: nodejs-datadog-agent-config  # assumed ConfigMap name (sketch after the manifest)
      containers:
      - name: nodejs-datadog-logging
        image: "registry.example.com/nodejs-datadog-logging:v1"
        imagePullPolicy: Always
        ports:
        - name: http
          containerPort: 3000
          protocol: TCP
        volumeMounts:
        - name: logs
          mountPath: /usr/src/app/logs
      - name: "nodejs-logging-datadog-agent"
        image: "datadog/agent:7"
        imagePullPolicy: Always
        envFrom:
        - configMapRef:
            name: "nodejs-logging-datadog-agent-env"
        - secretRef:
            name: "nodejs-logging-datadog-agent"
        volumeMounts:
        - name: logs
          mountPath: /usr/src/app/logs
        - name: nodejs-datadog-agent-config
          mountPath: /etc/datadog-agent/conf.d/nodejslogging.d/conf.yaml
          subPath: config
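
The agent container mounts its log-collection config from the nodejs-datadog-agent-config volume. The original post does not show that ConfigMap; a minimal sketch, assuming Datadog's custom log collection format for the config key, might look like this:

apiVersion: v1
kind: ConfigMap
metadata:
  name: nodejs-datadog-agent-config
data:
  # Mounted at /etc/datadog-agent/conf.d/nodejslogging.d/conf.yaml via subPath: config
  config: |
    logs:
      - type: file
        path: /usr/src/app/logs/app.log
        service: nodejs-logging
        source: nodejs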

Adapter

The adapter container pattern is a variation of the sidecar pattern. One common use is log parsing. Let’s say we have a pod with three containers: two act as APIs, are written in different programming languages, and produce logs in different formats. The third container, the adapter, takes the raw logs, standardizes them, and stores them in a centralized place. After standardization, the logs can be processed by a log processor, e.g. Datadog. A minimal sketch of such a pod is shown below.
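
Since the original post does not include a manifest for this pattern, here is a minimal sketch, assuming two hypothetical API images writing to a shared logs volume and a Fluentd-based adapter; all image and ConfigMap names here are illustrative assumptions:

apiVersion: v1
kind: Pod
metadata:
  name: apis-with-log-adapter
spec:
  volumes:
  - name: logs
    emptyDir: {}                  # shared scratch space for the raw log files
  - name: fluentd-config
    configMap:
      name: log-adapter-fluentd   # assumed ConfigMap that parses both formats and re-emits one
  containers:
  - name: java-api
    image: registry.example.com/java-api:latest   # assumed image; writes /logs/java-api.log
    volumeMounts:
    - name: logs
      mountPath: /logs
  - name: node-api
    image: registry.example.com/node-api:latest   # assumed image; writes /logs/node-api.log
    volumeMounts:
    - name: logs
      mountPath: /logs
  - name: log-adapter
    image: fluent/fluentd:v1.16   # reads the raw files and emits a standardized format
    volumeMounts:
    - name: logs
      mountPath: /logs
    - name: fluentd-config
      mountPath: /fluentd/etc     # default config directory of the official fluentd image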

Ambassador/Proxy

The Ambassador pattern, also called the Proxy pattern, is widely used to hide complexity and to proxy connections from the application container to external services (outside the pod). For example, if you have a microservices architecture in which services communicate with each other, you will probably want to use this pattern. With it in place, the main application is no longer responsible for connecting directly to external services: since the containers in a pod share the same network namespace, the main container reaches the ambassador on localhost, and the ambassador in turn resolves the other microservices by DNS name and relays requests to them.

Another example of the Proxy container pattern is a main container that talks to multiple Redis cache servers. Here the pod holds a main container and an ambassador container: the ambassador listens for connections on localhost and relays each one to one of the Redis servers.
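
The original post does not include a manifest for this example either, so here is a minimal sketch, assuming an HAProxy-based ambassador in TCP mode and two hypothetical Redis hosts (redis-1, redis-2); all names and images are illustrative assumptions:

apiVersion: v1
kind: ConfigMap
metadata:
  name: redis-ambassador-haproxy   # assumed name; holds the proxy configuration
data:
  haproxy.cfg: |
    defaults
      mode tcp
      timeout connect 5s
      timeout client  1m
      timeout server  1m
    frontend redis_in
      bind *:6379
      default_backend redis_servers
    backend redis_servers
      server redis1 redis-1:6379 check
      server redis2 redis-2:6379 check
---
apiVersion: v1
kind: Pod
metadata:
  name: myapp-with-redis-ambassador
spec:
  volumes:
  - name: haproxy-config
    configMap:
      name: redis-ambassador-haproxy
  containers:
  - name: myapp
    image: registry.example.com/myapp:latest   # assumed application image
    env:
    - name: REDIS_HOST
      value: "localhost"   # containers in a pod share the loopback interface
    - name: REDIS_PORT
      value: "6379"
  - name: redis-ambassador
    image: haproxy:2.8
    volumeMounts:
    - name: haproxy-config
      mountPath: /usr/local/etc/haproxy   # default config path of the official haproxy image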

Blog by Mujo Hadzimehanovic, Software Engineer at Atlantbh

Originally published at https://www.atlantbh.com on February 16, 2022.
