Microservice Architecture: Sidecar Pattern
In a microservice architecture, it is very common for multiple services/apps to require common functionality such as logging, configuration, monitoring, and networking. This functionality can either be implemented within the application itself or run as a separate service in its own container.
Implementing Core logic and supporting functionality within the same application:
When they are implemented in the same application, they are tightly coupled and run within the same process, making efficient use of shared resources. In this case, however, the components are not well segregated: they are interdependent, so a failure in one component can impact other components or even the entire application.
Implementing Core logic and supporting functionality in a separate application:
When the application is segregated into services, each service can be developed with the languages and technologies best suited to its functionality. In this case, each service has its own dependencies/libraries for accessing the underlying platform and the resources shared with the primary application. However, deploying the two applications on different hosts adds latency, as well as complexity in hosting, deployment, and management.
Sidecar Pattern (or Sidekick Pattern)
The sidecar concept is becoming popular in Kubernetes. It is a common principle in the container world that a container should address a single concern, and do it well. The sidecar pattern follows this principle by decoupling the core business logic from additional tasks that extend the original functionality.
The sidecar pattern is a single-node pattern made up of two containers.
The first is the application container, which contains the core logic of the application (the primary application). Without this container, the application wouldn’t exist.
In addition, there is a sidecar container that extends/enhances the functionality of the primary application by running in parallel in the same container group (Pod). Since the sidecar runs in the same Pod as the main application container, it shares resources with it — filesystem, disk, network, etc.
The pattern also allows components of the same application (possibly implemented with different technologies) to be deployed in separate, isolated, and encapsulated containers. It proves extremely useful when there is an advantage to sharing common components across a microservice architecture (e.g., logging, monitoring, configuration properties).
What is a Pod?
A Pod is the basic atomic unit of deployment in Kubernetes (K8s).
In K8s, a pod is a group of one or more containers with shared storage and network. A sidecar acts as a utility container in a pod and is loosely coupled to the main application container. Loosely speaking, a Pod can be compared to a consumer group (in Kafka terms): a single unit that runs multiple containers together.
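As a minimal sketch, a Pod manifest with a primary application container and a sidecar sharing a volume might look like the following (all names and images are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-sidecar     # illustrative name
spec:
  containers:
  - name: main-app           # primary application container (core logic)
    image: my-app:1.0        # hypothetical image
    volumeMounts:
    - name: shared-data
      mountPath: /var/data
  - name: sidecar            # utility container, loosely coupled to main-app
    image: my-sidecar:1.0    # hypothetical image
    volumeMounts:
    - name: shared-data
      mountPath: /var/data
  volumes:
  - name: shared-data
    emptyDir: {}             # ephemeral volume shared by both containers
```

Both containers also share the pod's network namespace, so they can reach each other over localhost.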
When is the Sidecar pattern useful?
- When the services/components are implemented with multiple languages or technologies.
- A service/component must be co-located in the same container group (pod) or on the same host as the primary application.
- A service/component is owned by a remote team or a different organization.
- A service can be updated independently of the primary application but shares its lifecycle.
- If we need control over resource limits for a component or service.
Examples:
1. Adding HTTPS to a Legacy Service
2. Dynamic Configuration with Sidecars
3. Log Aggregator with Sidecar
Adding HTTPS to a Legacy Service
Consider a legacy web service that serves requests over unencrypted HTTP. We have a requirement to enhance this legacy system to serve requests over HTTPS in the future.
The legacy app is configured to serve requests exclusively on localhost, which means that only services sharing the local network with the server can access the legacy application. In addition to the main container (the legacy app), we can add an Nginx sidecar container that runs in the same network namespace as the main container, so it can access the service running on localhost.
At the same time, Nginx terminates HTTPS traffic on the external IP address of the pod and delegates that traffic to the legacy application.
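A minimal sketch of the Nginx configuration for such a sidecar, assuming the legacy app listens on localhost port 8080 (the port, certificate paths, and mount locations are illustrative):

```nginx
# Runs in the sidecar container, which shares the pod's network namespace.
server {
    listen 443 ssl;                              # terminate HTTPS on the pod's IP
    ssl_certificate     /etc/nginx/ssl/tls.crt;  # e.g. mounted from a Secret
    ssl_certificate_key /etc/nginx/ssl/tls.key;

    location / {
        # The legacy app is reachable over the shared localhost.
        proxy_pass http://localhost:8080;
    }
}
```

The legacy application itself remains untouched; all TLS handling lives in the sidecar.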
Dynamic Configuration with Sidecars
When the legacy app starts, it loads its configuration from the filesystem.
When the configuration manager starts, it examines the differences between the configuration stored on the local filesystem and the configuration stored in the cloud. If there are differences, the configuration manager downloads the new configuration to the local filesystem and notifies the legacy app to reconfigure itself with the new configuration (e.g., via an event-driven (EDD) or orchestration mechanism to pick up the new configuration).
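The configuration manager's core loop can be sketched as follows — a hedged illustration only, with the remote fetch and the app notification stubbed out (in practice these might be a cloud API call and a SIGHUP or a call to a reload endpoint; all names are illustrative):

```python
import hashlib
import json

def config_hash(config: dict) -> str:
    """Stable hash of a configuration dict, for change detection."""
    return hashlib.sha256(json.dumps(config, sort_keys=True).encode()).hexdigest()

def sync_config(local: dict, fetch_remote, notify_app) -> dict:
    """Compare the local config with the remote copy; on a difference,
    adopt the new config and notify the legacy app to reconfigure."""
    remote = fetch_remote()
    if config_hash(local) != config_hash(remote):
        notify_app()      # e.g. send SIGHUP or hit a /reload endpoint
        return remote     # new config, to be written to the shared filesystem
    return local          # unchanged, nothing to do

# Simulated usage: the remote config differs, so the app is asked to reload.
events = []
local_cfg = {"log_level": "info"}
remote_cfg = {"log_level": "debug"}
updated = sync_config(local_cfg, lambda: remote_cfg, lambda: events.append("reload"))
```

In a real sidecar this check would run periodically (or react to change events), and `fetch_remote` would talk to the cloud configuration store.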
Log Aggregator with Sidecar
Consider a web server that generates access/error logs which are not critical enough to be persisted on a volume beyond a specific time interval or amount of storage. However, those access/error logs help in debugging the application for errors and bugs.
Following the separation-of-concerns principle, we can implement the sidecar pattern by deploying a separate container that captures the access/error logs from the web server and transfers them to a log aggregator.
The web server does its job well (serving client requests), while the sidecar container handles the access/error logs. Since the containers run in the same pod, we can use a shared volume to read and write the logs.
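A sketch of such a pod, with the web server writing logs to a shared volume and a sidecar shipping them to the aggregator (images and paths are illustrative; the sidecar could run Fluentd, Filebeat, or any log forwarder):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: webserver-with-log-sidecar   # illustrative name
spec:
  containers:
  - name: webserver
    image: nginx:1.25                # writes access/error logs under /var/log/nginx
    volumeMounts:
    - name: logs
      mountPath: /var/log/nginx
  - name: log-shipper                # sidecar: reads the shared logs and forwards them
    image: fluent/fluentd:v1.16      # example log-forwarding image
    volumeMounts:
    - name: logs
      mountPath: /var/log/nginx
      readOnly: true
  volumes:
  - name: logs
    emptyDir: {}                     # shared scratch volume; not persisted beyond the pod
```

Using `emptyDir` matches the requirement above: the logs live only as long as the pod does, and the sidecar ships anything worth keeping to the aggregator.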