Monitoring Containers

Container technology is making its way into the enterprise data environment at a record pace. Indeed, the ease with which container platforms like Docker can be deployed into both production and support infrastructure suggests they will become the dominant architecture for next-generation services and microservices.

But since containers represent a new twist on infrastructure abstraction, capable of functioning both within and around the now ubiquitous virtual machine, issues surrounding visibility and monitoring will likely vex IT executives as reliance on the technology grows.

Challenges in Container Monitoring

The challenges to proper container monitoring are substantial, but not insurmountable. As might be expected, traditional monitoring platforms, even those optimized for virtualization, do not provide adequate insight into containerized environments. In the first place, containers are highly ephemeral: they arise in seconds, quite possibly as the result of an automated process, and then disappear just as quickly. They also exist in a kind of no-man's land between the host and the application layer, which makes it difficult to see exactly how they are behaving, whether they are using resources efficiently, and whether they are delivering adequate performance.

According to cloud monitoring developer Datadog, the trap most organizations fall into is assuming that because a container is essentially a mini-host, simple host monitoring will do. That notion is quickly dispelled, however, once the number of containers starts to scale and the traditional host-monitoring solution collapses under the weight.

Introducing the Layer-Based Approach

What is needed is a layer-based approach that allows you to see how issues in one layer of the stack affect performance in the others. This is best accomplished through container tags similar to those provided by AWS and many server automation tools. In this way, the monitoring system can focus on all elements of the stack that share a common tag, providing an environment that is highly queryable and that supports the extreme scale and dynamism of containerized workloads.
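
To make this concrete, here is a minimal sketch of tag-based filtering using the Docker SDK for Python; the "service" label key and the "checkout-api" value are hypothetical examples of a tagging scheme, not a prescribed convention.

```python
# A minimal sketch of tag-based filtering, assuming the Docker SDK for
# Python (the "docker" package); the "service" label key and the
# "checkout-api" value are illustrative assumptions, not a required scheme.
import docker

client = docker.from_env()

def containers_for_tag(label_key, label_value):
    """Return all running containers that share a common label (tag)."""
    return client.containers.list(
        filters={"label": f"{label_key}={label_value}"}
    )

# Query every element of the stack carrying the same service tag,
# regardless of which host process spawned it or how recently it appeared.
for container in containers_for_tag("service", "checkout-api"):
    print(container.name, container.short_id, container.status)
```

In practice the label would be applied when the container is launched, for example with Docker's --label flag, so that every element of a service carries the same tag from the moment it appears.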

But what metrics should be used to gauge container health? With each container functioning as a discrete entity and performing unique tasks, how can the enterprise establish a common monitoring framework that satisfies all use cases?

Sematext’s Stefan Thies says the key is to focus on resource consumption. In this way, the enterprise can maintain efficient operations on private infrastructure while keeping costs low in the cloud. A leading metric is host CPU consumption, which should be neither so high that it diminishes the performance of other containers nor so low that compute cycles are being wasted.
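
For illustration, the following sketch derives a rough per-container CPU percentage from the stats JSON the Docker daemon reports. The field names reflect what Linux hosts typically return and should be treated as assumptions to verify against your daemon version.

```python
# A rough sketch of per-container CPU utilization derived from the stats
# JSON reported by the Docker daemon (field names as commonly seen on
# Linux hosts; treat them as assumptions for your daemon version).
import docker

client = docker.from_env()

def cpu_percent(container):
    """Approximate one container's CPU usage as a share of host CPU."""
    stats = container.stats(stream=False)  # one-shot stats snapshot
    cpu = stats["cpu_stats"]
    precpu = stats["precpu_stats"]
    cpu_delta = cpu["cpu_usage"]["total_usage"] - precpu["cpu_usage"]["total_usage"]
    system_delta = cpu.get("system_cpu_usage", 0) - precpu.get("system_cpu_usage", 0)
    online_cpus = cpu.get("online_cpus", 1)
    if system_delta <= 0:
        return 0.0
    return (cpu_delta / system_delta) * online_cpus * 100.0

for c in client.containers.list():
    print(f"{c.name}: {cpu_percent(c):.1f}% CPU")
```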

Host memory utilization is another important metric, particularly in cluster environments like Docker Swarm. It is crucial in determining the size and number of container hosts within the cluster, as well as whether memory availability is becoming too tight to support additional containers. In a related vein, host disk space matters because of the various levels of persistent storage required by any given container set. By keeping tabs on disk availability and employing state-of-the-art storage management, the enterprise should have little trouble maintaining continuous operations within and between container hosts.
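
A simple host-level check along these lines might look like the following sketch, which assumes the psutil package; the thresholds and the data-root path are illustrative placeholders rather than vendor recommendations.

```python
# A simple host-level headroom check, assuming the psutil package; the
# thresholds are illustrative placeholders, not vendor recommendations.
import psutil

MEMORY_ALERT_PCT = 80.0   # hypothetical ceiling before deferring new containers
DISK_ALERT_PCT = 90.0     # hypothetical ceiling for persistent-storage pressure

def host_headroom(path="/"):
    """Report whether the host can comfortably take on more containers.

    Pass the Docker data root (commonly /var/lib/docker) as `path` if it
    lives on a separate volume from the root filesystem.
    """
    mem = psutil.virtual_memory()
    disk = psutil.disk_usage(path)
    return {
        "memory_used_pct": mem.percent,
        "memory_tight": mem.percent >= MEMORY_ALERT_PCT,
        "disk_used_pct": disk.percent,
        "disk_tight": disk.percent >= DISK_ALERT_PCT,
    }

print(host_headroom())
```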

Assessing the overall requirements of container monitoring is one thing, but at some point the enterprise has to deploy a working solution. And while there are many options, each of which will function better in some environments than others, here are some of the leading platforms to date:

Prometheus — One of the leading open source solutions, Prometheus uses a dimensional data model based on time-stamped value streams to enable flexible querying and rich visualization. It also provides extensive third-party metric integrations with Docker, HAProxy, StatsD and other providers. (A minimal exporter sketch follows this list.)

Dynatrace — A specialist in the broader field of application performance management, Dynatrace provides native Docker support and transparent monitoring of containerized processes. As well, the platform offers native detection of microservice management tools and libraries like Hystrix to reduce complexity and overhead.

SignalFx — An early supporter of Docker, SignalFx leverages the JSON object produced by the Docker daemon, which contains CPU, memory, I/O and other metrics. In this way, the platform can keep tabs on existing containers and automatically track new ones as they are created.

Sysdig — Another open source solution, Sysdig provides universal visibility and system-level exploration for Linux instances. With recently added container support, the system captures system state and activity data without plug-ins, instrumentation or configuration steps.

Heapster — A GitHub community project, Heapster is a container cluster monitoring and analysis tool that supports Kubernetes and CoreOS. It collects multiple data sets, such as resource utilization and lifecycle events, and exposes them through REST endpoints for use in various management platforms and storage backends.
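
To give a flavor of how a pull-based platform like Prometheus consumes container data, here is a minimal sketch that exposes per-container memory usage in a format a Prometheus server can scrape. It assumes the prometheus_client and docker Python packages, and the metric and label names are illustrative assumptions, not part of any official integration.

```python
# A minimal sketch of exposing per-container memory usage for Prometheus
# to scrape, assuming the prometheus_client and docker packages; the
# metric name, label, port, and interval are illustrative assumptions.
import time

import docker
from prometheus_client import Gauge, start_http_server

client = docker.from_env()
mem_gauge = Gauge(
    "container_memory_usage_bytes_sketch",
    "Memory usage reported by the Docker stats API",
    ["name"],
)

def collect():
    """Refresh the gauge with the latest memory usage of each container."""
    for c in client.containers.list():
        stats = c.stats(stream=False)
        usage = stats.get("memory_stats", {}).get("usage", 0)
        mem_gauge.labels(name=c.name).set(usage)

if __name__ == "__main__":
    start_http_server(8000)      # Prometheus scrapes http://host:8000/metrics
    while True:
        collect()
        time.sleep(15)           # roughly aligned with a typical scrape interval
```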

Container monitoring is not an overly burdensome task, particularly in highly automated environments, but it is something that is best incorporated as a core function at the initial deployment of the container environment, not added on later. At the same time, however, recognize that container development is happening rather quickly and on many different tracks, so it is important to implement a monitoring solution that can grow and change with the underlying container architecture.

Containers represent a radically new way to support emerging applications, particularly mobile services that are becoming increasingly popular with users. They can certainly help the enterprise do more with less infrastructure, but they need to be monitored closely in order to provide truly optimal performance.
