Deployment of full-scale ELK stack to Kubernetes

Vladimir Fedak
HackerNoon.com
2 min read · Mar 12, 2018


Elasticsearch, Logstash, and Kibana, known as the ELK stack or Elastic Stack, are the tools of the trade for log aggregation and analysis. As these DevOps services are among the most frequently requested, we automated their deployment with our tool, available on GitHub.

Approximate scheme of the ELK stack:

These manifests DO NOT include the Filebeat installation! Refer to the official Filebeat configuration documentation.

Configuring a new ELK installation

This installation suits a Kubernetes-on-AWS deployment. The `elasticsearch` namespace is used by default. Elasticsearch comes with two endpoints, external and internal; use both or drop the unnecessary one.

1. Clone the https://github.com/ITSvitCo/aws-k8s repository

2. Create a StorageClass to allow Kubernetes to provision AWS EBS volumes.
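A StorageClass for dynamically provisioned EBS volumes might look like the following sketch; the class name and volume type are illustrative, and the actual manifest in the repository may differ:

```yaml
# Illustrative StorageClass using the in-tree AWS EBS provisioner
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: standard
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2   # general-purpose SSD volumes
```

It is applied once per cluster with `kubectl apply -f`, after which PersistentVolumeClaims referencing the class get EBS volumes created on demand.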

3. Launch the HA Elasticsearch cluster: 2 Elasticsearch master nodes, 2 client nodes, and 3 data nodes.
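The deployment order might look like the following sketch; the manifest file names are illustrative, and the actual names in the aws-k8s repository may differ:

```shell
# Create the default namespace used by the manifests
kubectl create namespace elasticsearch

# Apply the Elasticsearch manifests (illustrative file names)
kubectl apply -f es-master.yaml   # 2 master nodes
kubectl apply -f es-client.yaml   # 2 client nodes
kubectl apply -f es-data.yaml     # 3 data nodes
```

Masters handle cluster coordination, clients serve queries and ingest, and data nodes hold the indices, which is why they scale independently.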

Customizing Logstash

1. If you need to store data in various indices, create a new manifest for Logstash, e.g. by making a copy of the existing manifest logstash-application.yaml.

2. Set the required index name in the output section:
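The output section of the Logstash pipeline configuration might look like this sketch; the `hosts` value is an assumption, while `new_index` is the placeholder index name used below:

```
output {
  elasticsearch {
    # Illustrative service address of the internal Elasticsearch endpoint
    hosts => ["elasticsearch:9200"]
    # Daily index named after the required index name
    index => "new_index-%{+YYYY.MM.dd}"
  }
}
```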

where `new_index` is the required index name.

3. Run this command to deploy the new Logstash:
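Assuming the copied manifest from step 1 was saved as `logstash-new.yaml` (the file name is illustrative), the deployment might look like:

```shell
# Deploy the customized Logstash into the default namespace
kubectl apply -f logstash-new.yaml --namespace elasticsearch
```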

Summary

We successfully use this DevOps solution as part of a data analysis and processing system. Here is an example of a running solution:

This is yet another neat module from a collection of custom-tailored IT Svit DevOps tools, which ensures a quick and simple deployment of a full-cycle ELK stack to Kubernetes.

This story was originally published on my company’s blog — https://itsvit.com/blog/deployment-elk-stack-kubernetes-single-command/
