Logging to AWS Elasticsearch Service from Kubernetes

William Broach
2 min read · Jun 9, 2017


In this guide we’re going to set up Amazon’s Elasticsearch Service and forward logs to it from our Kubernetes cluster.

We will be using fluentd with the “aws_elasticsearch_plugin” to accomplish this.

Step 1: Create an IAM user called “elasticsearch” (choose: AWS Programmatic Access) and download the credentials.
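If you prefer the command line, roughly the equivalent with the AWS CLI (assuming it is already configured with an account that can manage IAM) looks like this — note the AccessKeyId and SecretAccessKey in the second command’s output:

# create the user, then generate an access key pair for it
aws iam create-user --user-name elasticsearch
aws iam create-access-key --user-name elasticsearch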

Step 2: Create an Amazon Elasticsearch Service instance (Dashboard: Services -> Analytics -> Elasticsearch Service). For this example we’re only going to use a single instance; feel free to choose whatever fits your needs best.

Note: if your HTTP payloads will be larger than 10 MB, then the smallest instance size you can use is m3.xlarge.
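If you would rather script the domain creation instead of using the console, a minimal single-node sketch with the AWS CLI would look something like the following (the domain name, instance type, and volume size are placeholders for whatever you chose above):

# create a single-node Elasticsearch Service domain with EBS storage
aws es create-elasticsearch-domain \
  --domain-name kube-logs \
  --elasticsearch-cluster-config InstanceType=m3.xlarge.elasticsearch,InstanceCount=1 \
  --ebs-options EBSEnabled=true,VolumeType=gp2,VolumeSize=20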

Step 3: Modify the access policy of the instance to allow the “elasticsearch” user, the NAT gateways of your Kubernetes cluster (this assumes a private topology), and any IP address you wish to access the Kibana dashboard from.

Example policy below:
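The policy embedded in the original post isn’t reproduced here, but a sketch along these lines matches the description above — the account ID, domain name, region, and IP addresses are placeholders you’ll need to fill in. The first statement allows the “elasticsearch” IAM user, the second allows the NAT gateway IPs and any IPs you want to reach Kibana from:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::<account-id>:user/elasticsearch"
      },
      "Action": "es:*",
      "Resource": "arn:aws:es:us-east-1:<account-id>:domain/<domain-name>/*"
    },
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "*"
      },
      "Action": "es:*",
      "Resource": "arn:aws:es:us-east-1:<account-id>:domain/<domain-name>/*",
      "Condition": {
        "IpAddress": {
          "aws:SourceIp": [
            "<nat-gateway-ip>",
            "<ip-to-access-kibana-from>"
          ]
        }
      }
    }
  ]
}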

Step 4: Modify the fluentd-aws-es-daemonset.yml below to add the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY for the “elasticsearch” user you made in Step 1, along with a few other variables: AWS_REGION and AWS_ELASTICSEARCH_URL (this is the endpoint that’s generated once the instance is created).
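The full manifest comes from the gist embedded in the original post; the relevant part of the fluentd container spec looks roughly like this (a sketch only — the values are placeholders for your own credentials, region, and endpoint):

# env block for the fluentd container inside the DaemonSet pod spec
env:
  - name: AWS_ACCESS_KEY_ID
    value: "<access-key-id-from-step-1>"
  - name: AWS_SECRET_ACCESS_KEY
    value: "<secret-access-key-from-step-1>"
  - name: AWS_REGION
    value: "us-east-1"
  - name: AWS_ELASTICSEARCH_URL
    value: "search-<name>-xxxxxx.us-east-1.es.amazonaws.com"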

Step 5: Launch the DaemonSet into Kubernetes:

kubectl create -f fluentd-aws-es-daemonset.yml

Logs should now be flowing from all pods into Amazon Elasticsearch Service.
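If you want to double-check from the command line first, you can list the indices on the domain from an IP address that the access policy allows (the fluentd Elasticsearch output typically writes logstash-YYYY.MM.DD indices):

# confirm that fluentd is creating indices on the domain
curl "https://search-<name>-xxxxxx.us-east-1.es.amazonaws.com/_cat/indices?v"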

You can browse to your Kibana endpoint URL and take a look: search-<name>-xxxxxx.us-east-1.es.amazonaws.com/_plugin/kibana/

Thanks to https://github.com/cheungpat for providing the Docker image containing fluentd and the AWS Elasticsearch plugin.
