A different kind of spout

Deploying the ELK stack on Amazon ECS, Part 6: Logspout

Igor Kantor
4 min read · Aug 4, 2017


NOTE: since this article was written, Docker has released an awslogs logging driver. This driver will auto-magically stream your containers’ stdout logs to CloudWatch. You can then stream them directly to Elasticsearch with an Amazon-supplied Lambda subscription.

Therefore, awslogs is now the preferred way to collect Docker container logs in the AWS ecosystem.

I’m leaving this article up for legacy reasons, but please use awslogs instead of logspout.
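For reference, here is a minimal docker run sketch using the awslogs driver (the region, log group name, and image are placeholders; in ECS you would set the equivalent logConfiguration in the task definition, and the instance role needs permission to write to CloudWatch Logs):

docker run --log-driver=awslogs \
  --log-opt awslogs-region=us-east-1 \
  --log-opt awslogs-group=my-app-logs \
  my-app:latest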

Alternatively, you can use the syslog log driver to send syslog messages to your logstash containers, and from there to Elasticsearch.
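Again, purely as a sketch with a placeholder hostname and port, the syslog route looks something like this:

docker run --log-driver=syslog \
  --log-opt syslog-address=tcp://logstash.example.com:5000 \
  my-app:latest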

Either way is fine.

This is Part 6 of a multi-part series on how to deploy a containerized, production-ready ELK stack in Amazon’s EC2 container service.

Please refer to Part 1, Part 2, Part 3, Part 4, and Part 5 for the previous tutorials.

In this tutorial, we will configure and deploy a Logspout container to our previously created ECS cluster.

For more details on what Logspout is and how it works, please see the official repo. In short, it grabs the logs of all the Docker containers running on the same machine as logspout and sends them to a centralized location.
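If you want to see it in action before wiring it into ECS, you can run the stock image locally, roughly as the official repo’s quick start shows (the syslog endpoint below is a placeholder):

docker run --name logspout \
  --volume=/var/run/docker.sock:/var/run/docker.sock \
  gliderlabs/logspout \
  syslog+tcp://logs.example.com:514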

This is useful because, as a matter of sound architectural principles, we want to:

  • externalize our logs away from the ephemeral containers, and
  • decouple log shipping from the Docker application itself

In other words, we need the ability to collect all the logs from all the Docker containers and ship them to a centralized location for viewing, troubleshooting, and trend analysis. You do NOT want to run around the containers looking for that elusive log file, especially when containers come and go at various intervals!

OK, let’s get started.

First, clone the official logspout repo

git clone https://github.com/gliderlabs/logspout

We have to customize the logspout container because the logstash output adapter is not a built-in module. Thankfully, it’s fairly easy to add logstash support.

Navigate to the custom directory under logspout and edit the modules.go file:

package main

import (
    _ "github.com/gliderlabs/logspout/adapters/syslog"
    _ "github.com/gliderlabs/logspout/transports/tcp"
    _ "github.com/gliderlabs/logspout/transports/tls"
    _ "github.com/gliderlabs/logspout/transports/udp"
    _ "github.com/looplab/logspout-logstash"
)

The last line is what I am adding here. It ensures logspout can write data to a logstash server, or, more accurately, to the logstash ELB we created previously.

Next, navigate to your AWS ECR page and create a new logspout repository. It will hold our customized logspout images.
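If you prefer the CLI over the console, the equivalent command (using the us-east-1 region from the rest of this series) is:

aws ecr create-repository --repository-name logspout --region us-east-1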

We are now ready to build our custom logspout container, using the same bakeandpush.sh script we used previously

#!/bin/bash
REPO_NAME=$1
ECR_URL=31415926.dkr.ecr.us-east-1.amazonaws.com

if [ $# -ne 1 ]; then
  echo "$0: usage: $0 REPO_NAME"
  exit 1
fi

# Log in to ECR, then build, tag, and push the image
$(aws ecr get-login --region us-east-1)
docker build -t "$REPO_NAME" .
docker tag "$REPO_NAME":latest "$ECR_URL"/"$REPO_NAME":latest
docker push "$ECR_URL"/"$REPO_NAME":latest
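For example, assuming you run it from the logspout custom directory (where the Dockerfile and modules.go live) and pass the name of the ECR repository we just created:

./bakeandpush.sh logspout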

Run the script above and watch it build, tag and push our customized logspout container to AWS ECR:

Flag --email has been deprecated, will be removed in 17.06.
Login Succeeded
Sending build context to Docker daemon 5.632kB
Step 1/2 : FROM gliderlabs/logspout:master
# Executing 3 build triggers...
Step 1/1 : COPY ./build.sh /src/build.sh
---> Using cache
Step 1/1 : COPY ./modules.go /src/modules.go
---> Using cache
Step 1/1 : RUN cd /src && ./build.sh "$(cat VERSION)-custom"
---> Using cache
---> 6263e6a77084
Step 2/2 : ENV SYSLOG_FORMAT rfc3164
---> Using cache
---> e1a86d0b134b
Successfully built e1a86d0b134b
Successfully tagged logspout:latest
The push refers to a repository [31415926.dkr.ecr.us-east-1.amazonaws.com/logspout]
41af7c94c771: Layer already exists
9d2304b7402a: Layer already exists
8768cd7370ba: Layer already exists
09a91adb6384: Layer already exists
bca34dac20f0: Layer already exists
e154057080f4: Layer already exists
latest: digest: sha256:08fdc085892015bd4fac7a01ae0952633e832b47d4835dde4b0d73a7c4d420e7 size: 1573

That’s it for the logspout docker container!

Next, we will deploy it to our ECS cluster.

The task definition is the first step. By now, you should be fairly proficient at creating a new task definition, so I will simply reproduce a screenshot of a working definition:

A working logspout task definition

Items of note:

  • Image: points to the logspout ECR URL
  • Soft memory limit: I think 512 MB is enough but please tweak as needed
  • ROUTE_URIS: we need this environment variable to point to our logstash ELB. Note the logstash+tcp syntax, the ELB URL, and the port; these settings match the previously created ELB (see the example after this list)
  • Volumes: a dockersock volume pointing to the /var/run/docker.sock socket. This is needed to grab the stdout logs from all containers.
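Put together, the task definition is roughly equivalent to the following docker run command. The ELB hostname and port here are placeholders; substitute the DNS name and port of the logstash ELB you created earlier:

docker run -d --name logspout \
  -e ROUTE_URIS="logstash+tcp://internal-logstash-elb-123456.us-east-1.elb.amazonaws.com:5000" \
  -v /var/run/docker.sock:/var/run/docker.sock \
  31415926.dkr.ecr.us-east-1.amazonaws.com/logspout:latest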

NOTE: Logspout grabs stdout output only. This is considered a Docker best practice. Please ensure your applications log to stdout, not to a file inside the container.
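If an application insists on writing to a file, a common workaround (the official nginx image does this, for example) is to symlink the log files to stdout and stderr; in a Dockerfile, this would go in a RUN step:

ln -sf /dev/stdout /var/log/nginx/access.log
ln -sf /dev/stderr /var/log/nginx/error.log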

The only thing left to do is create an ECS service. Luckily for us, the logspout service is simple: no load balancer is needed (logspout does not accept inbound traffic). The only thing we need to ensure is that we run exactly one logspout task per ECS instance.

Save the service and set the desired count to match the number of instances in the cluster.
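If you prefer the CLI, a hypothetical equivalent (the cluster and service names here are made up, and the desired count assumes a three-instance cluster) can also use a distinctInstance placement constraint so that no instance ever runs two copies:

aws ecs create-service \
  --cluster elk-cluster \
  --service-name logspout \
  --task-definition logspout \
  --desired-count 3 \
  --placement-constraints type=distinctInstance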

If all went well, you should see your logs in your Kibana UI. The index name will match the index configured in your logstash pipeline.
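If nothing shows up, a quick sanity check is to list the indices and confirm the logstash index is being created (substitute your own Elasticsearch endpoint, and run this from somewhere that can reach it):

curl -s "https://your-elasticsearch-endpoint/_cat/indices?v"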

Thank you for reading.

Hopefully, you enjoyed this tutorial!
