CoreOS logging to remote destinations

Push systemd journal logs to remote log destinations


When using CoreOS with multiple Docker containers on Google Compute Engine, I had a hard time finding a way to log from CoreOS to remote destinations, especially since CoreOS doesn’t come with a package manager, which means no ‘build-essentials’ and thus no building from source files either.

I ended up using something like this:

journalctl -o short -f | ncat <host> 12345

There are several service providers out there. I am pretty sure there are others, and I will happily add more.

Choose yours. I am using Logentries at the moment. The following guide is written for Logentries, but can easily be adapted for other providers as well.

Remote Destination:

Since IP addresses change for my GCE instances, I had to use a feature called token-based input over a single TCP connection. You basically put the token somewhere in every log message, and it gets filtered out automatically on their side again. Using awk to prepend the token:

$ journalctl -o short -f | awk '{ print "<token-here>", $0; fflush(); }' | ncat <host> 10000
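
To sanity-check the token-prepend step locally (with a made-up token and log line), you can run the awk part on its own:

```shell
# Hypothetical token and journal line; awk prepends the token to
# every input line, exactly as in the pipeline above.
echo "Oct 01 12:00:00 core-1 docker[42]: hello" \
  | awk '{ print "2bfbea1e-10ab-4f2b-beef-c0ffee000000", $0; fflush(); }'
# → 2bfbea1e-10ab-4f2b-beef-c0ffee000000 Oct 01 12:00:00 core-1 docker[42]: hello
```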

Get your token from Logentries and save it as a metadata field (i.e. using the web interface). You could save it as common metadata (see image below) or as custom metadata for one specific instance only. Using my script below, custom metadata will override common metadata. Add a field called ‘logentries-token’.
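
The precedence rule (custom overrides common) is simple to express. A minimal sketch, assuming the two values have already been fetched from the GCE metadata server; `pick_token` is a hypothetical helper, not part of any official tooling:

```shell
#!/bin/sh
# pick_token: echo the first non-empty value, i.e. the custom
# (per-instance) token wins over the common (project-wide) one.
pick_token() {
  for t in "$@"; do
    if [ -n "$t" ]; then
      echo "$t"
      return 0
    fi
  done
  return 1
}

# On a real instance, each value would come from the metadata server, e.g.:
#   curl -s -f -H "Metadata-Flavor: Google" \
#     http://metadata.google.internal/computeMetadata/v1/instance/attributes/logentries-token
custom=""            # no custom token set on this instance
common="common-123"  # project-wide 'logentries-token' value
pick_token "$custom" "$common"
# → common-123
```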

common metadata web interface view

Initialize GCE instances with Cloud-Config

CoreOS allows you to configure machine parameters, launch systemd units on startup and more via cloud-config. By using GCE’s user-data metafield, we are able to create and initialize a new GCE instance, or update a running instance, with cloud-config (changes become active after the next reboot). The user-data metafield is stored separately for every instance as custom metadata. It would be nice if GCE checked the common user-data metafield as well, so that newly created instances in a GCE project would automatically be provisioned with the cloud-config, but it seems this does not happen.

Let’s use cloud-config to start pushing logs to a remote destination on instance startup.

For more information about GCE and cloud-config, please read the CoreOS documentation.

When referring to cloud-config.yaml in the next steps, I mean the linked file, so please download it first and save it somewhere on your disk. The cloud-config.yaml is currently tailored for Logentries but can easily be modified to match other providers as well.
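
As a rough sketch of what such a cloud-config.yaml can look like (the unit name matches the logentries service used in the troubleshooting section below; the token and host are placeholders, and the ExecStart quoting may need adjusting for your setup):

```yaml
#cloud-config

coreos:
  units:
    - name: logentries.service
      command: start
      content: |
        [Unit]
        Description=Forward the systemd journal to Logentries
        After=network-online.target

        [Service]
        # <token> and <host> are placeholders for your token and
        # log endpoint; see the awk pipeline above.
        ExecStart=/bin/sh -c "journalctl -o short -f | awk '{ print \"<token>\", $$0; fflush(); }' | ncat <host> 10000"
        Restart=always
        RestartSec=10
```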

Create a new GCE instance

# cd to/directory/where/cloud-config.yaml/is/saved
$ gcutil --project=<project-id> addinstance <instance-name> \
--image=coreos-v286-0-0 \
--persistent_boot_disk \
--zone=us-central1-a \
--machine_type=n1-standard-1 \
--metadata_from_file=user-data:cloud-config.yaml

(see also the gcutil documentation)

Update an existing GCE instance

# cd to/directory/where/cloud-config.yaml/is/saved
$ gcutil --project=<project> getinstance <instance>
# look for 'metadata-fingerprint'
$ gcutil --project=<project> setinstancemetadata <instance> \
--metadata_from_file=user-data:cloud-config.yaml \
--fingerprint=<metadata-fingerprint>

(see also the gcutil documentation)

It is currently impossible to update the user-data metafield in the web interface, as it doesn’t provide multi-line input. I tried adding \n to the end of every line, but it gets escaped to \\n, leading to an “Unrecognized user-data header” error.

Troubleshooting in case something doesn’t work:

$ systemctl status logentries
$ sudo systemctl daemon-reload
$ sudo systemctl restart logentries

I am not using this in production yet, but the CPU load looks fine: sending 1000 log lines per minute didn’t show any measurable impact on CPU resources, running on a 2 vCPU, 1.8 GB memory GCE machine (n1-highcpu-2).

This article doesn’t cover logging from running docker containers; I am looking forward to seeing what solutions will be provided for this in the near future. The docker-dev Google Groups thread !topic/docker-dev/3paGTWD6xyw is a good read on this topic. At the moment, I mount /dev/log into a docker container.

Edit: I just found out that the same journalctl -f | … approach is used elsewhere as well.