Apex log

TJ Holowaychuk
Jan 14, 2016


Structured & centralized logging for Go.

If you know me you know I’ve written at least half a dozen logging packages, and although they’re arguably overdone, I still think logging is a critical piece of infrastructure. Chances are if you’re a software engineer, you’ll be staring at logs for half of your life, so it’s worth taking the time to discover the “perfect” logging solution.

Why structured?

The apex/log package is a structured logging package for Go, inspired by Logrus, one of Go’s most popular structured logging solutions.

I strongly believe that structured logging is the way to go for most, if not all, applications. If you’re not familiar with structured logging, the basic idea is that you have discrete fields which can be used later for querying and subsequent processing. You can search logs by a user id, the name of a file they uploaded, errors, response times, and so on. Not only is this great for logging, but it can also be used to produce metrics, or even to derive billing information.

Here’s what you might see with plain-text:

log.Infof("uploading %q for %s", filename, user.ID)

Here’s a structured alternative, where each field is captured on its own. Note that the log message itself does not contain dynamic content; this is important, as the message is the unit for filtering, aggregation, and so on. If the message were “upload %s” you would have a difficult time differentiating it from a message like “failed to upload %s”, so make sure these are machine friendly!

log.WithFields(log.Fields{
    "file": filename,
    "user": user.ID,
}).Info("upload")

Structured logging is also great for contextual logging; it becomes trivial to inherit fields without just prefixing a large string. Here’s an example of this: the log.WithFields() call returns an Entry, which has the same methods as the logger itself, so calls are chainable.
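A minimal sketch of inheriting fields, assuming the same hypothetical upload scenario as above:

ctx := log.WithFields(log.Fields{
    "file": filename,
    "user": user.ID,
})

ctx.Info("upload started")
// ... perform the upload ...
ctx.Info("upload complete")

Both entries carry the file and user fields without repeating them at each call site.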

Features

Apex/log supports the same basic interface and features you’ll be familiar with from Logrus. I’ve made some breaking changes, extended the API, simplified handlers, and provided built-in handlers for common use-cases.

  • WithField(key, value) for single fields
  • WithFields(fields) for multiple fields
  • Pluggable log handlers
  • Package singleton for convenience (see the setup sketch after these lists)

Some improvements have also been made:

  • Simplified handlers, no formatter/hook distinction
  • log.Interface shares the Logger and Entry method set
  • Built-in handlers (text, json, cli, discard, logfmt, memory, kinesis, multi)
  • Tracing support
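Putting the basics together, here’s a minimal setup sketch using the package singleton; the handler import path follows the package layout, and the field values are illustrative:

package main

import (
    "os"

    "github.com/apex/log"
    "github.com/apex/log/handlers/text"
)

func main() {
    // Route all package-level log calls through the text handler.
    log.SetHandler(text.New(os.Stderr))
    log.SetLevel(log.InfoLevel)

    log.WithField("user", "tobi").Info("hello")
}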

Handlers

A handler is the mechanism which processes log entries; without one, apex/log isn’t very useful. For example, you might log JSON to stdout, send logs to a syslog server, or log in a human-friendly format to stdout while also shipping to Kinesis for further processing. Handlers have the following interface:

type Handler interface {
    HandleLog(*Entry) error
}

Here’s what a “multi” handler might look like, allowing you to register multiple handlers:
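A minimal sketch, assuming the Handler interface above; the real built-in version ships as github.com/apex/log/handlers/multi:

package multi

import "github.com/apex/log"

// Handler fans entries out to multiple wrapped handlers.
type Handler struct {
    Handlers []log.Handler
}

// New registers the given handlers.
func New(h ...log.Handler) *Handler {
    return &Handler{Handlers: h}
}

// HandleLog passes the entry to each handler in turn, returning
// the first error encountered (a design choice for this sketch).
func (h *Handler) HandleLog(e *log.Entry) error {
    for _, handler := range h.Handlers {
        if err := handler.HandleLog(e); err != nil {
            return err
        }
    }
    return nil
}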

Built-in handlers

Out of the box the following handlers are provided:

  • discard: no-op, useful for silencing
  • text: human-friendly colored development output with a time delta in seconds
  • cli: human-friendly colored command-line output
  • json: outputs JSON to the given writer
  • multi: registers multiple handlers (usage example after this list)
  • memory: in-memory log store, useful for test assertions
  • logfmt: outputs logfmt to the given writer
  • kinesis: writes logs to AWS Kinesis as JSON
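For example, combining human-friendly and machine-readable output is just composition; a minimal sketch assuming the handler import paths above:

package main

import (
    "os"

    "github.com/apex/log"
    "github.com/apex/log/handlers/json"
    "github.com/apex/log/handlers/multi"
    "github.com/apex/log/handlers/text"
)

func main() {
    // Human-friendly output on stderr, JSON for shipping on stdout.
    log.SetHandler(multi.New(
        text.New(os.Stderr),
        json.New(os.Stdout),
    ))

    log.WithField("user", "tobi").Info("upload")
}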

Tracing

It’s not uncommon for a program to log some output before and after some work. If you’re logging errors as well, that’s three log calls you just added; if you’d like to track the execution time, that’s another line or two to produce the metrics. With apex/log you can simply call Trace(msg) and defer Stop(*error) to handle all four cases in a single line.
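A minimal sketch, assuming a hypothetical upload function; Trace logs when the work starts, and the deferred Stop logs again with the duration, at the error level if err is non-nil:

package main

import "github.com/apex/log"

// upload is hypothetical; the single deferred line covers start,
// completion, duration, and any error.
func upload(name string) (err error) {
    defer log.WithField("file", name).Trace("upload").Stop(&err)
    // ... perform the upload ...
    return nil
}

func main() {
    if err := upload("whiskey.flac"); err != nil {
        log.WithError(err).Fatal("upload failed")
    }
}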

With the text handler, this produces paired entries with a time delta; when an error occurs, Stop logs the entry at the error level with the error attached. The cli handler uses colors to indicate severity and ditches the time delta.

Centralization

As I’ve mentioned, I think structured data and centralization are vital to reasonably large production systems. I think this will be increasingly true as AWS Lambda, Docker, and other similar solutions become more widely adopted.

The problem

The simplest case is not so bad: you have an application and it sends its logs to some service:

Single application, single service

Add a few more services and configure them; it’s still not too bad, right?

Single application, multiple services

But wait! Now we want to introduce a few new services to collect metrics and logs. You could wrap all three into a library so that each application can use it, but what if you’re using multiple languages? You’ll also have to re-deploy each of them when there’s a change.

Multiple applications, multiple services

The solution

The solution is simple, and there’s nothing new here: plunk an intermediary into the picture and you have centralized logging. Now you can add, remove, re-configure, filter, analyze, store, alert, or aggregate without making the application (or your developers) do a single thing.

Multiple applications, multiple services centralized

Apex/log was designed for this kind of centralization, so what does this mean? Logrus produces JSON that looks something like the following, collapsing fields to the root level of the JSON object:

{
    "time": ...,
    "msg": ...,
    "level": ...,
    <field>: ...
}

This might not sound like a big deal, but if you want to serialize, ship, and deserialize JSON logs with the same schema, this sort of flattening is not very Go-friendly. To produce the same *logrus.Entry on the consumer side you’d have to unmarshal to a map, assign the three primary fields, then loop over the rest and assign those back to the entry. It’s not a huge problem, but it’s not ideal.

Apex/log simply marshals and unmarshals the Entry as-is, making it much nicer to work with over the wire. This simple change means you can use the same handlers seamlessly on the producer (app) side, or consumer (sink).
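A minimal sketch of the consumer side, assuming the JSON shape produced by marshaling the Entry type directly (field names taken from its JSON tags; the example line is illustrative):

package main

import (
    "encoding/json"
    "fmt"

    "github.com/apex/log"
)

func main() {
    // One line as shipped over the wire by a producer's json handler.
    line := []byte(`{"fields":{"file":"whiskey.flac","user":"tobi"},"level":"info","timestamp":"2016-01-14T00:00:00Z","message":"upload"}`)

    // The consumer unmarshals straight into an Entry; no re-mapping of keys.
    var e log.Entry
    if err := json.Unmarshal(line, &e); err != nil {
        panic(err)
    }

    fmt.Println(e.Message, e.Fields["user"], e.Level)
}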

I plan to release an AWS Lambda based collector in the near future, which will let you fan out to these services, as well as derive metrics from logs for StatHat, DataDog, and similar.

That’s it for now, stay tuned!
