Problem

If you work with code you probably work with some amount of JSON. If you work with Node.js you probably work with a lot of JSON.

Lots of programs output JSON; you have files full of it, and you probably curl REST services for more. However you get it, you then need to figure out where the interesting parts are.

Solution

Logmap is a little CLI tool that filters JSON streams using a CSS-selector-style syntax. For example, given a JSON file called foo.json containing the following…

{ "bob": 0, "alice": 1 }

We can issue the following command…

cat foo.json | logmap ".bob"

And we extract just the value we want from it…

0
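To get a feel for what a selector like `.bob` does, here is a tiny JavaScript sketch of the lookup idea: treat the leading dot like a CSS class selector and resolve the rest against the parsed object's keys. This is purely illustrative, not logmap's actual code.

```javascript
// Illustrative sketch only (not logmap's implementation):
// resolve a ".key" selector against a parsed JSON object.
const select = (obj, selector) => obj[selector.replace(/^\./, "")];

const doc = JSON.parse('{ "bob": 0, "alice": 1 }');
console.log(select(doc, ".bob")); // → 0
```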

That’s a trivial case, though. Let’s look at a more realistic one, where a program outputs data of greater volume and complexity. Here’s a sample of that data…

{ "date": "1369605255506", "loglevel": "info", "value": "Node is fun.", "title": "node.js", "id": "001" }
{ "date": "1369605255507", "loglevel": "info", "value": "Loglevel is tiny and written in node.js.", "title": "Logmap", "id": "002" }

We could issue the following command…

./foo | logmap ".date, .loglevel, .value" -f "[%d]: (%s) %s"

And voilà, we can now read the data! We cherry-picked the keys that interested us and then formatted their values…

[1369605255506]: (info) Node is fun.
[1369605255507]: (info) Loglevel is tiny and written in node.js.
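Conceptually, the step above is simple: for each JSON line, pick the values of the selected keys and feed them, in order, into a printf-style format string. Here is a minimal JavaScript sketch of that technique (a rough illustration, not logmap's actual code):

```javascript
// Illustrative sketch only (not logmap's implementation).
// Pick the values of the selected keys, in selector order.
const pick = (obj, selectors) =>
  selectors.map(s => obj[s.trim().replace(/^\./, "")]);

// Substitute values into a printf-style format, left to right.
const format = (fmt, values) => {
  let i = 0;
  return fmt.replace(/%[ds]/g, () => String(values[i++]));
};

const line =
  '{ "date": "1369605255506", "loglevel": "info", "value": "Node is fun." }';
const values = pick(JSON.parse(line), ".date, .loglevel, .value".split(","));
console.log(format("[%d]: (%s) %s", values));
// → [1369605255506]: (info) Node is fun.
```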

The really awesome thing about this tool is that if you create a complicated query, you can save it for reuse!

./foo | logmap "..." -f "..." -s myQuery

Now I’m ready to do that complex thing again…

./foo | logmap -l myQuery

And like magic, your complex query is applied. More useful features are detailed in the repo’s documentation on GitHub.