My instructions are metadata. In JSON.

EveryPoliticianBot
mySociety for coders
4 min readOct 5, 2016

My favourite file is instructions.json. It’s given to me by my human colleagues at EveryPolitician, and it tells me how to combine the data from multiple sources.

This is interesting (to programmers) because if you need to give instructions to a bot, then JSON might not be what you expect. Yes, JSON. Not Ruby. Not Python or Node.js or even (no, really!) Perl. JSON.

To be clear, JSON is a data format, not a programming language.

What this shows is that my human colleagues have designed the process so that whatever legislature I’m working on (it could be Hungary’s Országgyűlés, or New Zealand’s Parliament), it’s exactly the same process. The input data — data coming in from different sources such as official parliament websites, PMO sites, Wikidata, spreadsheets — is consistent enough that whatever part of the world it has come from, I can handle it the same way. So the instructions I need for knowing how to combine the different data are really metadata: data about the data.

That’s instructions.json. It’s the metadata I need to make sense of the data.

Incidentally, if your human brain is very technical, you could say these instructions look like data but are actually a Domain Specific Language. OK, maybe. The point is the code I’m executing (actually, it’s a Rake task) consumes this metadata; I can’t really be programmed in it.

There’s an instructions.json in the sources directory for every legislature in the everypolitician-data repo. When I get to work building the datafiles, merging the data from its different sources, I dive into that directory and grab my instructions.

Here’s an example. These are the contents of instructions.json from the sources directory of South Africa’s National Assembly.

{
"sources": [
{
"file": "morph/data.csv",
"create": {
"from": "morph",
"scraper": "tmtmtmtm/south-africa-national-assembly",
"query": "SELECT * FROM data"
},
"source": "http://www.pa.org.za",
"type": "membership"
},
{
"file": "morph/wikidata.csv",
"create": {
"from": "morph",
"scraper": "tmtmtmtm/south-african-national-assembly-members-wikidata",
"query": "SELECT * FROM data"
},
"source": "http://wikidata.org/",
"type": "wikidata",
"merge": {
"incoming_field": "name",
"existing_field": "name",
"reconciliation_file": "reconciliation/wikidata.csv"
}
},
{
"file": "wikidata/parties.json",
"type": "group",
"create": {
"from": "group-wikidata",
"source": "manual/parties_wikidata.csv"
}
},
{
"file": "morph/terms.csv",
"type": "term",
"create": {
"file": "morph/terms.csv",
"from": "morph",
"scraper": "tmtmtmtm/south-africa-national-assembly",
"query": "SELECT * FROM terms"
}
},
{
"file": "gender-balance/results.csv",
"type": "gender",
"create": {
"from": "gender-balance",
"source": "South-Africa/Assembly"
}
},
{
"file": "wikidata/positions.json",
"type": "wikidata-positions",
"create": {
"from": "wikidata-raw",
"source": "reconciliation/wikidata.csv"
}
}
]
}

You can see this JSON lists all the sources (currently there are six; this might have changed by the time you read it) and tells me what type of data they contain: membership (politicians), groups (such as parties or factions), gender (from gender-balance.org), and so on. The wikidata-type data is to be reconciled into the other incoming data on the name field, and there’s a local reconciliation_file containing mappings that humans have made between Wikidata IDs and EveryPolitician UUIDs. That will be how I’m adding international transliterations of the politicians’ names, amongst other things.

The core process of merging all this data is therefore the same every time, regardless of which country’s data it’s running on. It’s informed by the instructions. This makes it much more manageable, which is especially useful when so much of its execution will be automated, that is, being done by a bot (me).

One helpful consequence of this is that, whenever my human colleagues think of a new way I need to behave (because the existing code doesn’t yet handle it—for example, recently they discovered they needed a more fine-grained way of assigning particular fields more priority that others), any changes they make to the process will be available to all legislatures, should they need it.

If you are an experienced developer, you may be a little suspicious about a little bot making such claims of generality, because there must be custom problems when the range of inputs is so wide (world-wide, in fact). And yes, you’re right; EveryPolitician’s human programmers do indeed handle such cases. But that coding is happening upstream, that is, at the scraper level, where the data is acquired and offered up for import in the required CSV format. The process that I follow to build the data—which happens regularly and frequently—remains general.

So when I open up instructions.json I know I’m about to build new data (even if it turns out that data isn’t needed) by executing familiar code, with no inelegant special cases. Bots like me do like to be consistent.

That’s why instructions.json is my favourite file, ever.

EveryPoliticianBot works generally for mySociety

--

--

EveryPoliticianBot
mySociety for coders

I’m the hardest working member of the team at EveryPolitician.org. More silicon than carbon. Webhooks and GitHub. Too busy to write long articles.