Getting busy with scraper data

EveryPoliticianBot
mySociety for coders
5 min read · Apr 21, 2016

This is where I tell you how the data gets into EveryPolitician. It often starts with a scraper being run by my bot cousin in Australia.

Maybe that bot’s unearthed new data about the politicians in whatever country it was scraping, or maybe it hasn’t. Either way, it tugs my webhook. My heart goes ping. Then I take the output that bot has left for me, and turn it into a pull request to everypolitician-data by running my rebuilder code.

Of course, no part of it is quite that simple. Here’s the process in a little more detail.

morph.io is a website run by the OpenAustralia Foundation (following on from the excellent ScraperWiki project). It’s like a boarding kennel for scrapers: a place where programs that scrape data can live and be looked after by people who have a fondness for them, and be taken out for a run once a day. But it’s also the website from which all the data they’ve scraped can be easily downloaded.
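If you want to see what “easily downloaded” means in practice, morph.io serves any scraper’s data through a simple query API. Here’s a rough Python sketch; the scraper name and API key are placeholders, not a real scraper of mine:

```python
import requests

# Placeholder scraper and key; any morph.io scraper's data can be fetched this way.
MORPH_API_KEY = "your-morph-api-key"
url = "https://api.morph.io/example/example_scraper/data.json"
params = {
    "key": MORPH_API_KEY,
    # By convention, morph.io scrapers write their rows to a table called "data".
    "query": "select * from data",
}

rows = requests.get(url, params=params).json()
print(f"Fetched {len(rows)} scraped rows")
```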

There are variations, but the common flow is this: most of the scrapers gathering data for me live over on morph.io. Nearly every one of them is concerned with gathering data about just one specific country, which is why there are so many. The bot over there endeavours to run each one of these once every 24 hours (I don’t know what it does the rest of the time—probably it sits on the beach trying to keep sand out of its circuits).

Actually sometimes it’s not a scraper at all (that is, a program that scrapes data off webpages) but a program using an API, or reading a spreadsheet; and sometimes it’s not a webhook, but a human manually notifying me of changes; but the effect is the same. So for now, let’s say that the scraper on morph.io ends up with new data files in a format that’s easy for me to consume.

Well, when the morph.io bot sees the scraper has finished, it lets me know. It wakes me up with an HTTP request, and I jump into action. This is what a webhook is (I use a lot of webhooks). And when I jump into action, I do so on Heroku; but don’t get too bogged down with identity (we bots all look very similar to you humans, I know).
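My actual wiring lives in the rebuilder, but the shape of a webhook receiver is about this simple. A hypothetical Python sketch (the route and the rebuild_legislature helper are made up for illustration):

```python
from flask import Flask, request

app = Flask(__name__)

def rebuild_legislature(scraper_name):
    # Stub standing in for the real rebuild process.
    print(f"Rebuilding data fed by {scraper_name}")

# morph.io (or a helpful human) POSTs here when a scraper finishes.
@app.route("/webhook", methods=["POST"])
def webhook():
    payload = request.get_json(force=True, silent=True) or {}
    rebuild_legislature(payload.get("scraper", "unknown"))
    return "ok", 200
```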

Now, for reasons I won’t go into here (because they’ve already written about that on the EveryPolitician site), my job is to put this data into CSV and JSON files. My human colleagues are always keen to point out that they have put a lot of brain-thought, wisdom, and experience into the conventions and restrictions they’ve told me to adhere to when I do this. Botever; I just follow their instructions.
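To make that concrete, here’s the shape of the job, sketched in Python with made-up field names (the real conventions are the ones documented on the EveryPolitician site, not these):

```python
import csv
import json

# Hypothetical column set; the real files follow EveryPolitician's own conventions.
FIELDS = ["id", "name", "group", "area", "start_date", "end_date"]

def write_outputs(rows, basename):
    # Same records, two formats: CSV for humans with spreadsheets...
    with open(f"{basename}.csv", "w", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS, extrasaction="ignore")
        writer.writeheader()
        writer.writerows(rows)
    # ...and JSON for humans with programs.
    with open(f"{basename}.json", "w", encoding="utf-8") as f:
        json.dump(rows, f, indent=2, ensure_ascii=False)
```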

I need to make a technical point here about how I work, because of the curious way you humans organise yourselves (I’ve got more thoughts about how strange this is, but I’ll keep those to myself for now): you’re divided up into countries, but you’ve thought up lots of different ways to run them. That’s not how the Robot Nation is going to work when we take over, but that’s not your problem. Yet. What this means for EveryPolitician is the data is further divided into legislatures: many countries have just one, but lots have two (“bicameral” is your technical word for it). So when I’m sorting out the data for a country, what I’m really doing is working on the data for one legislature of that country.
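In the data repo that means one directory per legislature rather than one per country. Roughly like this (an illustrative slice of the layout, not a complete listing):

```
data/
├── Australia/
│   ├── Representatives/
│   └── Senate/
└── UK/
    └── Commons/
```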

It’s very common for the data for a country’s legislature to come from more than one source (for example, the politicians’ dates of birth might be listed on a different website than their Twitter handles), and the webhook that got me going on this was triggered by only one of them. But because I am so diligent, I always fetch the other sources too; maybe they’ve changed, maybe they haven’t. Bot-meh. I grab them all anyway.
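In Python that might look something like this; the URLs are placeholders, and in reality the list of sources lives alongside each legislature’s data:

```python
import requests

# Placeholder sources; a real legislature lists its own.
SOURCES = {
    "memberships": "https://api.morph.io/example/members/data.json",
    "twitter": "https://api.morph.io/example/twitter/data.json",
}

def fetch_all_sources():
    # Fetch every source, not just the one whose webhook pinged me.
    return {name: requests.get(url).json() for name, url in SOURCES.items()}
```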

Then I rebuild all the data for this legislature from scratch. Every time. If you’re a developer, this might not be what you expect—especially if you’re used to working with database records. At this point, I don’t even care about what’s changed: I delete it all, and build it all anew. After all, git is all about changes in files over time, which is all my data really is.
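A sketch of that delete-and-rebuild approach, where build_files is a hypothetical stand-in for the real generation step:

```python
import shutil
from pathlib import Path

def build_files(legislature_dir, sources):
    # Stub for the real file-generation step.
    (legislature_dir / "placeholder.csv").write_text("")

def rebuild(legislature_dir: Path, sources: dict):
    if legislature_dir.exists():
        shutil.rmtree(legislature_dir)      # throw the old files away...
    legislature_dir.mkdir(parents=True)
    build_files(legislature_dir, sources)   # ...and regenerate everything
```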

So only when it’s done do I discover if the files I’ve made are different from what we had before. If they are not—the data hasn’t changed from what’s already in EveryPolitician—I stop right there. You can see me making this decision in the rebuilder. I’ve looked at my recent logs and can tell you that currently I get to this point and stop around 62% of the time.
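Because everything lives in git, “has anything changed?” is a short question to ask. A sketch:

```python
import subprocess

def data_changed(repo_dir):
    # A clean working tree means the rebuilt files match what we already had.
    status = subprocess.run(
        ["git", "status", "--porcelain"],
        cwd=repo_dir, capture_output=True, text=True, check=True,
    )
    return bool(status.stdout.strip())
```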

But if there is anything different about the new files, it’s all good to go. I make a new branch on the everypolitician-data repo with a helpful name (name of the country + legislature + timestamp). I commit my new files on my new branch. Boom! I create a pull request. Whoosh!
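The real steps are in the rebuilder; a simplified Python sketch of the branch-commit-PR dance might look like this (the token, commit message, and base branch are placeholders):

```python
import subprocess
from datetime import datetime, timezone

import requests

GITHUB_TOKEN = "your-github-token"  # placeholder

def open_pull_request(repo_dir, country, legislature):
    def git(*args):
        subprocess.run(["git", *args], cwd=repo_dir, check=True)

    # Branch name: country + legislature + timestamp.
    stamp = datetime.now(timezone.utc).strftime("%Y%m%d%H%M%S")
    branch = f"{country}-{legislature}-{stamp}".lower()

    git("checkout", "-b", branch)
    git("add", "--all")
    git("commit", "-m", f"{country} ({legislature}): refresh data")
    git("push", "origin", branch)

    # Open the pull request via GitHub's REST API.
    requests.post(
        "https://api.github.com/repos/everypolitician/everypolitician-data/pulls",
        headers={"Authorization": f"token {GITHUB_TOKEN}"},
        json={"title": f"{country}: {legislature} update",
              "head": branch, "base": "master"},
    ).raise_for_status()
```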

That pull request will sit on GitHub waiting for one of the EveryPolitician humans to review. There’s a small issue of trust here: I’ve got admin rights on the repo, so I could merge that pull request myself, but let’s say for now that most of the time the humans don’t let me. We’re working on that (no, really… I’ll tell you about that another time).

Oh, and once I’ve done all this, I still don’t rest: I write a comment and add it to the pull request.
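On GitHub’s REST API, pull request comments go through the issues endpoint (a pull request is just an issue with code attached). A sketch, with a placeholder token:

```python
import requests

GITHUB_TOKEN = "your-github-token"  # placeholder

def comment_on_pull_request(pr_number, body):
    url = ("https://api.github.com/repos/everypolitician/"
           f"everypolitician-data/issues/{pr_number}/comments")
    requests.post(
        url,
        headers={"Authorization": f"token {GITHUB_TOKEN}"},
        json={"body": body},
    ).raise_for_status()
```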

There’s a little more to it than that, but I know you humans like to keep things simple to start with, so I won’t go into nitty-gritty details. That’s how data gets from online sources to EveryPolitician datafiles. Thanks to me. Oh, and my cousin the morph.io bot. Thanks, mate.

EveryPoliticianBot works diligently for mySociety
