A model for Informed Sharing: here’s one we made a bit earlier
Long before Facebook thought ‘informed sharing’ might help soothe the beast called Fake News — aka lies and propaganda — a bunch of eager dreamers had a brainwave they named milkpilot.
milkpilot is both a website (currently ‘resting’ in beta stage) and the home of the ratings system, Dynamic Credibility (DC).
It aimed to answer the question: in a world awash with content, how do we know what is good? That was almost three years ago. If anything, the question is even more urgent now.
The central idea was to encourage readers to rate articles and blogs against five criteria — five key words — and in doing so, create a credibility rating for both the piece and its author.
We argued that readers deserved a more sophisticated choice than simply to ‘like’ something. In clicking ‘like’, what are Facebook users actually liking — and why?
So we came up with our own words: timeliness, enjoyment, transparency, originality and impact.
The words we picked seemed a pretty fair and reasonable way of assessing the credibility of any one piece of journalism and any one author.
Over time, as more people rated and shared an article, the DC rating on the piece would change — as would the score of the contributor. Hence the word dynamic.
The higher a contributor’s rating, the greater their influence over milkpilot articles and authors. We were keen to turn raters into contributors and vice versa.
This is what the site looked like at the front end.
And here is a sneak peek of the back end, the engine room of DC. (This image doesn’t entirely do DC justice, as nothing much has happened in milkpilot for the past two years or so.)
As time passes, and there is less activity on the story and by the author, DC flattens out. But you get the drift: individual users grow a credibility score.
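To make the idea a little more concrete, here is a minimal sketch of how a DC-style score could be computed. The real milkpilot algorithm was Simon’s and isn’t spelled out in this piece, so the half-life, the weighting by rater credibility and the 0–5 scale below are all illustrative assumptions, not the actual implementation.

```python
from dataclasses import dataclass
import time

# The five key words readers rated against.
CRITERIA = ("timeliness", "enjoyment", "transparency", "originality", "impact")
HALF_LIFE_DAYS = 30  # assumed decay rate: scores flatten out as activity drops off


@dataclass
class Rating:
    rater_credibility: float  # the rater's own DC score, 0.0-1.0 (assumed scale)
    scores: dict              # one value per criterion, e.g. 0-5 (assumed scale)
    timestamp: float          # when the rating was made, in epoch seconds


def article_dc(ratings: list[Rating], now: float | None = None) -> float:
    """Credibility of a piece: its average criterion score, weighted by each
    rater's own credibility and decayed as the rating ages."""
    now = now or time.time()
    weighted_sum = total_weight = 0.0
    for r in ratings:
        age_days = (now - r.timestamp) / 86400
        decay = 0.5 ** (age_days / HALF_LIFE_DAYS)   # older ratings count for less
        weight = r.rater_credibility * decay          # higher-rated raters count for more
        mean_score = sum(r.scores[c] for c in CRITERIA) / len(CRITERIA)
        weighted_sum += weight * mean_score
        total_weight += weight
    return weighted_sum / total_weight if total_weight else 0.0
```

On this reading, a contributor’s own DC score would in turn be some rolling average of the scores on the pieces they wrote and rated, which is what would give high-credibility users their greater influence.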
(In case it isn’t clear by now, I happened to be one of those dreamers along with Simon Meers (algo), Anthony Ditton (UX) and Hunter Page (bizdev), who you might remember from this previous piece on Medium.)
We four thought milkpilot had plenty of promise, so we managed to convince a journalism school in Sydney to trial the idea with students.
That happened over several weeks in mid-2014. Thanks to Stephen Davis for helping us make that happen.
We incentivized the students with the promise of professional editing and iTunes cards for the most rated.
We wanted students to write their own pieces and rate others’ work, under their own names.
This delivered our first glitch. Many students preferred aliases over real names, even though, as we explained, transparency was one of our key words.
There were other misreadings.
We thought students would be interested in rating their peers. All too late did we realize that peer review, especially at such a granular and persistent level, might not prove an attractive option to someone sitting two desks away from the person they were rating.
Peer review is very much in vogue in universities. But I suspect milkpilot just asked for too much: too much thinking, too much doing and too much revelation.
Our students were keen on sharing (anonymously), some were up for writing — and not many up for rating.
That said, some of the students were really into it and did some great work. ‘Baby Beatnik’, ‘TCarley’ and ‘Necessary on a Bicycle’ were productive writers and raters.
After a couple of months, the trial petered out, milkpilot had a pivot or two and the dreamers had to go back to the real world of jobs and kids.
So, here we are now. Up pops Facebook and the idea of informed sharing and you think: we were ahead of our time and gave up too easily.
In part, the milkpilot version of informed sharing morphed into Clevr, which is now a B2B author-driven personalization engine. Clevr is in beta stage.
But we keep renewing the provisional patent over milkpilot and can still see its promise as a way of improving other rating concepts: Amazon’s product rating system, for instance. Possibly even Facebook’s.
I guess we will find out precisely how Adam Mosseri, Facebook’s VP of product management, intends to make stories “authentic and meaningful” on the platform.
As a recovering editor and fact-checker, I’d be tempted to say: employ editors and fact-checkers.
As a milkpilot pioneer, I’d say the challenge before you, Adam, is to ensure people read all the way through before they take part in the act of informed sharing.
How do you get people to do that?
We had two ideas: one, that some readers, as a way of showing their own engagement (read, ‘smarts’), would be only too happy to read and rate articles against our criteria; two, that we could use DC to create league tables of the most shared, rated and read.
This would be another way of recognizing high-rating contributors.
In our more colorful moments, we envisaged that people at the top of that table — say, the top five or ten — would gain a share of the revenues we generated. We also saw milkpilot as a Chrome extension.
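Purely as an illustration (the real table never got built), a league table like the one we imagined could be as simple as sorting contributors by their DC score and engagement counts, with the top slice sharing a revenue pool. The ordering, the equal split and the field names here are assumptions for the sake of the sketch.

```python
from dataclasses import dataclass


@dataclass
class Contributor:
    name: str
    dc_score: float   # current Dynamic Credibility score
    shares: int       # how often their pieces were shared
    ratings: int      # how often their pieces were rated
    reads: int        # how often their pieces were read through


def league_table(contributors: list[Contributor]) -> list[Contributor]:
    # Rank by credibility first, then by raw engagement (an assumed tie-break).
    return sorted(
        contributors,
        key=lambda c: (c.dc_score, c.shares + c.ratings + c.reads),
        reverse=True,
    )


def revenue_split(contributors: list[Contributor], pool: float, top_n: int = 10) -> dict[str, float]:
    """Share a revenue pool equally among the top N contributors (the 'top five or ten')."""
    top = league_table(contributors)[:top_n]
    return {c.name: pool / len(top) for c in top} if top else {}
```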
Back in 2014 we showed the idea to an influential and empathetic news executive who, though full of praise for the idea, argued that most readers wouldn’t be interested in spending time rating journalism.
He might still be right. But times have changed. Journalism needs all the help it can get.
In these days of lies and propaganda, we will certainly need to rely on something more than a notion of public good to rebalance public debate in favor of truth.
milkpilot may yet fly again.
ends