Think of it as the Universal Declaration of Human Rights…for information.

H R Venkatesh
Mar 17 · 5 min read
Photo credit: Roman Kraft/Unsplash

Note: If you already believe we’re awash in digital pollution, skip ahead to section 2.


We’re drowning in an ocean of information every second, every minute and every hour of our lives. Some of it is useful (important emails, text messages, news) and some of it is essential for our relaxation (books, movies).

But a big chunk of it is just junk. Our email inboxes are stuffed with newsletters we don’t remember subscribing to, our Twitter feed makes us angry, YouTube is a time sink and our WhatsApp has hundreds, maybe thousands of muted messages. And then there’s the information that harms us, such as all the disinformation and misinformation emerging from people with political, social, foreign and commercial motives.

We think we have control over this, but really, we don’t. Yes, there are hundreds of thousands of people the world over who’ve dedicated their lives to get us reliable information. I’m thinking of fact-checkers, educators, some types of journalists and activists and more. Their efforts are of critical importance, but we need much, much more.

Something radical.


There are at least three ways to think about this problem.

First, there are the Geneva Conventions, a set of rules that evolved to protect soldiers and civilians during war. Among other things, these conventions ban the use of torture in wars between nations. This doesn’t mean that people aren’t tortured anymore, or that all kinds of conflicts are covered. And not every nation has signed them. But the adoption of these conventions by more than 180 countries has considerably reduced the scale and intensity of torture globally. Compared to a couple of hundred years ago, the difference is staggering.

Wouldn’t it be great if we had a Geneva Convention for some types of toxic information? It wouldn’t be perfect, but at the very least, it would be a powerful symbolic statement.

A second way to think about the problem is from a human rights perspective. The Universal Declaration of Human Rights is a document that recognizes some 30 rights. Beyond individual rights, it also addresses social, cultural and group rights.

Similarly, we can argue that the right to reliable information should be considered a human right.

A third and perhaps more effective way to frame the problem is to look at it from a health perspective. Just as every person should be able to access clean air and clean water, so should they be able to access ‘clean’ information.

Stanford professor James T. Hamilton, who also directs the journalism program, suggests that this framing could be effective because it does one more thing: potentially unite those on the left and the right.

When I spoke with him, Hamilton also suggested that I use the term ‘digital pollution’ and pointed me to two others who’ve used it as a framing device: Judy Estrin, a networking technology pioneer, and Sam Gill, a vice president at Knight Foundation, an organization that routinely funds community-building and journalism.


In a piece published in Washington Monthly called “The World Is Choking on Digital Pollution”, Estrin and Gill use the cleanup of the River Thames in London as a model for tackling today’s information overload.

It was at the height of the Industrial Revolution in the 1860s, they say, that the term ‘pollution’ came to be used in the modern sense. This framing of the problem of human and industrial waste in the river as pollution, they write, was crucial to finding a solution: “A problem without a name cannot command attention, understanding, or resources — three essential ingredients of change.”

Estrin and Gill say that today, digital pollution is most visibly about “increased anxiety and fear, polarization, fragmentation of a shared context, and loss of trust…”. But there are more side effects. They continue, “potential degradation of intellectual and emotional capacities, such as critical thinking, personal authority, and emotional well-being, are harder to detect. We don’t fully understand the cause and effect of digital toxins.”


I first started thinking about Information Consensus in November 2018 after a conversation with another JSK Fellow here at Stanford. But the real beginning of all this was in early 2016, when I started NetaData in India to focus on political accountability and polarization. In 2017, I began to map out the disinformation landscape in India and aggregate potential solutions. By the time I’d started putting together a coalition of newsrooms a year later, I had already realized that fact-checking alone wasn’t going to be as effective in tackling disinformation as I had initially thought. (Fact-checking is still needed to correct the historical record, but it doesn’t seem to be as impactful as a tool for getting people to change their minds about false information.)

To me, a more promising approach seemed to be media literacy: educating the reading and viewing public on how to consume information. However, I’d barely begun to understand media literacy approaches when I realized that technological design (websites, social media, apps, phones) played a role too. There was even a role for policy and government in helping people navigate the toxicity of information.

What we needed, it finally seemed to me, was a multi-pronged approach to address the issue.

Now, one way to do that is to aggregate all current bottom-up approaches to help us access clean information. (Just to mention two initiatives in the media alone, there’s the Credibility Coalition and the Trust Project.)

Another way is to see if any top-down approaches work. Digital pollution is after all a problem that touches many professional spheres and geographies. And so, the more I thought about it, the more I felt that we urgently need to bring people from all over the world together to talk about this.

Hence, Information Consensus.


Whatever we choose to call this effort, there are at least two questions that need answering: How will we bring about a consensus? And who decides what is safe or reliable information?

These are questions without comprehensive answers right now, but at a minimum, any such effort will need to be global and inclusive. So not only do we need to engage with people who represent media, technology, academia, policy and government from all nations, we also need people represented fairly across gender, race, language, caste, class, region, sexual identity and the ability/disability spectrum.

For such a global consensus too, the Thames example holds lessons for us. Estrin and Gill write, “Relief came from bringing together the threads needed to tackle this type of problem — studying the phenomenon, assigning responsibility, and committing to solutions big enough to match the scope of what was being faced.”

Thanks to them and others like James Hamilton, we have a defined problem (digital pollution). Now we have a potential solution (information consensus).

Note: If you decide that Information Consensus is worth thinking about, then at the very least, it’s worth talking about as well. Let’s start the conversation, and join other such conversations. Please consider sharing this piece to help the idea spread.

JSK Class of 2019

Insights and updates from members of the John S. Knight Journalism Fellowships Class of 2019 at Stanford University

H R Venkatesh

Written by

Journalism Fellow @JSKstanford | Hacks/Hackers Delhi & NetaData | Alum: CUNY, Oxford, SIMC,

