Data Monopolies — A Novel (?) Solution to Avoid Them

Achal Agrawal
Jul 3, 2017 · 4 min read

Ever wondered why Uber gives you insane discounts when they begin their service in your city? What’s in it for them? Why, they are just building their customer base, you say. It is an age-old trick. Nothing new here.

Except that there is something more sinister going on. You see, in this ‘data age’ (buzzword alert!), data is the new means of production. Move over, capital and labour. The company that has more data can provide better services, because the bigger the data, the better the service. In turn, the company gets more customers, because, well, better services. More customers means even more data, and so on and so forth. I think you can see where this is going.

Such network effects (jargon alert!) invariably lead to monopolies: Google, Facebook, Amazon, Uber, Airbnb, Blablacar… to name the more famous ones. This is their playbook: bleed investor money, acquire a data monopoly, and milk the customers progressively ad infinitum. Drug peddlers follow the same model.

This also explains why investors are willing to dig so deep into their pockets to support startups in search of the next unicorn: the gains to be had are potentially infinite. This leads to a lot of wastefulness in terms of (already scarce) societal resources.


It is clear that monopolies are bad for everyone (except for the monopoly, duh!). Various anti-trust laws exist to address precisely this issue. However, those laws were made when ‘data-monopoly’ was not a concept. New laws need to be made to account for these new models.

To be fair, some governments are aware of this problem and are already taking action. Google was just fined and berated by the EU for blocking competitors by pushing its own services on its search engine. Unfortunately, such policing is incredibly hard to do, not least because the algorithms used by these services are closely guarded secrets (for good reasons). Detecting any fraud therefore requires considerable reverse-engineering effort.

To be sure, I am not the first to raise this issue. Far from it. For a comprehensive debunking of the myths surrounding Big Data and competition, see this excellent article by Stucke and Grunes. There are also counterpoints to my argument. The Information Technology and Innovation Foundation (ITIF) recently came out with an article suggesting that there is no need to create anti-trust laws specifically for data-intensive domains. The article is fraught with bad reasoning and singularly lacking in data to support its claims. A quick look at their funders makes the reason clear: Google is one of them. Nuff said.

Alright, I see the problem, but what can be done about it?

While the problem has been known for a while, to the best of my knowledge, the solution I propose has never been discussed before. Feel free to correct me on this (or anything else) in the comments section.

We had a similar problem long ago. Inventors and innovators would keep the technology behind their inventions secret, as secrecy was the only way to safeguard against imitation. To solve this, patent law was devised (as early as the 15th century). Patents protected the inventor from infringement for a fixed period, and in exchange the inventor revealed the invented technology. This ensured that innovators had an incentive both to innovate and to share their technologies.

A similar system can (and should!) be devised for data. The principle remains the same: let companies keep their data secret for a fixed amount of time, after which they must share it with their competitors. Again, this ensures that companies still have an incentive to innovate, as they retain a data advantage, albeit a much smaller one than they enjoy now. On the other hand, it also makes sure that the competition can catch up in the long term.

To be more precise, each piece of data would need to be shared a fixed time after it was created. So a company will always have more recent data than its competitors. When the market is growing very fast, this advantage will be crucial, because the predictive models will be changing too. However, if the market stagnates, recent data will cease to be an advantage, as old data will be good enough for predictive analysis.
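The rolling-window mechanism above can be sketched in a few lines of code. This is purely illustrative: the two-year embargo period and the `(created_at, payload)` record format are my own assumptions for the sketch, not part of any concrete proposal.

```python
from datetime import datetime, timedelta

# Illustrative embargo period: a record must be shared with
# competitors once it is this old. Two years is an assumption
# made for this sketch, not a recommended figure.
EMBARGO = timedelta(days=730)

def partition_records(records, now):
    """Split (created_at, payload) records into those the company
    may still keep private and those old enough to be shared."""
    private = [r for r in records if now - r[0] < EMBARGO]
    shared = [r for r in records if now - r[0] >= EMBARGO]
    return private, shared

# Example: a recent record stays private, an old one must be shared.
now = datetime(2017, 7, 3)
records = [
    (datetime(2017, 1, 1), "recent ride data"),
    (datetime(2014, 6, 1), "old ride data"),
]
private, shared = partition_records(records, now)
```

The point of the sketch is that the boundary between private and shared data moves forward continuously with time, so the incumbent keeps a head start of fixed length rather than a permanent lead.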

Of course, I make it sound simpler than it is. The aim of this article is only to demonstrate the need for such a mechanism and to propose one. There are security issues to be dealt with. There is also the question of what kind of data we are talking about: profile data, operational data, and so on. And, perhaps most importantly, we need to discuss the infrastructure such a mechanism would require.

I think these issues are not insurmountable, and given the current trend of data monopolies cropping up in every field, there is an urgent need to address the issue and find a workable solution.

If you agree with me, please do share this article. Awareness is the first step towards resolution. Given the might of the data monopolies and the powerful lobbies they fund, the only way for lawmakers to hear us is to create a collective consciousness around this issue.
