Sherif Elsayed-Ali
Aug 4, 2017

What if, instead of AI research and development being dominated by giant tech companies, there was a global public AI research institution advancing the field for the good of humanity, free from the interests of a handful of companies? What if it was owned by millions of ordinary people across the world?

Gary Marcus put forward the idea of a CERN for AI at the #AIforGood summit in Geneva in June. I found it to be an inspiring, potentially transformative idea. He reiterated the need for it recently in an op-ed for the New York Times:

I look with envy at my peers in high-energy physics, and in particular at CERN, the European Organization for Nuclear Research, a huge, international collaboration, with thousands of scientists and billions of dollars of funding. They pursue ambitious, tightly defined projects (like using the Large Hadron Collider to discover the Higgs boson) and share their results with the world, rather than restricting them to a single country or corporation. Even the largest “open” efforts at A.I., like OpenAI, which has about 50 staff members and is sponsored in part by Elon Musk, is tiny by comparison.

To me, the most important part of the idea is this:

[Scientists] share their results with the world, rather than restricting them to a single country or corporation.

AI is already a very powerful technology and its future potential is immense. When companies like Google and IBM invest in and develop AI applications, they are inevitably following a specific corporate strategy. The technologies they develop can have real and tangible social benefits, but that will never be their main goal: there is no escaping the fact that they are companies with investors who want the maximum return on their money. They are not social enterprises.

Which is why the idea behind OpenAI, the non-profit AI research organization set up by Elon Musk, Sam Altman and other tech funders, is important. Its mission is “to build safe AGI, and ensure AGI’s benefits are as widely and evenly distributed as possible.” It is small, with a staff of around 60 people, and despite having $1 billion pledged in funding, OpenAI expects “to only spend a tiny fraction of this in the next few years”, reinforcing its limited scope. It is also very US-centric (not a criticism in itself, since its founders are US-based), and it doesn’t address the concentration of AI research in a very small number of countries. OpenAI is important, but nowhere near enough to fulfill the need for AI research in the public interest.

So, where can the funding for an international CERN for AI come from?

The traditional answer to finding large amounts of money for international public interest collaborations is government funding. This makes sense: for rich countries, a few hundred million dollars a year is a negligible proportion of public expenditure. Get just the 10 richest countries to each put in a fraction of that amount and you have a budget of several hundred million dollars a year. But there are several major difficulties:

  • Austerity: the world is still stuck in austerity economics. At a time when many wealthy countries are cutting public services (and science research funding), it may be difficult to find additional money for new research. At best, such funding would be diverted from other science research budgets.
  • Prospects for international collaboration: with the Trump administration’s aversion to science (e.g. its attitude to climate change) and its general approach to international cooperation, the world’s largest economy might be out of play. With Brexit, the UK is withdrawing from EU institutions and has little bandwidth to do much beyond the mess of untangling itself from the EU. Getting Russia and, say, France and Germany to work together is unlikely to happen anytime soon, and China is a big gamble. What you’re left with is a small number of countries with a strong scientific research tradition and good relations with each other, the likes of Canada, France and Germany. Many less wealthy countries with strong academic research institutions could also be potential collaborators; diversity in the countries backing such a project would make it stronger.
  • Bureaucracy: anyone who has dealt with multilateral institutions (the UN, the EU, the African Union, etc.) knows they are complicated. That complexity is a direct result of their membership and the politics of each member state. While political considerations may not have much bearing on a purely scientific research initiative, this changes once you try to develop AI applications for good. Commonsense environmental, humanitarian and human rights applications can become surprisingly controversial when governments hold the purse strings.
  • Time: governments are not nimble, no surprise there. Even if obstacles around money and collaboration can be overcome, it would take years to get agreement on governance structures, funding, staffing allocations and actually get the money in the bank.

So what’s the alternative?

A variation on the CERN idea that was proposed at the #AIforGood Summit was an international AI research network distributed across different regions and countries, rather than centralized in the way CERN is. There are good reasons for this: AI research doesn’t need a single massive physical facility the way particle physics does, and a distributed network would allow research to be funded in many different countries.

Together with the Office of the High Commissioner for Human Rights, I led discussions on issues of equality in AI at the summit, and a critical issue that was identified was the importance of diversity in developing AI applications. This is not just a nice thing to have but critical if we want AI applications to be locally relevant and accurate. We can’t assume that any AI system developed in Palo Alto will work as intended in Cairo.

Serious question: can a self-driving car trained in Silicon Valley recognize a donkey cart?

But distribution doesn’t have to end here. Instead of government funding, this AI research network could be directly funded by the public, with millions of small, regular donations. Sounds far-fetched? Hardly.

To give a few examples:

  • In the US alone, individual charitable giving reached more than $260 billion in 2015
  • Australians donated 12.5 billion Australian dollars in 2016
  • Individual donations from London alone amount to $2 billion/year
  • According to the Charities Aid Foundation (CAF), nearly one third of the world’s population donated money to charity in 2015

Every year, hundreds of billions of dollars are donated to good causes. A large proportion goes to disaster relief, poverty reduction, children in need and other causes that alleviate suffering in the short term. But, to take the UK as an example, 26% of respondents to a CAF survey said they donated to a medical research charity. This shows great appreciation of the importance of funding research in the long-term public (and personal) interest.

Several large, successful global charities rely on small donations from millions of individuals. Such a broad financial support base should be much more stable than funding from a handful of governments, as it would not face the continuous risk of falling out of favour with changing political circumstances.

This is a tested model that can work for public interest AI research. But to be successful it needs to get many people interested — and keep them interested.

If we can blame science fiction for making everyone think of the Terminator whenever they hear the words “artificial” and “intelligence” together, it can also be commended for making people fascinated by the future possibilities of technology in general and AI in particular. We only have to look at the popularity of sci-fi, and at the headlines that even the most banal snippet of AI news provokes, to know there is huge public interest in AI.

Transforming that interest into reliable financial support needs good communications: telling the story of where AI technology is, where it can go and what it can do for humanity. For long-term support, a CERN-like initiative for AI would need to demonstrate positive impact in the world, not in 20 years’ time but in two years’ time, which is why I believe a core part of such an initiative should be developing AI-for-good applications.

Lastly, governance. Giving money for a good cause provides a degree of satisfaction, but a membership model would be much more attractive to people. In such a model, members, who donate regular amounts of money, would have a say in the running of the initiative.

Of course, science should be left to scientists, but when it comes to developing practical for-good applications, members could have a say, for example by voting on how money is allocated among competing ideas. Members could also directly contribute to AI research by helping classify data and train algorithms.

A membership model would give a broad democratic basis for such an institution and would give millions of ordinary people a stake in one of the most important technological developments in history. A people’s AI could truly embody the ideas of #AIforGood.


Intuition Machine

Deep Learning Patterns, Methodology and Strategy

