5 Tips for More Inclusive AI Research

Lessons for the new Partnership on AI Nonprofit

Two weeks ago, Amazon, Google, DeepMind, Facebook, IBM, and Microsoft launched a new nonprofit called the Partnership on Artificial Intelligence to Benefit People and Society. Broadly, the goals of this venture are to: (1) support best practices, (2) create an open platform for discussion and engagement, and (3) advance understanding. Sounds ambitious and timely, you might say.

Although a step in the right direction, the Partnership on AI highlights a certain conundrum: what exactly is it that we want from Silicon Valley's tech giants? Do we want a seat at their table? Or are we asking for a deeper and more sustained kind of participation? Or, more disturbingly, is it already too late for truly inclusive and meaningful participation in the development of future AI technologies?

Why might it be too late? Primarily because much of this technology is already being developed behind closed doors. Additionally, Google's AI ethics and research board continues to lack transparency despite popular calls for greater openness. We have also seen that Elon Musk's more technically oriented nonprofit, OpenAI, has done little to engage a wide variety of participants.

So, what are we to make of this?

On the downside, the Partnership on AI may simply create another, somewhat wider inner circle of elite-educated academic panels, scholars, privacy advocates, and the like, who will deliberate on and shape the discourse surrounding AI research and development while Silicon Valley continues to quietly develop the actual technologies behind closed doors. This would largely continue to leave out minorities, women, and others who are already underrepresented in the sector and who, as many advocates have shown, stand to lose the most in an increasingly data-driven society.

With that in mind, below is a brief list of tips to help the Partnership on AI make meaningful headway toward inclusivity and participation:

1. Check your culture at the door: It's hard to ignore the growing transhumanist paradigm in Silicon Valley. Transhumanism is the intellectual and cultural movement supporting the use of science and technology to improve human mental and physical characteristics. You may recall Elon Musk's recent transhumanist discussion of neural lace at the 2016 Code Conference. This techno-utopianism within Silicon Valley stands in stark contrast with the daily realities of most of us. It also has a dark side: the majority of us will not benefit from Silicon Valley's quest to become "superhuman", and those already marginalized and often discredited in society stand to lose the most.
[Image: Are we moving towards a transhumanist society?]

2. Check your values at the door: At the same time, like all technologies before it, artificial intelligence will reflect the values of its creators, so inclusivity matters more than ever. It's not just about adding two minority or women software engineers or program staff to your business; it's also about looking out for echo chambers within your organization, from the folks who design the software to those who sit on the company board. If we don't continuously monitor the value systems that exist within our organizations, we run the risk of developing machine intelligence that mirrors a narrow and privileged vision of society, with all of the familiar biases and stereotypes.

3. Provide opportunities for regular citizens to have a say: Silicon Valley needs to take significant steps to overcome its largely homogeneous social circles. Some ideas include providing scholarships for minority students and women at public universities around the world to engage with these topics. More dramatically, perhaps, hold community workshops around the country to discuss the vision Silicon Valley has for how machine learning technologies could impact society.

4. Learn to appreciate non-scientific expertise: Science tends to create a 'technocracy' (Habermas, 1973; Fischer, 1990, 1993b, 1995). Scientific expertise is often power: it has historically played a major role in framing debates, in many cases becoming the dominant framework and paradigm upon which social norms are established. But while this type of expertise empowers some, it silences others, such as lay publics who are often labeled ignorant and incapable of handling scientific complexities.

5. Opt to become a communicative institution: A communicative institution allows multiple perspectives to enter into debate and, through processes of argumentation, negotiates goals, values, and appropriate courses of action. This is what the machine learning industry needs: not another echo chamber, but tension, deliberation, and discussion that does not negate scientific knowledge, but rather transforms it so that the deliberations of scientists are subject to "extended peer review" (Funtowicz and Ravetz, 1993).