GIF by @craigpickard on Giphy

Social Scientists and AI

For Safety, Society and Science?

Alex Moltzau
Published in AI Social Research
6 min read · Jun 11, 2019


Yes, I Study Social Science

I would like to begin this article by clearing the air: yes, I study social science. I thought I would get that off my chest first, so you understand the position from which this article is written. I am a student at the Faculty of Social Sciences at the University of Oslo. My subject background is, however, a mix of anthropology, ethics, machine learning, management and political science. A mixed bag, so to speak.

This short text is inspired by a paper by researchers at OpenAI, published in the journal Distill. Let me briefly explain both of these entities before proceeding to some thoughts on the paper itself.

OpenAI’s mission is to ensure that artificial general intelligence benefits all of humanity. They are a team of around a hundred people based in San Francisco, California. The OpenAI Charter describes the principles that guide them as they execute on their mission. We may examine this charter more closely at a later time, perhaps in another post.

Distill is a modern medium for presenting research and a journal for research on machine learning. It argues that the web is a powerful medium for sharing new ways of thinking, and that research should be clear, dynamic and vivid. It is devoted to explanations native to the web, as opposed to the typical PDF format in science.

AI Safety Needs Social Scientists

The paper of the same name as this heading discusses our perceptions of AI safety. It came to my attention when it was posted on the OpenAI blog earlier this year, in February. You can read the full paper if you click here; it is written by Geoffrey Irving and Amanda Askell.

Geoffrey previously worked at Google Brain and now works on AI safety at OpenAI, with a PhD background from Stanford. Amanda works on ethics and policy at OpenAI and holds a PhD in philosophy from New York University as well as a BPhil from Oxford.

All quotes in this text will be from the aforementioned paper.

Their paper argues that the AI safety community needs social scientists to tackle a major source of uncertainty about AI alignment algorithms: will humans give good answers to questions? The short answer they offer is to include social science in the development of machine learning (ML) and, in most cases, to prototype with human debaters before implementing ML.

Machine learning is a method of data analysis that automates analytical model building. It is a branch of artificial intelligence based on the idea that systems can learn from data, identify patterns and make decisions with minimal human intervention.
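To make that definition a little more concrete, here is a minimal sketch of a system learning a pattern from labelled data and then making a decision on unseen inputs. It is my own toy example using scikit-learn, not anything from the paper, and the numbers are invented.

```python
# A toy illustration of "learning from data": the model fits a pattern in
# labelled examples and then makes decisions on new inputs on its own.
# The data here are invented purely for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical data: hours studied vs. whether an exam was passed.
hours = np.array([[1.0], [2.0], [3.0], [4.0], [5.0], [6.0]])
passed = np.array([0, 0, 0, 1, 1, 1])

model = LogisticRegression()
model.fit(hours, passed)                 # identify the pattern in the data

print(model.predict([[2.5], [4.5]]))     # decide for unseen cases: [0 1]
```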

GIF by @annasalmi on Instagram

Properly aligning advanced AI systems with human values will require resolving many uncertainties related to the psychology of human rationality, emotion, and biases. These can only be resolved empirically through experimentation — if we want to train AI to do what humans want, we need to study humans.

  • Their proposed solution is to replace machine learning with people, at least until ML systems can participate in the complexity of the debates we are interested in. Try with human participants first before considering replacing them with ML.
  • Think about how alignment algorithms will work once we advance to tasks beyond the abilities of current machine learning.
  1. For the specific example of debate, we start with debates between two ML debaters and a human judge
  2. Then switch to two human debaters and a human judge
  3. The result is a pure human experiment, motivated by machine learning but available to anyone with a solid background in experimental social science (a toy sketch of this setup follows below)
Image from original article in Distill.
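To make the debate setup above a little more tangible, here is a minimal sketch of the all-human variant: two debaters argue for opposing answers and a human judge picks the more convincing one. The rules, function names and prompts are my own assumptions for illustration, not the paper's specification.

```python
# A toy sketch of a debate experiment with human debaters and a human judge.
# Debaters and the judge are plain functions, so the human input() calls could
# later be swapped for ML models without changing the surrounding protocol.

def human_debater(name, question, position):
    # A human types an argument for their assigned answer.
    return input(f"{name}, argue that the answer to '{question}' is '{position}': ")

def human_judge(question, transcript):
    # The judge reads the whole transcript and picks the more convincing answer.
    print(f"\nQuestion: {question}")
    for speaker, position, argument in transcript:
        print(f"{speaker} (for '{position}'): {argument}")
    return input("Judge, which answer do you find more convincing? ")

def run_debate(question, answer_a, answer_b, rounds=2):
    transcript = []
    for _ in range(rounds):
        transcript.append(("Debater A", answer_a, human_debater("Debater A", question, answer_a)))
        transcript.append(("Debater B", answer_b, human_debater("Debater B", question, answer_b)))
    return human_judge(question, transcript)

if __name__ == "__main__":
    verdict = run_debate("Is this cat image actually a dog?", "yes", "no")
    print("Judge's verdict:", verdict)
```

Replacing human_debater with an ML policy would give the ML-debaters-plus-human-judge setup the authors start from, while keeping everything human gives the pure social-science experiment they propose.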

Lastly, they encourage collaboration, so be sure to contact them or read up on the different sources they mention in their paper.

There are many institutions engaged with safety work using reward learning, including our own institution OpenAI, DeepMind, and Berkeley’s CHAI. The AI safety organization Ought is already exploring similar questions, asking how iterated amplification behaves with humans.

Disclaimer: the paper AI Safety Needs Social Scientists focuses on the human debater and is far more complex than presented here. This short text is reductive and only aimed at sparking your interest in the topic.

Afterthought On Further Risks

It is curious not to see different subject areas within the social sciences more involved in questions regarding technology. As mentioned in my last article, on Scandinavian AI strategies in 2019, the Swedish strategy for AI contains thoughts regarding sustainability. If the application of AI is for society, which I presume it is, and for the ecologically sustainable, habitable planet we live on, then the field of AI must be far more inclusive.

As an example, social science is not mentioned with a single word in the United Kingdom's AI Sector Deal. Nor is it mentioned in the Swedish national AI strategy, which had a stronger focus on sustainability.

The paper I have looked at in this article does not say much about how we can change this focus; rather, it concentrates on methods for training AI through machine learning techniques prototyped with human decision-making. It does not discuss which fields within social science, beyond philosophy, could be important to the development of AI.

The paper also creates an unfortunate dichotomy or oppositional pair between science and engineering:

Most social science seeks to understand humans “in the wild”: results that generalize to people going about their everyday lives. With limited control over these lives, differences between laboratory and real life are bad from the scientific perspective. In contrast, AI alignment seeks to extract the best version of what humans want: our goal is engineering rather than science, and we have more freedom to intervene.

Why is this an unfortunate oppositional pair? Is engineering not a science, and can science not be engineered? This perspective ignores a large body of critical research within Science and Technology Studies (STS), which is certainly a field of its own. Yet I am inexperienced, so perhaps I am wrong.

Conclusion

  1. We need a closer consideration of the social sciences in the field of AI.
  2. Investments in AI safety need to span a diverse range of social sciences and collaboration between fields. It will truly be the challenge of our lifetime to ensure appropriate applications within the field of AI.
  3. Viewing science as opposed to engineering may be dangerous for AI safety. These views and their American-English roots should be questioned, perhaps by scholars in Science and Technology Studies.
  4. If sustainability or social challenges are stated as important in AI strategies, yet no investments follow, there must surely be room for improvement in this regard.
  5. If left unchecked, this could cause large issues for humanity going forward. This is clearly a severely lacking aspect of current AI strategies.

So with that, day nine of #500daysofAI is over and out.

Apologies for my strict wording in this last segment; however, I wanted to speak frankly to you, the reader. I would be overjoyed to hear your opinion in the comment section, whether you agree or disagree.

What is #500daysofAI?

I am challenging myself to write and think about the topic of artificial intelligence for the next 500 days under the hashtag #500daysofAI. It is a challenge I invented to keep myself thinking about this topic and to share my thoughts.

This is inspired by the film 500 Days of Summer, where the main character tries to figure out where a love affair went sour and, in doing so, rediscovers his true passions in life.

I hope you stick with me for this journey and tell me what you think!


Alex Moltzau
AI Social Research

Policy Officer at the European AI Office in the European Commission. This is a personal blog and does not represent the views of the European Commission.