The evolution of AI research and the study of its social implications

Morgan R. Frank
Published in MIT MEDIA LAB
10 min read · Mar 8, 2019

This blog post summarizes the key findings of our new article, The evolution of citation graphs in artificial intelligence research, published in Nature Machine Intelligence. The research team includes myself, Dashun Wang, Manuel Cebrian, and Iyad Rahwan. We analyze the last 60 years of publications within the booming research area of artificial intelligence (AI). We find that scientific impact within the AI research community is becoming less diverse, with industry research institutions overtaking academic ones as the central hubs of AI research. This observation is concerning because social science research is less likely to reference industry-authored AI research. This dynamic may allow the social and societal implications of AI research to go unnoticed prior to deployment.

While general human-level intelligence remains elusive, AI has excelled in traditionally human domains of cognitive ability, including vision, pattern recognition, and language. Beyond new capabilities and productivity, how will the continued deployment of AI impact employment, cooperation, happiness, and society?

AI’s history as a narrative tool for social implications

Why consider the social implications of artificial intelligence (AI)? Across time and culture, humanity has maintained a mythical concept of artificial human intelligence — although not always in the form of algorithms and machines. Rather, humanity’s quest for AI began with “an ancient wish to forge the gods” that has persisted from mythology, folklore, and science fiction to today’s technology. Yet, both mythological “AI” and technological AI highlight important social questions for society.

Allowing for this generalized notion of artificial human intelligence, several stories from folklore and mythology use AI to highlight important social and cultural issues. As an example, Hephaestus — the Greek god of metalworking and stone masonry — created mythical golden robots, who could think for themselves but acted as servants on Mount Olympus. Jewish folklore maintains several stories of humanoid golems made from clay. The most famous is the Golem of Prague, who went into a destructive rage after breaking the cultural taboo of working on the Sabbath. In Mary Shelley’s Frankenstein, the monster is shunned by its creator, thus damaging the monster’s social nature. Combined with the monster’s later insistence on his right to a partner, this story acts as a commentary on the ethics of AI and humanity’s paternal obligation to the intelligence it creates.

While these examples may seem distant from today’s AI, they highlight humanity’s persistent quest to create intelligence and some of the interesting social questions that lie therein. Accordingly, the ethical, social, and societal dynamics around AI adoption contribute to the total impact of AI that we face today. But are social scientists able to keep pace with the recent boom in AI research?

Recent AI and its social implications

More recent AI has dramatically matured from myths and parables to code and algorithms. Accordingly, AI research has not remained any one thing, and the mimicry of human intelligence has adapted with new technology. Recent AI advances include voice-activated assistants (e.g., Amazon Alexa and Apple’s Siri), autonomous vehicles, and mastery of Go with AlphaGo.

Autonomous warfare may reduce the human cost of war. How would this alter our willingness to engage in conflict?

Modern AI adoption is particularly noteworthy considering its implications for the future of work, the impact of high-frequency trading on the stock market, the practice of medicine with data-driven diagnostics and computer vision, the regulation of transportation, the future of autonomous warfare, and the governance of society. On one hand, AI has the positive potential to reduce human error and human bias. As examples, AI has nudged judges toward more equitable bail decisions, AI can assess the safety of neighborhoods from images, and AI can improve the hiring of board directors while reducing gender bias. On the other hand, several recent examples suggest that AI technologies are frequently deployed without an understanding of the social biases they may possess or the social questions they raise. Consider the recent reports of racial bias in facial recognition software, the ethical dilemmas arising from autonomous vehicles, and income inequality driven by computer-based automation.

Check out https://www.ajlunited.org/ to learn more about bias in facial recognition and the Algorithmic Justice League.

These examples highlight the diversity of today’s AI technology and the breadth of its application, an observation that has led some to recently characterize AI as a general purpose technology — much like electricity or running water. That is, AI is cheaply interwoven into the everyday, yielding improvements in everyone’s quality of life. As AI becomes increasingly widespread, researchers and policy makers must balance the economic and social implications of AI adoption.

Accordingly, we ask:

How tightly connected are the social sciences and cutting-edge AI research?

We investigate this question through the Microsoft Academic Graph, which is “a heterogeneous graph containing scientific publication records, citation relationships between those publications, as well as authors, institutions, journals, conferences, and fields of study.” In particular, we focus on the booming area of AI research and its related fields. Paper production in Computer Science (CS) has grown dramatically from the 1950s to today, placing CS as the fifth most productive academic field in 2017. This growth within CS coincides with a transition in focus from databases, hardware, and data science to AI, machine learning, pattern recognition, computer vision, and natural language processing. Accordingly, we refer to these latter CS subfields as AI and related fields.

The strength of association between CS subfields from the references of publications in each field. Line and arrow widths correspond to rates of references made between publications in each subfield. Network clustering identifies collections of related research areas within CS. Out of all CS subfields, AI and its related fields have produced the most publications in recent decades.

Which fields are important to AI research?

External fields reference AI research for a number of reasons. Some fields, such as engineering or medicine, reference AI research because they use AI methods for optimization or data analysis. Other fields, like philosophy, reference AI research because they explore — for example — its moral or ethical consequences for society. Similarly, AI researchers reference other fields, such as mathematics or psychology, because AI research incorporates methods and models from these areas. AI researchers may also cite other fields because they use them as application domains to benchmark AI techniques.

We measure the strength of association between academic fields using two measures. First, we consider the share of references made by publications in one field to publications in another field. Although reference share is a common metric in bibliometric studies, increasing reference share from other fields toward AI may simply reflect the exponential growth of AI paper production over time. Therefore, we also consider a new measure, reference strength, which additionally controls for the cumulative number of published papers in the referenced field of study.
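The two measures can be sketched in a few lines of Python. This is a minimal illustration, assuming a toy list of (citing_field, cited_field) reference records and toy paper counts; the exact normalization used in the article may differ.

```python
# Toy reference records: (citing_field, cited_field) — hypothetical data.
references = [
    ("biology", "ai"), ("biology", "ai"), ("biology", "math"),
    ("sociology", "ai"), ("sociology", "sociology"),
]

# Cumulative number of published papers per cited field (toy values).
papers_published = {"ai": 1000, "math": 200, "sociology": 50}

def reference_share(refs, citing, cited):
    """Share of the citing field's outgoing references that point at the cited field."""
    outgoing = [r for r in refs if r[0] == citing]
    if not outgoing:
        return 0.0
    return sum(1 for r in outgoing if r[1] == cited) / len(outgoing)

def reference_strength(refs, citing, cited, paper_counts):
    """Reference share normalized by the cited field's cumulative paper count,
    to control for the exponential growth of (e.g.) AI paper production."""
    return reference_share(refs, citing, cited) / paper_counts[cited]
```

Under this toy data, biology devotes two of its three references to AI, so its reference share toward AI is 2/3; dividing by AI's 1,000 cumulative papers gives its reference strength.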

How AI research makes reference to other academic fields. Fields receiving notably high rates of citations from AI research are highlighted with solid lines.

Prior to around 1980, AI research made frequent reference to psychology, in addition to CS and mathematics. Controlling for the paper production of the referenced fields, we find that early AI’s references towards philosophy, geography, and art were comparable to the field’s strength of association with mathematics. This suggests that early AI research was shaped by a diverse set of fields. However, soon after 1987, AI research became increasingly computational with a stronger reliance on mathematics and CS.

Which fields keep pace with AI research?

Unsurprisingly, computer science, which includes all of the AI-related subfields in our analysis, steadily increased its reference share to AI papers throughout the entire period of analysis. Surprisingly, mathematics experienced a notable increase in reference share to AI only after 1980, which marked the first International Conference on Machine Learning (ICML).

Using computer vision to identify buildings from satellite imagery.

Meanwhile, several fields that are not often cited in today’s AI research played an important role in the field’s development. However, many of these fields did not reciprocate this interest. For example, psychology was relatively important to early AI research, but psychology did not reciprocate this interest at any point from 1990 onwards. Instead, philosophy, art, engineering (e.g., robotics), and geography (e.g., computer vision for satellite imagery) increased their reference share to AI papers up to 1995. When we control for AI paper production over time, we observe decreasing reference strength towards AI from all external academic fields. This suggests that other fields struggle to keep pace with the growth of AI paper production in recent decades. In particular, areas of social science — including political science, economics, and sociology — make relatively few references to AI publications relative to the volume of AI research.

How other academic fields make reference to AI research. Fields exhibiting notably high rates of citations towards AI research are highlighted with solid lines.

Why are other fields falling behind? The consolidation of AI research

This result may in part be explained by the increased complexity of AI-related research that is not relevant to the study of other scientific disciplines. But the AI research community has also undergone dramatic changes in the last 60 years that may give further insight.

Firstly, where are AI researchers publishing their research? In general, academic fields have varying preferences for publication in journals versus conferences and for general/interdisciplinary science research versus topic-specific research. Philosophies abound to justify one preference over the other. On one hand, general science publications — which are usually academic journals — have the potential to reach a wider audience but may incentivize research breadth instead of research depth. On the other hand, topic-specific publication venues, such as conferences, can foster competition to advance major problems within the research area, though possibly at the expense of knowledge dissemination across research communities.

The PageRank centrality of AI publication venues over time. Solid lines highlight the most prominent venues for AI publications today, which are almost entirely topic-specific conferences.

When we examine the references made between AI publications as a citation network, we can identify prominent publication venues using network measures of centrality (i.e., citation PageRank). The prominence of topic-specific AI conferences has been on the rise since around 1990. The one notable exception is the journal IEEE Transactions on Pattern Analysis and Machine Intelligence (IEEE TPAMI). While these conferences promote tightly knit research communities, they can be exclusionary to scientists from other fields. For example, NeurIPS 2018 (formerly NIPS) — one of the largest AI conferences — sold out of registration spots in under 15 minutes, making attendance difficult even for active AI researchers, let alone interested scientists from other fields. As a counterpoint, AI researchers frequently make their research freely available on the Internet through open-access web repositories, such as the arXiv.
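The centrality measure behind this analysis can be sketched with a simple power-iteration PageRank over a directed citation graph. This is an illustrative toy implementation with hypothetical venue names, not the pipeline used in the article (which operates on the full Microsoft Academic Graph).

```python
def pagerank(edges, damping=0.85, iters=100):
    """Power-iteration PageRank over a directed citation graph.
    edges: list of (citing_venue, cited_venue) pairs."""
    nodes = {n for e in edges for n in e}
    out_links = {n: [] for n in nodes}
    for src, dst in edges:
        out_links[src].append(dst)
    n = len(nodes)
    rank = {node: 1.0 / n for node in nodes}  # start from a uniform distribution
    for _ in range(iters):
        new = {node: (1 - damping) / n for node in nodes}
        for src in nodes:
            targets = out_links[src]
            if targets:
                # Distribute the citing venue's rank among the venues it cites.
                share = damping * rank[src] / len(targets)
                for dst in targets:
                    new[dst] += share
            else:
                # Dangling node: spread its rank uniformly across all venues.
                for node in nodes:
                    new[node] += damping * rank[src] / n
        rank = new
    return rank

# Toy citation network among venues (hypothetical edges):
edges = [("ICML", "NeurIPS"), ("CVPR", "NeurIPS"), ("NeurIPS", "ICML")]
ranks = pagerank(edges)
```

In this toy network, the venue cited by the most (and most central) neighbors accumulates the highest PageRank, which is the intuition behind identifying prominent publication venues.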

Secondly, who is driving AI research? Publication in academic conferences has been shown to boost the scientific impact of prestigious research institutions. So, is there preferential scientific impact within AI research, and, if so, then who benefits?

Diversity of scientific impact is decreasing within the AI research community.

In general, we observe a steadily increasing tendency towards preferential referencing within the AI research community. That is, AI research institutions that have already accumulated many citations for their existing AI publications are more likely to receive further citations for their AI research in the future. Preferential referencing enables a “winner take all” dynamic that can impact the diversity of scientific contributions within a field. Accordingly, we find that the diversity of scientific impact within AI is decreasing on aggregate in terms of the number of AI authors, papers per institution, and citations. This trend is rather specific to the AI research community, as most other academic fields actually exhibit increasing diversity of scientific impact across institutions.
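One simple way to quantify a "winner take all" dynamic is a normalized entropy of citation shares across institutions. This is an illustrative sketch of the idea, not necessarily the diversity measure used in the article.

```python
import math

def citation_diversity(citations_per_institution):
    """Normalized Shannon entropy of citation shares across institutions:
    1.0 means citations are spread evenly; values near 0 indicate a
    'winner take all' concentration on a few institutions."""
    total = sum(citations_per_institution)
    shares = [c / total for c in citations_per_institution if c > 0]
    if len(shares) <= 1:
        return 0.0
    entropy = -sum(p * math.log(p) for p in shares)
    return entropy / math.log(len(shares))  # normalize to [0, 1]
```

Under this measure, a field where four institutions each hold a quarter of all citations scores 1.0, while a field where one institution holds nearly all citations scores close to 0, so a declining score over time would indicate exactly the concentration of impact described above.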

Which institutions are at the center of AI research? Prior to 1990, the most prominent research institutions were academic, including the Massachusetts Institute of Technology, Stanford University, and Carnegie Mellon University, and included only a few industry-based research institutions, such as Bell Labs and IBM. However, the late 1980s again marks a transition point that reshaped the field. While universities dominate scientific progress across all academic fields, industry-based organizations, including Google and Microsoft, are increasingly central to modern AI research while the prominence of academic institutions is on the decline.

Academic institutions are losing prominence as industry-based institutions assume a central role in AI research.

(Note: the recent rise of Chinese institutions within AI research is notably absent from this long-time analysis. However, the data indeed highlights the growing prominence of China-based AI research in more recent years.)

Ranking the prominence of AI research institutions highlights the growing impact of China-based AI research in recent years.

What does this mean for AI and social science?

It does not look good.

This transition towards industry is alarming for the study of the social and societal dynamics of AI technologies. Social science research is less likely to reference AI publications whose authors have industry-based affiliations. Furthermore, AI research exhibits decreasing reference strength towards the social sciences. Together, these observations suggest that the gap between these research areas will continue to grow. The fields that study social bias, ethical concerns, and regulatory challenges may remain unaware of new AI technology, especially when it is deployed in industry.

While our interpretation of these results is somewhat speculative, we believe our observations highlight an important dynamic within the AI research community that merits further investigation. These findings may help explain why recent AI technologies have revealed important (and largely unintentional) social consequences only after deployment, such as racial bias in facial recognition software, the ethical dilemmas arising from autonomous vehicles, and income inequality in the age of AI. If current trends persist, then it may become increasingly difficult for researchers in any academic field to keep track of cutting-edge AI technology.

Closing the gap

The gap between social science and AI research means that researchers and policy makers may be ignorant of the social, ethical, and societal implications of new AI systems. While this gap is concerning from a regulatory viewpoint, it also represents an opportunity for researchers. The academic fields that typically inform policy makers on social issues have the opportunity to fill this gap. While our study is a step towards this goal, further work may explicitly quantify the social and societal benefits and consequences of today’s AI technology and identify the mechanisms that limit communication between research domains.


Morgan R. Frank

PhD candidate at MIT's Media Laboratory. Using computational methods to address social problems at the Scalable Cooperation research group.