Interdisciplinary work in AI development and technological implementation

Jacob Heinricher
ANTH374S18
Feb 22, 2018

One of the main ideas we talked about this week is the inherent bias, in terms of which questions are asked and how the answers are framed, that affects any discipline. We also talked about the value of interdisciplinary work in combating these biases and producing more complete and relevant "science." The development of artificial intelligence and the increased implementation of automated systems promise to keep changing the world in drastic ways, and yet, as the article below outlines, there is no agreed-upon way to assess the impact of these technologies on human populations.

Rather than deploying new systems and adjusting to issues as they arise, or relying on incorporating desired values into the design, the authors of this article argue for a new, "social-systems" approach.

"A practical and broadly applicable social-systems analysis thinks through all the possible effects of AI systems on all parties. It also engages with social impacts at every stage — conception, design, deployment and regulation."

For example,

"As a first step, researchers — across a range of disciplines, government departments and industry — need to start investigating how differences in communities' access to information, wealth and basic services shape the data that AI systems train on. Take, for example, the algorithm-generated 'heat maps' used in Chicago, Illinois, to identify people who are most likely to be involved in a shooting. A study published last month indicates that such maps are ineffective: they increase the likelihood that certain people will be targeted by the police, but do not reduce crime.

A social-systems approach would consider the social and political history of the data on which the heat maps are based. This might require consulting members of the community and weighing police data against this feedback, both positive and negative, about neighborhood policing. It could also mean factoring in findings by oversight committees and legal institutions. A social-systems analysis would also ask whether the risks and rewards of the system are being applied evenly — so in this case, whether the police are using similar techniques to identify which officers are likely to engage in misconduct, say, or violence."

This is just one of many examples of how an interdisciplinary, "social-systems" approach to developing and implementing new technologies can reveal blind spots in the perspective of any single discipline, and can help ensure that these advancements are built and used in ways that are helpful and grounded in real-world experience. This is also a perspective increasingly taken up by those in the AI community …

Crawford, Kate, and Ryan Calo. “There Is a Blind Spot in AI Research.” Nature: International Weekly Journal of Science, Nature, 13 Oct. 2016, www.nature.com/news/there-is-a-blind-spot-in-ai-research-1.20805.
