AI and the Social Sciences, Part I

Danil Mikhailov
Jan 7

A few years ago my background in social sciences and the humanities (anthropology, philosophy, sociology, what have you…) was thought unusual, to say the least, in the tech industry. Today, that background is all people want to talk about. Attitudes are changing, from Silicon Valley to London’s Silicon Roundabout.

The reason is clear: Artificial Intelligence (AI) techniques, such as machine learning, are starting to have a significant impact at the societal level. Machine learning algorithms of different types are now widely used in commercial settings, from optimising parcel delivery to reviewing job applications in the recruitment industry to influencing what information we consume via social media. They are also now being widely adopted in the public sector. So much so that the Mayor of New York City, Bill de Blasio, has just launched a task force reviewing how algorithms impact decision-making in New York’s public services.

While there has been a lot of debate in the media recently about the future of AI between optimists, such as Demis Hassabis of Google DeepMind, and equally famous pessimists, like Elon Musk of Tesla and SpaceX, this debate rather misses the point.

The debate deals with a future, hypothesised generation of AI algorithms that could modify their own source code, evolving beyond the original intention of the programmer (and then continuing to learn exponentially until they take over the world, maybe). This is what is called Seed AI, a type of Artificial General Intelligence. The problem is that a stable Seed AI is considered by many to be impossible under currently formulated laws of mathematical logic, due to what is known as the bootstrap paradox. For an excellent, though slightly technical, explanation of Seed AI and this paradox, see the article by Yampolskiy (2015).

The key problem is that the media’s focus on the coolness of Seed AI and Artificial General Intelligence obscures a more important and much more pressing debate that needs to be had: the current, much dumber generation of AI that powers Amazon’s Alexa or Google’s image search is being created with too little oversight and then deployed too fast. This Narrow AI may not be able to bring about the legendary Singularity, but it is already affecting some sensitive areas of our lives in ways nobody intended.

One brewing problem is that Narrow AI algorithms are too often trained on unrepresentative datasets, leading to biased results. For example, using unrepresentative datasets of images to train facial recognition AI leads to a whole range of machines, from automatic soap dispensers to airport e-Passport readers, that fail to recognise darker skin tones, as pointed out by Joy Buolamwini of the MIT Media Lab.
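
To make this concrete, here is a minimal, hypothetical sketch in Python using scikit-learn and entirely synthetic data (none of it drawn from the real systems mentioned above). A classifier is trained on a dataset in which one group supplies only 5% of the examples, and its accuracy is then measured separately for each group:

```python
# Hypothetical sketch: how an unrepresentative training set can produce a
# model that looks accurate overall yet fails the underrepresented group.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_group(n, shift):
    # Synthetic 2-feature binary classification data; `shift` moves both the
    # feature distribution and the true class boundary for this group.
    X = rng.normal(loc=shift, scale=1.0, size=(n, 2))
    y = (X[:, 0] + X[:, 1] > 2 * shift).astype(int)
    return X, y

# Training data: group A dominates (95%); group B is barely represented (5%).
Xa_train, ya_train = make_group(1900, shift=0.0)  # group A
Xb_train, yb_train = make_group(100, shift=3.0)   # group B (underrepresented)
X_train = np.vstack([Xa_train, Xb_train])
y_train = np.concatenate([ya_train, yb_train])

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Evaluating on balanced, per-group test sets exposes the gap that a single
# aggregate accuracy figure would hide.
Xa_test, ya_test = make_group(1000, shift=0.0)
Xb_test, yb_test = make_group(1000, shift=3.0)
print("Accuracy for group A:", accuracy_score(ya_test, model.predict(Xa_test)))
print("Accuracy for group B:", accuracy_score(yb_test, model.predict(Xb_test)))
```

In a run like this, the model scores well for the majority group but little better than chance for the underrepresented one, while the overall accuracy figure would still look perfectly respectable. That is exactly how this kind of bias goes unnoticed.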

Other notorious cases of unintended consequences of Narrow AI include algorithms committing all resources to affluent areas when prioritising road maintenance investment in Boston, as described by Barocas and Selbst (2016) in their excellent paper on the impact of algorithms and Big Data; biases creeping into the automation of sentencing and parole recommendations, influencing decisions over an individual’s freedom or continued incarceration, as analysed in this New York Times editorial; and algorithms not offering same-day parcel delivery to black-majority areas of key US cities while offering it to neighbouring white-majority areas, as explained in this Bloomberg article by Ingold and Soper (2016).

These examples touch on some deep social fault lines around race, gender, economic equity and social justice, all inadvertently aggravated by the unthinking application of Narrow AI. And the above deals only with the unintended consequences of algorithmic bias. To that can be added legitimate concerns about other aspects of the AI revolution, such as fake news, manipulation, monopoly power, lack of regard for privacy and the addictiveness of the interfaces the algorithms hide behind. The list could go on, with plenty of grist for the mill of AI pessimists.

The AI optimists would reply that focusing only on what has gone wrong and what conceivably might go wrong is unfair, as it fails to account for the other side of the equation: the value when things go right and the untapped potential of future interventions. Indeed, it could be argued that not using the current generation of Narrow AI is itself unethical, given the improvements it could deliver.

This argument is particularly powerful when you consider the lives already starting to be saved in healthcare, for example through better identification of breast tumours by applying machine learning algorithms to mammograms. How many more lives could be saved if these isolated experiments could be scaled up across all of our health services? And how do we balance the opportunity cost of not doing this against the unintended consequences of poorly designed or deployed Narrow AI?

A neutral pragmatist might add that whichever side of the argument you choose to support, it is worth acknowledging that AI is one of those rare world-transforming technologies, like the printing press, gunpowder or electricity, which simply cannot be kept at bay precisely because it has so much tempting potential. If America decided to stop development, China would continue. If China decided to cease, India would take up the competitive advantage. AI is the apple in our garden of Eden.

The real question is: how can we take account of the social implications of this technology to create AI algorithms in a more ethical and socially responsible way from the start? This is the big question we should all be grappling with. As I hope to demonstrate over the next few posts, social scientists have the tools to help answer it, but only if they work in close collaboration with software engineers and data scientists who are creating the algorithms in the first place.

Danil Mikhailov
Sociologist of technology, interested in the social impact of AI. Currently working as Head of Data & Innovation at the Wellcome Trust.