Artificial intelligence: how it impacts human rights and what we should do about it

Sherif Elsayed-Ali
May 25, 2017


It was a pleasant 21 degrees in New York when computers defeated humanity. Or so many people thought.

That Sunday in May 1997, Garry Kasparov, a chess prodigy, Grandmaster and world champion, was beaten by Deep Blue, a rather unassuming black rectangular computer developed by IBM. In the popular imagination, it seemed like humanity had crossed a threshold — a machine had defeated one of the most intelligent people on the planet at one of the most intellectually challenging games we know. The age of AI was upon us.

Or perhaps not.

What’s artificial intelligence?

While Deep Blue was certainly an impressive piece of technology, it was no more than a supercharged calculating machine. It had no intelligence to speak of beyond its ability to play chess. It was very, very good at playing chess but absolutely hopeless at anything else. Deep Blue is what’s called an artificial narrow intelligence (ANI) — a machine that’s very good at doing one specific thing. We’ve had them for decades: if you went to school in the 80s or 90s, you probably had a pocket calculator.

The calculator has, in its own way, a very rudimentary form of artificial intelligence. It excels at making, in a matter of seconds, complex calculations that would take the average person minutes or even hours. ANI is everywhere around us, from Siri on iPhones to Google’s suggested searches, from the GPS in cars to Nest thermostats. These are much more sophisticated AIs than a pocket calculator, but they still have a very narrow range of capabilities.

In the past couple of years, you have probably heard of toothbrushes with artificial intelligence, AI-powered dolls and other consumer products that have been endowed with the magic of AI, supposedly becoming better and smarter. The truth is that most of this is a marketing fad. These so-called AI features don’t add any real value: a regular toothbrush is perfectly adequate for cleaning your teeth.

But look beyond the gimmicks, and you will find that the increase in the power of AI applications has been explosive, with vast consequences for the world. This evolution of AI applications has been enabled by the technological revolution that preceded it: data.

The tremendous amounts of data generated by the internet, mobile phones, connected systems and sensors (in travel, health, logistics, traffic, electricity networks, etc.) have supercharged a certain type of AI technology called deep learning. Deep learning is used by machines to analyse very large amounts of data to look for patterns and find meaning. The machine is “trained” on a large data set, and the more data it has, the more it refines its results.

This may sound abstract but the real-life applications are huge: by analyzing very large amounts of health records with AI technology, doctors can improve diagnostics; with data from cars, phones and urban sensors, cities can optimize traffic, reducing pollution and travel times; by analyzing demand on its servers and changes in temperatures, a company can save millions of dollars on cooling and electricity in its data centers, simultaneously reducing its costs and environmental impact; by analyzing satellite data, countries can anticipate crop shortages and predict deforestation. This is all possible because of computers’ ability to process very large data sets and make sense of them, a task well beyond human ability.

But we are nowhere near the AIs of science fiction movies. The super-smart AIs, sometimes depicted as good (Data in Star Trek, the Oracle in The Matrix) or bad (Skynet in Terminator, or almost every other AI in The Matrix), are examples of Artificial General Intelligence (AGI) — that is, AI that is as intelligent as, or more intelligent than, humans across the whole variety of tasks that humans can do.

To have AGI, a machine would need to be able to fluently speak and understand human language, come up with a dinner recipe with a few ingredients, look at a map and find an efficient path from point A to point B, engage in a meaningful conversation with a stranger, and watch a silent movie and be able to describe what’s happening in it. We still firmly live in a world of ANIs, extremely efficient programs that are very good at doing one thing. Experts disagree on whether AGI could be achieved in the next 10, 50, 100 years, or ever — but they all agree it will be hard to get there.

The good, the bad, and the ambiguous

Like any technology, AI has good and harmful applications. AI that helps reduce power consumption in a data center will almost always have a positive social impact. An autonomous swarm of armed military drones is unlikely to help humanity much (even if you don’t think autonomous weapons are an inherently bad idea, remember Murphy’s Law: things will go wrong).

In between these two lie most AI applications: they can have positive or negative impacts on human rights, depending on how they are developed and used. Let’s look at a real-life example: predictive policing.

It already exists in some countries. Various US and UK police forces use software that tries to predict when and where crime might be committed, so they can allocate more resources to crime hotspots. In theory, that’s a good idea. The police, like all organisations, have to make choices and prioritize the use of their resources — and having police officers ready to respond where a crime is about to occur is certainly better than having them patrolling the other end of the city. Or is it?

We may not have Precogs like in Minority Report, but predictive policing is already here

The problem starts even before crime-prediction software is used — it starts with a problem still plaguing the data revolution: bias.

Here is a scenario of how an existing problem can quickly get much worse. While it’s hypothetical, it’s representative of real-world conditions:

A city wants to introduce predictive software to help tackle crime rates and make better use of its frontline police force. Crime has been high on the political agenda and the policing budget has been cut in real terms due to ongoing austerity measures. Introducing predictive policing is attractive: it’s new, appears to be innovative and high-tech, and brings the promise of optimizing resource use AND reducing crime rates, all for a relatively affordable price. The appeal is obvious.

The police force contracts a tech company that has spent years developing predictive policing software, although it has never worked in this city before. The first thing the company needs is a lot of historical data on crime in order to train its algorithm to start making predictions. It asks the police force, which supplies it with 15 years of data on arrests and crime, classified by type of crime, date and time, location and conviction rate, among other related data.

The company uses the data and the algorithm starts churning out predictions. Predictably (sorry), it directs police officers to existing crime hotspots, but because it does so more systematically and predicts the timing of crime better, it leads to higher rates of crime detection, arrests and convictions. The first pilot is a success and political leaders are pleased. The police force invests in a multi-year contract with the company and neighbouring police forces start to take notice. Predictive policing is the flavour of the day.

The data problem

But here’s the catch: the city has a history of over-policing certain ethnic and religious minorities and inner-city areas. Allegations of discrimination in policing and sentencing have existed for decades, and in recent years several authoritative studies have shown that ethnic minorities are more likely to be arrested and, on average, have higher conviction rates. Efforts to deal with this issue have had some, but limited, success. Politicians and police leadership have said that tackling the problem is a priority and, in fact, the move to predictive policing was seen as a way of removing human bias — after all, algorithms do not have feelings, preconceptions or biases of their own.

But the algorithm was trained on biased data. The areas where more crime had been recorded in the past also happened to be the parts of the city with a higher concentration of ethnic and religious minorities. The algorithm started predicting more crime in these areas, dispatching more frontline police officers, who made more arrests. The new data was fed back into the algorithm, reinforcing its decision-making process. Its first predictions turned out to be accurate, as indeed there was crime to be stopped, which made it refine its focus, continuing to send a disproportionate amount of police resources to these parts of the city. The resulting higher rates of crime detection, arrests and convictions in fact mask increasingly discriminatory practices.

The result is a feedback loop that can only be escaped if the historical and ongoing bias is corrected.
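The dynamics of that loop can be made concrete with a toy simulation. This is a minimal sketch under invented assumptions: two districts with identical underlying crime rates, one of which starts with a larger arrest record because of past over-policing. None of the numbers are real.

```python
import random

# Hypothetical illustration of the feedback loop described above.
# Districts A and B have IDENTICAL true crime rates, but A starts with
# more recorded arrests because of past over-policing.
TRUE_CRIME_RATE = {"A": 0.05, "B": 0.05}
recorded_arrests = {"A": 500, "B": 300}   # biased historical record
random.seed(0)

for year in range(10):
    total = sum(recorded_arrests.values())
    for district in recorded_arrests:
        # The "predictive" model allocates patrols in proportion to past arrests...
        patrols = int(1000 * recorded_arrests[district] / total)
        # ...and more patrols mean more of the (identical) crime gets detected.
        new_arrests = sum(random.random() < TRUE_CRIME_RATE[district]
                          for _ in range(patrols))
        recorded_arrests[district] += new_arrests

print(recorded_arrests)
# District A keeps receiving the lion's share of patrols and arrests, so the
# data keeps "confirming" the allocation: the disparity never corrects itself.
```

Even though the two districts are identical, the model locks in the original disparity, because every round of predictions generates exactly the data needed to justify the next round.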

Such data bias and discriminatory automated decision-making problems can arise in numerous other current and potential AI applications. To name a few: decisions on health insurance coverage, mortgage and loan applications, shortlisting for jobs, student admissions, and decisions on parole and sentencing. The effect of discriminatory AI on human rights can be wide-ranging and devastating for individuals.

The transparency problem

A major problem with the use of AI in automated decision making is the lack of transparency. This is because deep learning, which has exploded in importance in the last few years, often uses neural networks, a technique loosely inspired by the brain. In neural networks, millions of computational units are stacked in dozens of layers that process large data sets and come up with predictions and decisions. They are, by their nature, opaque: it is usually not possible to pinpoint exactly how the AI came up with a specific output.
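To see why pinpointing a reason is so hard, consider a deliberately tiny toy network. This is only a sketch: the input features, the weights and the “applicant” are random placeholders, not any real model or data set.

```python
import numpy as np

# A toy two-layer "neural network": stacked layers of weighted sums and
# non-linearities. In a real model the weights are learned from data and
# number in the millions; here they are random placeholders.
rng = np.random.default_rng(42)
W1 = rng.normal(size=(4, 8))    # layer 1 weights
W2 = rng.normal(size=(8, 1))    # layer 2 weights

def predict(features):
    hidden = np.maximum(0, features @ W1)        # ReLU activation
    return 1 / (1 + np.exp(-(hidden @ W2)))      # a score between 0 and 1

applicant = np.array([0.3, 0.9, 0.1, 0.5])       # hypothetical input features
print(predict(applicant))
# The output is a single score. Nothing in W1 or W2 says which input feature
# drove it, or why; scale this up by a factor of a million and you have the
# black-box problem described here.
```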

This is a serious problem for accountability. If you can’t figure out why mistakes happened, you can’t correct them. If you can’t audit decisions, problematic outcomes remain hidden. Financial audits help reduce accounting errors and financial misconduct, but you can’t audit a deep-learning AI in the same way. It’s a black box.

On the upside, many AI scientists, companies and policy makers take this problem seriously, and there are various attempts to develop explainable AI. This is how DARPA, the US Defense Advanced Research Projects Agency, describes the aims of its explainable AI program:

  • Produce more explainable models, while maintaining a high level of learning performance (prediction accuracy); and
  • Enable human users to understand, appropriately trust, and effectively manage the emerging generation of artificially intelligent partners.

Source: DARPA

Human rights solutions

Here, I outline a few potential ways to tackle some of the human rights challenges that the use of AI poses.

1. Correcting for bias: if we know that the data fed into an AI system carries a risk of bias, then we should correct for it first. The first part is recognizing that there is bias — this is often not something that the people responsible for the data will readily admit, either because they don’t believe it is biased or because it would be embarrassing to admit.

Either way, correcting for bias should not be optional: it needs to be mandatory in any AI system that affects individual rights and relies on data about individuals. The first step is testing the data sets for bias: racial, religious, gender and other common biases should be routinely tested for, as well as more use-specific biases that might be particularly relevant in specific systems, such as educational achievement, location or profession. The second step is to correct for the bias; this can be complicated and time-consuming, but it is necessary to prevent AI systems from becoming enablers of discriminatory practices.
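As a concrete illustration of the routine testing described in point 1, here is a minimal sketch of a group-level outcome check. The records, the group labels and the 80% ratio threshold (a common rule of thumb for flagging possible disparate impact) are illustrative assumptions, not a complete fairness audit.

```python
from collections import Counter

# Hypothetical records: (group label, decision outcome) per person, where
# outcome 1 might mean "flagged", "arrested" or "rejected".
records = [
    ("group_a", 1), ("group_a", 0), ("group_a", 1), ("group_a", 1),
    ("group_b", 0), ("group_b", 0), ("group_b", 1), ("group_b", 0),
]

positives, totals = Counter(), Counter()
for group, outcome in records:
    totals[group] += 1
    positives[group] += outcome

rates = {g: positives[g] / totals[g] for g in totals}
highest = max(rates.values())

for group, rate in sorted(rates.items()):
    ratio = rate / highest
    status = "REVIEW" if ratio < 0.8 else "ok"   # 80% rule of thumb
    print(f"{group}: outcome rate {rate:.2f}, ratio to highest {ratio:.2f} [{status}]")
```

A check like this doesn’t prove or disprove discrimination, but running it routinely on both the training data and a system’s outputs is the kind of mandatory testing argued for above.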

2. Making the use of AI accountable: we should expect the same accountability from an institution when it uses AI as when it uses human workers. For example, if a government department green-lights a large infrastructure project and halfway through it turns out that costs have significantly overrun, we would be able to go back and look at how the decision was made. Were the proper checks made? Was there an adequate bidding process? Were costs calculated accurately? If costing is identified as the problem, we can go to the team that made the calculations and audit the process to find where the errors came from. We can identify whether there were errors in the formulas used or the assumptions made, and correct them.

With AI, the transparency problem means that automated decision-making cannot be interrogated in the same way. This should not affect institutional accountability: a company or public institution using AI that makes discriminatory decisions affecting individual rights should be responsible for remedying any harm. Such institutions should also regularly audit decisions for signs of discriminatory behaviour.

With AI, as with other digital technologies, developers also have a responsibility to respect human rights — they must ensure their technology is not inherently discriminatory and that they do not sell it to users who could use it for purposes that are discriminatory, or otherwise harmful to human rights.

3. Not using AI where there is a risk of harm and no effective means of accountability: this applies to AI applications that may have a direct impact on the rights of individuals but are not inherently harmful. (Fully autonomous weapons systems are an example of an inherently harmful application: giving robots the ability to autonomously and deliberately kill people or destroy infrastructure is an insanely bad idea, so I am parking that to one side.)

Whether it’s predicting crime or approving mortgages, if decisions are made by an AI system that doesn’t have an effective means of accountability (as in point 2 above), it shouldn’t be used. Is this too radical? Hardly; we would not accept accounting systems that do not allow auditing, or judicial systems that don’t allow for appeals or judicial review. Transparency and accountability are essential for respecting human rights.

I am not advocating that all AI applications should pass this test. Many commercial and non-commercial AI applications will not need to because their impact on individual rights is either remote or negligible.

Many of the issues I highlighted, such as data bias and accountability, are similarly applicable to automated systems that don’t use deep learning, with the key difference that those systems are, at least theoretically, more transparent.

These are neither comprehensive nor tested solutions to the problems that the use of AI poses for the protection of human rights; my aim is to highlight some of the challenges and possible solutions — and start a conversation!

Please agree, disagree and feel free to correct any mistakes in the comments.

Sources and references: I’ve included links to sources from which I used specific information or that explain some of the concepts further. There is a ton of great writing on AI out there; here are a couple of recommendations if you want to dig a bit deeper:

  • Wait But Why’s 2-part in-depth piece on AI is great popular science writing on the science and philosophical questions around AI.
  • Cathy O’Neil’s book Weapons of Math Destruction is a fantastic investigation of big data and algorithmic decision making.
