The ethicist in the machine

Should machines be held to a higher standard than humans — at least when it comes to questions of data ethics and bias?

Ethics in automated decision-making is not so much machines vs humans as machines based on human inputs. And it seems like wealthy white males are often the visible face of the tech while other human labour is hidden. Trevithicks Dampfwagen by William Felton, via Wikimedia Commons

We got asked this question about machines vs humans last month when presenting on “Killer Robots and How not to do Data Science” at the Strata data conference in London. Fellow volunteer Kate Vang and I were talking about the work DataKind UK has been doing on developing and applying ethical principles in our pro bono data science projects with charities. We answered the question live, but I wanted to pull together my thoughts on this. (This is in the context of automated decision systems as they are being used right now; it is not about any future AI general superintelligences and their non-paperclip-maximising superethics!)

My one-word answer is still Yes. When it comes to what are sometimes called automated decisions, where algorithms such as machine learning models or artificial intelligence output rulings with social impact and potential ethical issues, we do need to be more cautious. These systems are built on fallible human inputs, but there are more unknowns, there’s the potential for greater harm, and the decisions they produce can be harder to challenge or overturn.

One way to think about it is the difference between cars and bikes — you can hurt people with both, but cars do more damage, to more people, more quickly. Cars’ power means that they magnify the consequences of any bad decisions that the humans nominally controlling them make — in a way that’s usually worse for the people outside than in. Drivers probably aren’t inherently less ethical than cyclists (notwithstanding some of my cycling experiences on London roads…), but we do apply more regulation and more restrictions to cars than we do to bikes.

Algorithmic decisions have more impact and spread further and faster

A human who has come to apply a biased decision-making rule (for example, that men are inherently better at STEM subjects) applies it one “subject” at a time. That’s not great, but the harm is limited. Machines are intended to be efficient and “decide” things like job candidate shortlists far more quickly, impacting hundreds or thousands of people. Models may also be carried over from one jurisdiction or company to another (“transfer learning”), so the impact spreads. We can also look at things that are only indirectly about ethics, like the “flash crash” trading events where algorithms all followed the same rules in a way that created a feedback loop and rapidly spread the initial damage, far faster than humans could have done. As Virginia Eubanks writes: “When a very efficient technology is deployed against a scorned out-group in the absence of strong human rights protections, there is enormous potential for atrocity.”
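
To make the feedback-loop point concrete, here is a toy Python simulation. Everything in it is invented for illustration (the number of agents, the size of the shock, the price rule); the only point is that identical automated rules, all reacting to the same signal at once, turn a small dip into a collapse within a handful of steps.

```python
# Toy, purely illustrative simulation of a flash-crash style feedback loop.
# All numbers are made up; the point is the dynamic, not the values.
n_agents = 1000
price, last_price = 100.0, 100.0

for step in range(6):
    if step == 0:
        sellers = 50  # a small initial shock
    else:
        # Every automated agent reacts to the same signal at the same moment.
        sellers = n_agents if price < last_price else 0
    last_price, price = price, price * (1 - 0.0005 * sellers)
    print(f"step {step}: {sellers:4d} sellers, price = {price:.2f}")
```

A room full of human traders following the same heuristic would at least react at different speeds and second-guess each other; the automated version has no such friction.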

In addition, setting up an automated decision-making system may lead to faster decisions per person, but it often requires considerable up-front time and expense, unlike just hiring some more people. This leads to sunk-cost decision-making and political face-saving, where even algorithmic systems that failed their pilot phase are rolled out more broadly: either the failure is not acknowledged at all, or it is excused as “teething troubles” that will supposedly be fixed with more data!

Models are data hungry — and discrimination hungry

One of the main reasons to use machines rather than humans is the supposedly impartial, formulaic approach to making decisions: the idea is that individuals with the same (relevant) characteristics should be treated identically in decisions about, say, benefits eligibility. Human decision-makers can be swayed by irrelevant factors such as race, or subtler things like how well-spoken someone is at an interview. The downside of using machine learning to make these decisions is that the “training” data is mostly historic, so the human biases are encoded. And they are not only encoded but enhanced. Machine learning is well known to be data-hungry, but it is also discrimination-hungry: a feature or combination of features (where someone lives + their job + their ethnicity) may have had only a marginal impact on the original human decisions, but an algorithm will learn it and can deepen a distinction that shouldn’t exist, “over-fitting” to biased human inputs in ways that can be very hard to remove or correct.
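
As a concrete illustration of that last point, here is a hedged sketch in Python with scikit-learn. The data are entirely synthetic and the feature names (skill, postcode_group) are invented; the idea is simply that a model fitted to historically biased decisions recovers the discriminatory weight on a proxy feature and will then apply it consistently, at scale, to everyone who comes after.

```python
# Synthetic, illustrative example: a model trained on biased historic decisions
# learns the bias as if it were a legitimate rule.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

skill = rng.normal(size=n)                   # a genuinely relevant feature
postcode_group = rng.integers(0, 2, size=n)  # an irrelevant proxy feature

# Historic human decisions: mostly driven by skill, with a small bias
# against postcode_group == 1 (a 0.3 shift on the log-odds scale).
logits = 1.5 * skill - 0.3 * postcode_group
hired = rng.random(n) < 1 / (1 + np.exp(-logits))

X = np.column_stack([skill, postcode_group])
model = LogisticRegression().fit(X, hired)
print("learned coefficients (skill, postcode_group):", model.coef_[0])
```

The fitted coefficient on the proxy sits close to the human bias that produced it, and once a hard decision threshold is applied, that small shift in score becomes a categorical accept/reject difference for borderline candidates; more flexible models can carve out even sharper distinctions on such features.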

This poor taxidermied AI (ok, ai for the case-sensitive) is very much showing how incorporating the results of human best efforts can be problematic. And we wouldn’t trust it to make ethical decisions about who gets employed. By Esv — Eduard Solà Vázquez CC BY-SA 3.0, from Wikimedia Commons

Machines can’t do new

Machines are inherently limited in recognising that something is a new circumstance where their model won’t hold. Humans, faced with an example that doesn’t fit their experience, can fall back on some sort of overall aim or norm (“block posts that contain threats of violence”) even if it’s an example they have never seen before. Humans can learn from a single input: seeing one bad outcome can be enough to make you reconsider your methods entirely. Machines can be taught to weight more recent examples, but they are inherently built on a depth of accumulated evidence. That’s part of the deal: accuracy improves by looking at lots of examples, and this powerful machine memory can be stronger and more rational than human guessing. But it also means that machines can do greater harm by applying that learning in cases where a human would decide there was some novel and crucial distinction.
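
A small, hypothetical sketch of what this looks like in practice: a model trained on one region of feature space will still return a supremely confident answer for an input unlike anything it was trained on, with no built-in signal that the case is novel. (The data and model below are invented purely for illustration.)

```python
# Illustrative example: confident predictions on a genuinely novel input.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Training data clustered around the origin.
X_train = rng.normal(size=(1000, 2))
y_train = (X_train[:, 0] + X_train[:, 1] > 0).astype(int)

model = LogisticRegression().fit(X_train, y_train)

# A case far outside anything seen in training.
x_new = np.array([[40.0, 40.0]])
print("predicted class probabilities:", model.predict_proba(x_new)[0])
# Essentially [0, 1]: maximum confidence, precisely where a human reviewer
# might stop and ask whether the old rules apply at all.
```

There are techniques for flagging out-of-distribution inputs, but they have to be added deliberately; confident extrapolation is the default behaviour.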

Machines appear impartial

Humans expect more of the machine decision-maker: they assume that it is impartial, all-knowing, and not subject to petty human discrimination. In the same way that people have held qualified experts to a higher standard than more casual opinion because we rely on them more (physicians can be sued for malpractice over a negligent misdiagnosis, but you wouldn’t sue your friend who offered an opinion on Facebook), it makes sense that we will be less tolerant of errors from machine models. There’s evidence that people are reluctant to go against the recommendation of an algorithm, even if it is meant to be only one aspect of a decision or when it disagrees with other expertise (as in the example mentioned in ProPublica’s COMPAS investigation, where a judge used a defendant’s score even over the prosecutor’s recommendation). For more on this whole reliance on machines (and sometimes a lack of trust) I like this thesis by Jennifer Logg. All of this obscures the fact that the human biases come through without accountability: in Maciej Cegłowski’s memorable phrase, machine learning acts as “money laundering for bias”.

Machines are black boxes

Machines learn on historical data (containing biases from human decisions that are still classed as “correct”), and some models can be complex and hard to interpret. That means that if there is a decision with negative impact that you want to challenge, it can be very hard to know what went wrong, how systemic an issue it is, and whether there’s inappropriate bias. For human decisions we have, as a survival skill, pretty good “bullshit detectors” and methods for challenging them. These methods don’t always work, as seen in the long, long history of systemic injustice around issues of race, gender, ability and so on, but they do exist. This is not the case for machine models. If it’s harder to challenge and overturn a machine decision than that of a human, then there has to be even more consideration of its limitations in the first place.
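
If, and it is a big if, the model’s decisions and the relevant group labels can be obtained at all, even a crude external check becomes possible. The sketch below is hypothetical, with made-up numbers and an invented helper function: a disparate-impact style comparison of selection rates between two groups. It doesn’t open the black box, but it at least gives a challenger a number to point at.

```python
# Hypothetical disparate-impact style check on a model's decisions.
import numpy as np

def selection_rate_ratio(decisions: np.ndarray, group: np.ndarray) -> float:
    """Ratio of positive-decision rates: group 1 relative to group 0."""
    return decisions[group == 1].mean() / decisions[group == 0].mean()

# Made-up decisions for 1,000 people in two groups, with group 1 selected
# less often (15% vs 25%).
rng = np.random.default_rng(2)
group = rng.integers(0, 2, size=1000)
decisions = rng.random(1000) < np.where(group == 1, 0.15, 0.25)

print("selection rate ratio (group 1 / group 0):",
      selection_rate_ratio(decisions, group))
# A ratio well below 1 flags a disparity worth investigating -- but only if the
# decisions and group labels are available to whoever is doing the challenging.
```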

The baseline for all this is of course humans (and their systems). And humans are not particularly good at making fair, impartial, just decisions. There’s a long history of bias on unacceptable grounds like race, there are society-wide systemic injustices, there’s inconsistency because of psychological tendencies like the availability heuristic (we don’t consider all the evidence, just whatever we saw most recently), and there’s the ability to post-hoc rationalise so that the inappropriate reasons are hidden, sometimes even from ourselves.

One of our DataKind UK volunteer ethical principles that tries to address this is to look not just at the potential negative impacts of any data science modelling but also at the status quo: we do think there’s a lot of potential for machine learning and other data techniques to improve project outcomes, even with a higher barrier to acceptance.

When it comes to making decisions or predictions that have ethical dimensions (a vague criterion, and deciding what counts is itself something data scientists should be thinking about, but “potential negative impacts on real humans” is a good start), machine learning systems are ultimately dependent on human-supplied data to learn. But they can learn any biases in the supplied history, magnify them, and apply them more broadly and irrevocably than humans ever could. This potential for greater harm means that machine decisions need the ethical equivalent of driver’s licences, vehicle inspections, and safety and environmental standards as they roll out to change society.