We Need To Stop Intelligent Machines Repeating The Mistakes Of Human History
The age of intelligent machines promises so much. Automation that frees us from the shackles of work, paired with prosperity driven by logic and algorithms, could deliver a utopia.
But the vision is being delayed by an insidious side effect of the automation age that we never anticipated: our machines are learning to be biased and prejudiced, just like humans.
There are already many documented instances of artificial intelligence behaving in questionable ways.
Take Amazon’s now-defunct AI recruitment tool that decreed male candidates were more desirable than female ones for tech jobs, and rated any instance of the word ‘woman’ or ‘women’s’ in a résumé as a negative factor. It had learned to spot and repeat patterns in ten years of historical hiring data, in which applications for software development jobs came mostly from men.
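To see how easily this happens, consider a minimal sketch of a résumé screener trained to imitate past decisions. The toy data and model below are illustrative assumptions, not Amazon’s actual system; the point is only that a classifier fitted to skewed outcomes encodes the skew as a rule.

```python
# A minimal, hypothetical sketch of how a résumé screener absorbs
# historical bias. The toy data and model are illustrative assumptions,
# not Amazon's actual system.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Toy training set: past hiring outcomes that skew against résumés
# mentioning women's organizations, mirroring a male-dominated applicant pool.
resumes = [
    "captain of chess club, python developer",
    "women's chess club captain, python developer",
    "java engineer, hackathon winner",
    "women's coding society lead, java engineer",
    "systems developer, open source contributor",
    "women's tech network member, systems developer",
]
hired = [1, 0, 1, 0, 1, 0]  # the historical decisions the model imitates

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)

# The learned weight for the token "women" comes out negative: the model
# encodes the historical pattern as a rule instead of correcting it.
idx = vectorizer.vocabulary_["women"]
print("coefficient for 'women':", model.coef_[0][idx])
```

Nothing in the pipeline is malicious; the model simply has no way to distinguish a pattern worth learning from a prejudice worth discarding.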
Similarly, a University of Virginia computer science professor found that image-recognition software he was building was amplifying unconscious human gender bias picked up from the photo datasets it was ‘learning’ from; the datasets themselves skewed their depictions of everyday activities, associating women with cooking and men with sports.
This disturbing trend continues. An investigation by ProPublica, an independent, non-profit investigative journalism newsroom, highlighted court risk-assessment software that wrongly flagged black defendants who did not go on to re-offend as likely to do so at nearly twice the rate of white defendants who did not re-offend (44.9% versus 23.5%). And predictive policing programs have led to machines recommending the over-policing of neighborhoods with large minority populations.
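The number behind that finding is a false positive rate computed separately for each group: among people who did not re-offend, what share did the software still label high risk? Here is a hedged sketch of that computation, using invented placeholder records rather than the actual COMPAS data.

```python
# Sketch of the group-wise false positive rate behind ProPublica's
# analysis: of the people who did NOT re-offend, how many were still
# labelled high risk? The records below are invented placeholders,
# not the actual COMPAS data.

def false_positive_rate(records, group):
    """Share of a group's non-reoffenders who were flagged high risk."""
    non_reoffenders = [r for r in records
                       if r["group"] == group and not r["reoffended"]]
    flagged = sum(r["predicted_high_risk"] for r in non_reoffenders)
    return flagged / len(non_reoffenders)

records = [
    {"group": "a", "reoffended": False, "predicted_high_risk": True},
    {"group": "a", "reoffended": False, "predicted_high_risk": True},
    {"group": "a", "reoffended": False, "predicted_high_risk": False},
    {"group": "b", "reoffended": False, "predicted_high_risk": True},
    {"group": "b", "reoffended": False, "predicted_high_risk": False},
    {"group": "b", "reoffended": False, "predicted_high_risk": False},
    {"group": "a", "reoffended": True,  "predicted_high_risk": True},
]

for group in ("a", "b"):
    print(group, round(false_positive_rate(records, group), 2))
```

A gap between the two groups on this measure means the software’s mistakes, not just its predictions, fall more heavily on one group than the other.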
The problem, of course, is not machines. It’s the fact they are learning from and processing already biased data that reflects the way society has historically marginalized people.
When humans feed data into machines, the intentions are usually good. Finding crime hotspots so police can focus their resources, helping financial institutions establish who can reliably repay a loan, and helping employers find the candidates whose skills best fit a job are all sensible ideas.
But machines do not think as we do. Their circuits run cold. They just produce results based on the data they are given, so automation reinforces human biases.
The humorist Evan Esar once said: “Computers can figure out all kinds of problems, except the things in this world that just don’t add up.”
Robots are not racist or sexist: the problem is reality. If we want to continue to use big data and machine learning in the workplace and beyond, we need to re-examine how we collect data in the first place. The problem with machine-learning systems is who they are learning from.
We even attach gender qualities to bodiless machines: virtual assistants like Alexa and Siri bear names normally given to women. And that plays into our culture. Why does male-embodied AI tend to be powerful and violent, like Ultron, HAL 9000 and The Terminator, while female-embodied AI tends to be servants or objects of affection like Samantha from Her, or the Stepford Wives?
How do we fix this?
Our machines will continue to be machines. But this is the moment in human history to re-examine our data and put ourselves on a path to a brighter future. That isn’t a naive idea: flexible workforces, shorter workweeks and AI cutting down our hours will fundamentally change the way humans live and work, and this change is already happening.
The AI Now Institute at New York University, which researches the social implications of artificial intelligence technologies, is pushing for the systemic and structural biases that shape these programs to be addressed before they are deployed, as outlined in a 2018 report. It cites examples of machine-based decision making affecting people’s access to healthcare, housing, and employment.
The AI Now Institute wants communities and groups that will be affected by decision-making systems to be consulted so their concerns are accounted for, and creators of such programs to be transparent and to waive trade secrecy and other legal claims that would prevent algorithmic accountability in the public sector.
Another group, a research team at the Alan Turing Institute in London and the University of Oxford, has called for a third-party AI watchdog that can scrutinize algorithms in cases where people feel they have been discriminated against by automated computer systems.
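As one example of what such a watchdog could actually check, below is a minimal sketch of a disparate impact audit modeled on the ‘four-fifths rule’ from US employment-selection guidance. The group labels and counts are hypothetical, not drawn from any real system.

```python
# A minimal sketch of one audit a third-party watchdog could run: the
# disparate impact ratio, checked against the "four-fifths rule" used in
# US employment-selection guidance. Group names and counts are hypothetical.

def disparate_impact(selection_rates):
    """Ratio of each group's selection rate to the highest group's rate.
    Ratios below 0.8 are conventionally flagged for closer review."""
    best = max(selection_rates.values())
    return {g: rate / best for g, rate in selection_rates.items()}

# Hypothetical outcomes of an automated screening system.
selection_rates = {
    "group_a": 45 / 100,  # 45 of 100 applicants approved
    "group_b": 27 / 100,  # 27 of 100 applicants approved
}

for group, ratio in disparate_impact(selection_rates).items():
    verdict = "flag for review" if ratio < 0.8 else "ok"
    print(f"{group}: ratio {ratio:.2f} -> {verdict}")
```

A check like this cannot prove discrimination on its own, but it gives regulators and affected people a concrete, repeatable test to demand of any automated system.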
Note that these are not solutions for the future; they are needed right now, or technology designed to move us forward risks moving us decades back.
This article was originally published on Forbes