Awful Artificial Intelligence
A really scary decade full of awful A.I. applications
Artificial intelligence is one of the defining technologies of this century. The promise of A.I. carried throughout media and society is tantalizing: a super-human machine that never sleeps and can process enormous amounts of data with superior, objective insight. No wonder we are experiencing an unprecedented hype around intelligent apps — “Using A.I. to do X” applications are flourishing, and startups, big corporations, and governments alike increasingly promise to solve society’s biggest problems with data-driven insight derived from artificial intelligence. Unfortunately, as the decade ends, mankind has also scaled existing threats and biases with A.I. — many of them due to our limited understanding of what artificial intelligence is actually capable of (and what it is not).
The goal of this primer is not to prevent anyone from starting their own world-changing A.I. startup, but rather to advocate for a mindful and ethical approach to the technology. In the following, we will discuss the limits of A.I. and list some truly awful A.I. applications that have crossed ethical boundaries.
This article is based on the Awful AI project, a curated and living document on GitHub that tracks scary usages of AI, in the hope of raising awareness of unethical artificial intelligence.
I tweet new blog posts @dwddao on Twitter.
Artificial Intelligence: The Bad Parts
If you look under the hood, A.I. technology is not as shiny as it seems. Artificial intelligence in its current state is unfair, easily susceptible to attacks, and notoriously difficult to control. When developing novel intelligent applications, we need to be mindful of two things:
- Technical limitations. Machine learning models are only as good as the training data they are given. Models can easily overfit to unwanted features in the data. Even worse, data is not as objective as one might assume: it is often collected and labeled by individuals or organizations, and thus it often reflects their human biases.
- Ethical limitations. Even if society is able to develop amazing facial recognition software or perfect news-writing bots, we need to understand the ethical consequences of these developments and how the technology can be misused, for example for surveillance or fake news.
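To make the first limitation concrete, here is a minimal sketch with an entirely hypothetical toy dataset: a "model" that simply learns historical hiring rates per group will faithfully reproduce whatever bias its human-made labels contain, without any malicious intent in the code itself.

```python
from collections import defaultdict

# Toy training set of (group, hired) pairs. The labels reflect past human
# decisions, not ground-truth ability -- group "B" was historically hired less.
training_data = [
    ("A", 1), ("A", 1), ("A", 1), ("A", 0),
    ("B", 1), ("B", 0), ("B", 0), ("B", 0),
]

def fit_hire_rates(data):
    """Learn P(hired | group) directly from the labeled examples."""
    totals, hires = defaultdict(int), defaultdict(int)
    for group, hired in data:
        totals[group] += 1
        hires[group] += hired
    return {g: hires[g] / totals[g] for g in totals}

rates = fit_hire_rates(training_data)
print(rates)  # {'A': 0.75, 'B': 0.25} -- the model inherits the historical bias
```

Nothing in the fitting code mentions the groups explicitly; the discrimination comes entirely from the labels, which is exactly why "the data is objective" is a dangerous assumption.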
Dangerous Impact Areas of Artificial Intelligence
Unfortunately, this decade saw the rise of numerous awful intelligent applications that often consider none of the aforementioned limitations. Here is a list of four dangerous impact areas that have been scaled by the rise of artificial intelligence.
Impact on Discrimination
Existing artificial intelligence applications used by large tech companies and governments have been shown to suffer from scalable discrimination.
Racist Bots & Image Recognition — Google’s image recognition program labeled the faces of several black people as gorillas. Amazon’s Rekognition labeled darker-skinned women as men 31 percent of the time, while lighter-skinned women were misidentified only 7 percent of the time. Rekognition is currently deployed at the Washington County Sheriff’s Office in Oregon, drastically reducing the time it takes to identify suspects from hundreds of thousands of photo records. In 2016, a Microsoft chatbot called Tay spent a day learning from Twitter and began spouting antisemitic messages.
Further reading: “Tay, Microsoft’s AI chatbot, gets a crash course in racism from Twitter”
Sexist Recruiting — AI-based recruiting tools, such as HireVue or an internal Amazon tool, scan various features of job applicants, such as video or voice data and their CVs, in order to tell whether they are worth hiring. In Amazon’s case, the algorithm quickly taught itself to prefer male candidates over female ones, penalizing CVs that included the word “women’s,” as in “women’s chess club captain.”
Further reading: “Got a Poker Face? Employers are Using AI to Analyze Candidates’ Facial Expressions and…”
Racist Police Software — COMPAS is a risk assessment algorithm used in courts by the state of Wisconsin to predict the risk of recidivism. Its manufacturer refuses to disclose the proprietary algorithm, so only the final risk assessment score is known; the algorithm has been shown to be biased against black defendants. Even worse, PredPol, a program for police departments that predicts hotspots where future crime might occur, can get stuck in a feedback loop of over-policing majority black and brown neighborhoods, causing the police to discriminate against minorities in certain districts.
Further reading: “Machine Bias” (ProPublica)
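The PredPol-style feedback loop is easy to see in a deliberately simplified simulation (all numbers invented for illustration): patrols are sent wherever the most crime was *recorded*, but crime is only recorded where patrols go, so a tiny initial skew in the records amplifies itself indefinitely even when both districts have identical true crime rates.

```python
def simulate(days):
    """Sketch of a runaway policing feedback loop over two districts.

    Both districts see one real crime per day; district 0 starts with one
    extra historical record (e.g. from past over-policing). The single
    patrol always goes to the district with the most recorded crime.
    """
    recorded = {0: 1, 1: 0}  # historical crime records per district
    for _ in range(days):
        target = max(recorded, key=recorded.get)  # patrol the "hot" district
        recorded[target] += 1                     # ...and record a crime there
    return recorded

print(simulate(100))  # {0: 101, 1: 0} -- district 1's crime stays invisible
```

The model never "learns" that district 1 has crime too, because its own deployment decisions determine what data it sees next; this self-reinforcing dynamic is what makes such systems so hard to audit from their outputs alone.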
Scary New Discrimination — Advances in recognition and prediction software have also opened up scary new possibilities for discrimination.
- AI-based Gaydar — Artificial intelligence can accurately guess whether people are gay or straight based on photos of their faces, according to new research that suggests machines can have significantly better “gaydar” than humans.
- Infer Genetic Disease From Your Face — DeepGestalt can accurately identify some rare genetic disorders from a photograph of a patient’s face. This could lead to insurers and employers analyzing facial images and discriminating against individuals who have pre-existing conditions or are at risk of developing medical complications.
- Persecuting ethnic minorities — Chinese start-ups have built algorithms that allow the government of the People’s Republic of China to automatically track Uyghur people.
Further reading: “New AI can work out whether you’re gay or straight from a photograph”
Further reading: “One Month, 500,000 Face Scans: How China Is Using A.I. to Profile a Minority”
Impact on Influencing, Disinformation, and Fakes
Artificial intelligence can now create synthetic content that is indistinguishable from real content, opening up new ways of influencing people at scale. This technology has proven to be a serious threat to democracy.
- Cambridge Analytica — Cambridge Analytica used Facebook data to change audience behavior for political causes, possibly influencing the results of the Brexit referendum and the 2016 U.S. presidential election.
- Deep Fakes — Deepfakes are an artificial-intelligence-based human image synthesis technique, used to combine and superimpose existing images and videos onto source images or videos. Deepfakes have been used to create fake celebrity pornographic videos and revenge porn, and to scam businesses.
- Fake News Bots — Automated accounts are being programmed to spread fake news. In recent times, fake news has been used to manipulate stock markets, make people choose dangerous health-care options, and manipulate elections, including the 2016 US presidential election.
Further reading: “How A.I. Could Be Weaponized to Spread Disinformation”
Further reading: “Inside the Pentagon’s race against deepfake videos”
Impact on Surveillance and Social Credit System
Autocratic governments have been pushing the limits of intelligent content classification and recognition software for surveillance. Many breakthroughs in A.I. now allow surveillance to be scaled in real time to every digital and physical footprint. And surveillance is not limited to facial recognition.
- Gait Analysis — Your gait, the way you walk, is highly complex, largely unique, and hard, if not impossible, to mask in the era of CCTV. Your gait only needs to be recorded once and associated with your identity for you to be tracked in real time. In China, multiple people have already been convicted based on their gait alone.
- Generating Faces from Voices — Given an audio clip spoken by an unseen person, A.I. can now reconstruct a face that shares as many identity-related characteristics with the speaker as possible. Intelligent surveillance systems are thus able to generate faces that match several biometric characteristics of a speaker from voice alone.
Further reading: “China’s New Frontiers in Dystopian Tech”
Further reading: “How WeChat censors private conversations, automatically in real time”
- Digital recognition and identification also allow artificial intelligence applications to impose real-world consequences on those who do not follow the rules. Social credit systems automatically assign points to each citizen and grant or withhold certain rights depending on their score. Such a system is currently being piloted as an incentive and punishment scheme in China.
Further reading: “China’s ‘social credit’ system bans millions from travelling”
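The mechanics of point-based gating are mundane, which is part of what makes them so scalable. Here is a purely hypothetical sketch: the behaviors, point values, and threshold below are all invented for illustration and do not describe any real deployed system.

```python
# Invented rules: automated point adjustments per observed behavior,
# with a right (here, buying a train ticket) gated on the running total.
POINT_RULES = {"jaywalking": -5, "late_loan_payment": -20, "volunteering": 10}
TRAVEL_THRESHOLD = 90

def credit_score(events, start=100):
    """Sum point adjustments for a citizen's observed behaviors."""
    return start + sum(POINT_RULES.get(e, 0) for e in events)

def may_buy_train_ticket(events):
    """A right is granted or withheld purely by score threshold."""
    return credit_score(events) >= TRAVEL_THRESHOLD

print(may_buy_train_ticket(["jaywalking"]))                       # True  (95)
print(may_buy_train_ticket(["late_loan_payment", "jaywalking"]))  # False (75)
```

The danger lies not in the arithmetic but in what feeds it: once ubiquitous recognition software supplies the `events`, a few lines of scoring logic are enough to automate punishment at a population scale.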
Impact on Autonomous Weapon Systems
Lethal autonomous weapon systems (LAWS) locate, select, and engage targets without human intervention. They include, for example, armed quadcopters that can search for and eliminate enemy combatants in a city using facial recognition.
Known current autonomous weapons projects include:
- Automated machine gun — The Kalashnikov Group presented an automatic weapon control station that uses AI to provide the operator with automatic recognition and target illumination, as well as automatic tracking of ground, air, and sea targets. Samsung developed and deployed the SGR-A1, a robotic sentry gun that uses voice recognition and tracking.
- Armed UAVs — Ziyan UAV develops armed autonomous drones with light machine guns and explosives that can act in swarms.
- Autonomous Tanks — Uran-9 is an autonomous tank, developed by Russia, that was tested in the Syrian Civil War.
Further reading: “Ex-Google worker fears ‘killer robots’ could cause mass atrocities”
“With great power comes great responsibility”
This quote from Spider-Man’s Uncle Ben has rarely been truer. To prevent awful A.I. applications from appearing in the wild, we all need to work together as a community.
As engineers and technologists, we need to be mindful of the applications we develop. It is important to consider ethics and social guidelines in artificial intelligence research, as A.I. has the potential to change the world as we know it. Our main scientific venues, such as NeurIPS, therefore need an ethical review committee that decides on research directions and moratoriums (similar to what the biological sciences community has established for CRISPR and embryo research). Furthermore, we need to encourage research that actively prevents the misuse of data. Data ownership research projects such as Kara are exploring ways to limit the development of A.I. applications without the consent of data owners.
As policymakers, it is important to keep up with the speed of technological advances and design regulations that can foster the safe development of A.I.
As business leaders, we need to make sure not to fall for the hype of almighty intelligent robots. A.I. is not a silver bullet that can solve all of the world’s problems; it is a tool that can scale and provide novel solutions, but at the same time it can reinforce existing problems (such as discrimination).
I’ll try to keep track of more awful use-cases of A.I. in 2020 onwards, hoping to raise awareness of its dangers. If you want to learn more or have ideas on how to make the list better, please feel free to get in touch!