Awful Artificial Intelligence

A really scary decade full of awful A.I. applications

David Dao
8 min read · Dec 30, 2019

Artificial intelligence is one of the great scaling technologies of this century. The promise of A.I. carried through media and society is tantalizing: a superhuman machine that never sleeps and can process enormous amounts of data with superior, objective insight. No wonder we are experiencing unprecedented hype around intelligent apps: “using A.I. to do X” applications are flourishing, and startups, big corporations, and governments alike increasingly promise to solve society’s biggest problems with data-driven insight derived from artificial intelligence. Unfortunately, as the decade ends, mankind has also scaled existing threats and biases with A.I., many of them due to our limited understanding of what artificial intelligence is actually capable of (and what it is not).

The goal of this primer is not to prevent anyone from starting their own world-changing A.I. startup, but rather to advocate for a mindful and ethical approach to it. In the following, we will discuss the limits of A.I. and list some truly awful A.I. applications that crossed ethical boundaries.

This article is based on the Awful AI project, a curated and living document on GitHub that tracks scary uses of AI, in the hope of raising awareness of unethical artificial intelligence.

I tweet new blog posts @dwddao on Twitter.

Artificial Intelligence: The Bad Parts

If you look under the hood, A.I. technology is not as shiny as it seems. Artificial intelligence in its current state is unfair, easily susceptible to attacks, and notoriously difficult to control. When developing novel intelligent applications, we need to be mindful of two things:

  • Technical limitations. Machine learning models are only as good as the training data they are given. Models can easily overfit to unwanted features in the data. Even worse, data is not as objective as one might assume: it is typically collected and labeled by individuals and organizations, and thus often reflects their human biases (the toy sketch after this list illustrates the point).
  • Ethical limitations. Even if society manages to develop amazing facial recognition software or perfect news-writing bots, we need to understand the ethical consequences of these developments and how the technology can be misused, for example for surveillance or fake news.
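To make the technical point concrete, here is a minimal, hypothetical sketch (all data, group labels, and numbers are synthetic, invented purely for illustration): a standard classifier trained on historically biased labels faithfully reproduces that bias, even though both groups are identically qualified.

```python
# A minimal sketch, not any production system: synthetic "hiring" data in
# which the historical labels, not the candidates, carry the bias.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

group = rng.integers(0, 2, n)   # a sensitive attribute (0 or 1)
skill = rng.normal(0, 1, n)     # identically distributed across groups

# Biased historical labels: group 1 needed a higher bar to be hired.
hired = (skill - 0.8 * group + rng.normal(0, 0.5, n)) > 0

# Train on the biased labels.
X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, hired)

# The model reproduces the bias: for the same skill, candidates from
# group 1 receive a noticeably lower hiring score.
probe = np.array([[0.5, 0], [0.5, 1]])   # same skill, different group
print(model.predict_proba(probe)[:, 1])
```

Nothing in this pipeline is malicious; the bias enters entirely through the labels, which is exactly why “the data speaks for itself” is a dangerous assumption.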

Dangerous Impact Areas of Artificial Intelligence

Unfortunately, this decade saw the rise of numerous awful intelligent applications that often do not consider any of the aforementioned limitations. Here is a list of four dangerous impact areas that have been scaled by the rise of artificial intelligence.

Impact on Discrimination

Existing artificial intelligence applications that are used by large tech companies and governments have been shown to suffer from scalable discrimination.

Racist Bots & Image Recognition — Google’s image recognition program labeled the faces of several black people as gorillas. Amazon’s Rekognition labeled darker-skinned women as men 31 percent of the time, while lighter-skinned women were misidentified only 7 percent of the time. Rekognition is currently deployed at the Washington County Sheriff’s Office in Oregon, shortening the time it takes to identify suspects from hundreds of thousands of photo records. In 2016, a Microsoft chatbot called Tay spent a day learning from Twitter and began spouting antisemitic messages.

Sexist Recruiting — AI-based recruiting tools, such as HireVue or an internal Amazon system, scan features such as job applicants’ video and voice data and their CVs in order to judge whether they are worth hiring. In Amazon’s case, the algorithm quickly taught itself to prefer male candidates over female ones, penalizing CVs that included the word “women’s,” as in “women’s chess club captain.” The toy reconstruction below shows how easily this failure mode arises.
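The following sketch uses invented CV snippets and outcomes (Amazon’s actual system was never released) to show how a text model learns to penalize a token that merely correlates with gender:

```python
# A toy sketch: biased historical hiring decisions teach a text model to
# penalize the token "women", which says nothing about competence.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

cvs = [
    "chess club captain, python developer",          # hired
    "python developer, robotics team lead",          # hired
    "women's chess club captain, python developer",  # rejected
    "women's robotics team lead, python developer",  # rejected
]
hired = [1, 1, 0, 0]

vec = CountVectorizer()
X = vec.fit_transform(cvs)
model = LogisticRegression().fit(X, hired)

# Inspect the learned weight for the token "women": clearly negative.
weights = dict(zip(vec.get_feature_names_out(), model.coef_[0]))
print(weights["women"])
```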

Racist Police Software — COMPAS is a risk assessment algorithm used in courts by the state of Wisconsin to predict the risk of recidivism. Its manufacturer refuses to disclose the proprietary algorithm, and only the final risk assessment score is known; an analysis by ProPublica found the algorithm to be biased against black defendants. Even worse, PredPol, a program for police departments that predicts hotspots where future crime might occur, can get stuck in a feedback loop of over-policing majority black and brown neighborhoods, causing the police to discriminate against minorities in certain districts. The simulation below sketches this dynamic.
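The feedback loop is easy to reproduce in a few lines. This toy simulation (with invented crime rates; PredPol’s actual model is proprietary) assumes only that recorded crime grows slightly superlinearly with patrol presence, since more officers on the ground file more reports:

```python
# A toy simulation of the over-policing feedback loop, a sketch of the
# dynamic rather than of any real predictive-policing product.
import numpy as np

true_crime = np.array([50.0, 50.0])  # weekly incidents: identical districts
patrols = np.array([0.55, 0.45])     # marginally uneven starting allocation

for week in range(30):
    # Recorded crime depends on where police look, not just on crime.
    recorded = true_crime * patrols ** 1.2
    # "Predictive" allocation: patrol where crime was recorded last week.
    patrols = recorded / recorded.sum()

print(patrols.round(3))  # -> [1. 0.]: district 0 becomes the permanent
                         # "hotspot" although both districts are identical
```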

Scary New Discrimination — Advancements in recognition and prediction software have also led to scary new possibilities of discrimination.

  • AI-based Gaydar — Artificial intelligence can accurately guess whether people are gay or straight based on photos of their faces, according to new research that suggests machines can have significantly better “gaydar” than humans.
  • Infer Genetic Disease From Your Face — DeepGestalt can accurately identify some rare genetic disorders from a photograph of a patient’s face. This could lead to payers and employers analyzing facial images and discriminating against individuals who have pre-existing conditions or who may develop medical complications.
  • Persecuting ethnic minorities — Chinese start-ups have built algorithms that allow the government of the People’s Republic of China to automatically track Uyghur people.

Impact on Influencing, Disinformation, and Fakes

Artificial intelligence can now create synthetic content that is indistinguishable from real content, opening up new ways of influencing at scale. This technology has proven to be a serious threat to democracy.

  • Cambridge Analytica — Cambridge Analytica used Facebook data to change audience behavior for political causes, possibly influencing the outcomes of the Brexit referendum and the 2016 U.S. presidential election.
  • Deep Fakes — Deepfakes are an artificial-intelligence-based human image synthesis technique used to combine and superimpose existing images and videos onto source images or videos. Deepfakes have been used to create fake celebrity pornographic videos and revenge porn, and to scam businesses (see the architecture sketch after this list).
  • Fake News Bots — Automated accounts are being programmed to spread fake news. In recent years, fake news has been used to manipulate stock markets, push people toward dangerous health-care choices, and sway elections, including the 2016 U.S. presidential election.
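For readers wondering how deepfakes work under the hood, here is a minimal sketch of the classic faceswap idea: a shared encoder with one decoder per identity. All layer sizes and shapes are illustrative assumptions; real systems add face alignment, adversarial losses, and far more capacity.

```python
# A minimal sketch of the shared-encoder / per-identity-decoder idea
# behind classic deepfakes. Sizes are arbitrary illustrative choices.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.Flatten(), nn.Linear(64 * 16 * 16, 256),
        )
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(256, 64 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )
    def forward(self, z):
        return self.net(self.fc(z).view(-1, 64, 16, 16))

encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()  # one decoder per identity

# Training (sketched): reconstruct each person's faces through the shared
# encoder and that person's own decoder.
faces_a = torch.rand(8, 3, 64, 64)  # stand-in for aligned face crops of A
loss = nn.functional.mse_loss(decoder_a(encoder(faces_a)), faces_a)

# The swap: encode person A's face, decode it as person B.
fake_b = decoder_b(encoder(faces_a))
```

Because the encoder is shared, it learns pose and expression common to both identities, while each decoder learns one person’s appearance; routing A’s encoding through B’s decoder transfers B’s face onto A’s expression.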

Impact on Surveillance and Social Credit Systems

Autocratic governments have been pushing the limits of intelligent content classification and recognition software for surveillance. Many breakthroughs in A.I. now allow surveillance to be scaled in real-time to every digital and physical footprint. Surveillance is not just limited to facial recognition.

  • Gait Analysis — Your gait, the way you walk, is highly complex, largely unique, and hard, if not impossible, to mask in the era of CCTV. Your gait only needs to be recorded once and associated with your identity for you to be tracked in real time; in China, multiple people have already been convicted based on their gait alone. The re-identification sketch after this list shows why a single recording suffices.
  • Generating Faces from Voices — Given an audio clip spoken by an unseen person, A.I. can now reconstruct a face that shares as many identity-related characteristics with the speaker as possible. Intelligent surveillance systems are then able to generate faces that match several biometric characteristics of the speaker.
  • Digital recognition and identification also allow artificial intelligence applications to impose real-world consequences on those who do not follow the rules. Social credit systems automatically assign points to each citizen and grant or withhold certain rights depending on their score. Such systems are currently being piloted as incentive and punishment schemes in China.
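The following sketch illustrates why a single recording suffices for tracking: once a gait “signature” (an embedding vector) is stored next to an identity, every new observation can be matched by nearest-neighbor search. The embedding model itself is assumed to exist; the vectors and names below are random placeholders.

```python
# A sketch of re-identification from stored gait embeddings. The
# embeddings here are random stand-ins for a real gait model's output.
import numpy as np

rng = np.random.default_rng(2)

# Enrollment: one gait embedding per known identity (hypothetical data).
gallery = {name: rng.normal(size=128) for name in ["alice", "bob", "carol"]}

def identify(observation: np.ndarray) -> str:
    """Return the enrolled identity whose embedding is most similar."""
    def cosine(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    return max(gallery, key=lambda name: cosine(gallery[name], observation))

# A new CCTV observation: a noisy version of Bob's enrolled gait.
observation = gallery["bob"] + rng.normal(scale=0.3, size=128)
print(identify(observation))  # -> "bob"
```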

Impact on Autonomous Weapon Systems

Lethal autonomous weapon systems (LAWS) locate, select, and engage targets without human intervention. They include, for example, armed quadcopters that can search for and eliminate enemy combatants in a city using facial recognition.

Known current autonomous weapons projects include:

  • Automated machine guns — The Kalashnikov Group presented an AI-driven automatic weapon control station that provides the operator with automatic recognition, target illumination, and automatic tracking of ground, air, and sea targets. Samsung developed and deployed the SGR-A1, a robot sentry gun that uses voice recognition and tracking.
  • Armed UAVs — Ziyan UAV develops armed autonomous drones with light machine guns and explosives that can act in swarms.
  • Autonomous Tanks — Uran-9 is an autonomous tank, developed by Russia, that was tested in the Syrian Civil War.

Conclusion

“With great power comes great responsibility”

This quote from Spider-Man’s Uncle Ben has rarely been truer. To prevent awful A.I. applications from appearing in the wild, we all need to work together as a community.

As engineers and technologists, we need to be mindful of the applications we develop. It is important to consider ethics and social guidelines in artificial intelligence research, as A.I. has the potential to change the world as we know it. Our main scientific venues, such as NeurIPS, therefore need an ethical review committee that decides on research directions and moratoriums (similar to what the biological sciences have established for CRISPR and embryo research). Furthermore, we need to encourage research that actively prevents the misuse of data. Data ownership projects such as Kara are exploring ways to prevent A.I. applications from being developed without the consent of data owners.

As policymakers, we need to keep up with the speed of technological advances and design regulations that foster the safe development of A.I.

As business leaders, we need to make sure not to fall for the hype of almighty intelligent robots. A.I. is not a silver bullet that can solve all of the world’s problems; it is a tool that can provide novel solutions at scale but can, at the same time, reinforce existing problems (such as discrimination).

I will keep tracking awful use cases of A.I. from 2020 onwards, in the hope of raising awareness of its dangers. If you want to learn more or have ideas on how to make the list better, please feel free to get in touch!
