Awful Artificial Intelligence

A really scary decade full of awful A.I. applications

Artificial Intelligence: The Bad Parts

If you look under the hood, A.I. technology is not as shiny as it seems. Artificial intelligence in its current state is unfair, susceptible to attacks, and notoriously difficult to control. When developing novel intelligent applications, we need to be mindful of two kinds of limitations:

  • Technical limitations. Machine learning models are only as good as the data they are trained on, and they can easily overfit to unwanted features in that data. Even worse, data is not as objective as one might assume: it is usually collected and labeled by individuals or organizations, so it often reflects their human biases.
  • Ethical limitations. Even if society manages to develop flawless facial recognition software or perfect news-writing bots, we need to understand the ethical consequences of these developments and how the technology can be misused, for example in surveillance or fake news.
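The overfitting problem can be sketched with a toy experiment. In this hypothetical illustration (all data and the simple nearest-centroid classifier below are made up for the sake of the example), a spurious feature, say an image background, perfectly tracks the label in the training set but not in the wild, and the model latches onto it:

```python
import random

random.seed(0)

# Hypothetical data: the label depends only weakly on a "real" feature,
# while a spurious feature -- think of an image background -- tracks the
# label perfectly in the training set but is random in deployment.
def make_example(label, spurious_tracks_label):
    real = label + random.gauss(0, 1.5)                 # weak true signal
    spurious = 3 * label if spurious_tracks_label else random.choice([0, 3])
    return (real, spurious), label

train = [make_example(y, True) for y in [0, 1] * 50]
test = [make_example(y, False) for y in [0, 1] * 50]

# Minimal nearest-centroid classifier: one feature-mean vector per class.
def fit(data):
    centroids = {}
    for y in (0, 1):
        xs = [x for x, label in data if label == y]
        centroids[y] = tuple(sum(f) / len(xs) for f in zip(*xs))
    return centroids

def predict(centroids, x):
    dist = lambda a, b: sum((p - q) ** 2 for p, q in zip(a, b))
    return min(centroids, key=lambda y: dist(centroids[y], x))

def accuracy(centroids, data):
    return sum(predict(centroids, x) == y for x, y in data) / len(data)

centroids = fit(train)
print(f"train accuracy: {accuracy(centroids, train):.2f}")  # near-perfect
print(f"test accuracy:  {accuracy(centroids, test):.2f}")   # collapses toward chance
```

The model looks excellent on its own training data and fails once the spurious correlation breaks, which is exactly what happens when biased training data meets the real world.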

Dangerous Impact Areas of Artificial Intelligence

Unfortunately, this decade has seen the rise of numerous awful intelligent applications that often ignore the limitations above. Here is a list of four dangerous impact areas that artificial intelligence has scaled up.

Impact on Discrimination

Existing artificial intelligence applications used by large tech companies and governments have been shown to discriminate at scale.

  • AI-based Gaydar — Artificial intelligence can accurately guess whether people are gay or straight based on photos of their faces, according to new research that suggests machines can have significantly better “gaydar” than humans.
  • Infer Genetic Disease From Your Face — DeepGestalt can accurately identify some rare genetic disorders from a photograph of a patient’s face. This could lead to payers and employers analyzing facial images and discriminating against individuals who have pre-existing conditions or are at risk of developing medical complications.
  • Persecuting ethnic minorities — Chinese start-ups have built algorithms that allow the government of the People’s Republic of China to automatically track Uyghur people.

Impact on Influencing, Disinformation, and Fakes

Artificial intelligence can now create synthetic content that is indistinguishable from real content, opening up new ways of influencing people at scale. This technology has proven to be a serious threat to democracy.

  • Cambridge Analytica — Cambridge Analytica used Facebook data to change audience behavior for political causes, possibly influencing the outcomes of the Brexit referendum and the 2016 U.S. presidential election.
  • Deep Fakes — Deepfakes are an artificial-intelligence-based human image synthesis technique used to combine and superimpose existing images and videos onto source images or videos. Deepfakes can be used to create fake celebrity pornographic videos and revenge porn, or to scam businesses.
  • Fake News Bots — Automated accounts are being programmed to spread fake news. In recent years, fake news has been used to manipulate stock markets, push people toward dangerous health-care choices, and manipulate elections, including the 2016 U.S. presidential election.

Impact on Surveillance and Social Credit System

Autocratic governments have been pushing the limits of intelligent content classification and recognition software for surveillance. Many breakthroughs in A.I. now allow surveillance to be scaled in real time to every digital and physical footprint. And surveillance is not limited to facial recognition.

  • Gait Analysis — Your gait, the way you walk, is highly complex, largely unique, and hard, if not impossible, to mask in the era of CCTV. Your gait only needs to be recorded once and associated with your identity for you to be tracked in real time. In China, multiple people have already been convicted on the basis of their gait alone.
  • Generating Faces from Voices — Given an audio clip spoken by an unseen person, A.I. can now picture a face that shares as many identity-related characteristics with the speaker as possible. Intelligent surveillance systems can thus generate faces that match several biometric characteristics of the speaker.
  • Digital recognition and identification also allow artificial intelligence applications to impose real-world consequences on those who do not follow the rules. Social credit systems automatically assign points to each citizen and grant or withhold certain rights depending on the score. Such a system is currently being piloted as an incentive-and-punishment scheme in China.

Impact on Autonomous Weapon Systems

Lethal autonomous weapon systems (LAWS) locate, select, and engage targets without human intervention. They include, for example, armed quadcopters that can search for and eliminate enemy combatants in a city using facial recognition.

  • Automated machine gun — The Kalashnikov Group presented an automated weapon control station that uses AI to provide the operator with automatic recognition, target illumination, and automatic tracking of ground, air, and sea targets. Samsung developed and deployed the SGR-A1, a robotic sentry gun that uses voice recognition and tracking.
  • Armed UAVs — Ziyan UAV develops armed autonomous drones with light machine guns and explosives that can act in swarms.
  • Autonomous Tanks — The Uran-9 is an unmanned combat ground vehicle developed by Russia and tested in the Syrian Civil War.


“With great power comes great responsibility”

This quote from Spider-Man’s Uncle Ben has rarely been truer. To prevent awful A.I. applications from appearing in the wild, we all need to work together as a community.



David Dao

PhD student in AI and Data Systems for Sustainable Development 🌱🛰️🌍 | Founder | Past: Stanford, Berkeley, MIT