We must talk about deepfakes: the technology and its moral implications in the classroom

ReadyAI.org
Apr 21, 2021 · 4 min read

By Roozbeh Aliabadi

In January 2021, Donald Trump belatedly conceded defeat in the US presidential election. Many news reports asked a key question: had Mr. Trump's speech actually happened at all?

Today, the radical proliferation of deepfakes — online imagery that can make anybody appear to do or say anything within the boundaries of one's imagination, cruelty, or cunning — has started to weaken faith in our ability to perceive reality and truth.

What is a deepfake?

A deepfake is a kind of synthetic media in which a person in an image or video is replaced with another person's likeness. The term "deepfake" itself was coined in late 2017 by a Reddit user of the same name. Since then, it has broadened to cover synthetic media applications that predate that Reddit page as well as newer creations like StyleGAN.

Advancements in the technology have astounded observers and made anyone with a smartphone and access to an app like Avatarify capable of producing a passable deepfake.

In fact, the number of deepfake videos online surged from 14,678 in 2019 to 145,277 by June 2020. In March 2021, the FBI warned that "malicious actors" would likely soon use deepfakes in the US for foreign influence operations and criminal activity. Around the world, there are growing concerns that the technology will increasingly become a source of disinformation, division, deception, and theft. There have been plenty of high-profile examples of deepfakes in recent years. Myanmar's junta recently posted a video of someone incriminating the country's detained civilian leader; it was widely dismissed as a deepfake. Just last year, a video in which Belgium's prime minister appeared to link COVID-19 to climate change turned out to be a deepfake, and Indian politicians' use of the technology for campaigning caused alarm. And in Gabon, the belief that a video of the country's ailing president was a deepfake helped trigger a national crisis in 2019.

Deepfakes are new, but manipulating images to alter public perception for political reasons dates back at least to Joseph Stalin, who famously deleted purged comrades from official photos. Stalin didn't have deepfake technology or Photoshop, but that didn't keep him from clearing the traces of his adversaries from the history books. Even the famous photo of Soviet soldiers raising the flag after the Battle of Berlin was altered. Stalin showed that manipulating pictures isn't always about the size of one's ears or nose. It can be a way of literally deleting today's political enemies from tomorrow's picture of history — and making the future as unpredictable as a present filled with propaganda and lies.

Some argue that the threat of deepfakes themselves is overhyped, and that the real problem is that bad actors can now dismiss video evidence of wrongdoing by crying "deepfake," much as they might reject media reports they dislike as "fake news." But "tailored" deepfakes present a significant threat in our communities. The popularization of the term "deepfakes" had a sordid origin in 2018, and there are growing calls to regulate or ban the technology. Related legislation has been introduced in the US, and in 2019 China made it a criminal offense to publish a deepfake without disclosure. Platforms like Facebook are banning deepfakes that are not satire or ridicule, and Twitter is banning deepfakes likely to cause harm or abuse.

How can we spot a deepfake?

There isn't a list of steps that will make us completely immune to being fooled by a deepfake, but there are some things to look for that can help you judge whether or not what you're looking at is real.

We can pay attention to the following:

  • Face — Is someone blinking too much or too little? Do their eyebrows fit their face? Is someone's hair in the wrong spot? Does their skin look airbrushed or, conversely, are there too many wrinkles?
  • Audio — Does someone's voice match their appearance? For example, a heavyset man should not have a high-pitched, feminine voice.
  • Lighting — What sort of reflection, if any, are a person's glasses giving under a light? Deepfakes often fail to fully reproduce the natural physics of lighting.

Deepfakes are not only a threat to our political systems or to high-profile celebrities. In March 2021, a woman created deepfake videos to harass rivals on her daughter's cheerleading squad. She was later charged with cyber harassment and related offenses.

Deepfakes can be viewed as a threat, but they are also an opportunity for K-12 educators to start a conversation about the technology and its implications for our society. It is a chance to educate students about AI and about applications of generative machine learning techniques, such as Generative Adversarial Networks (GANs), that generate new images, music, text, and videos. GANs have become common on social media and a part of children's lives, and they carry considerable ethical implications, yet existing K-12 AI education curricula do not cover generative AI. How do GANs work? In short, a generator network produces candidate outputs while a discriminator network tries to tell them apart from real examples; trained against each other, the generator gradually learns to produce outputs the discriminator can no longer distinguish from the real thing.
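To make the adversarial idea concrete for a classroom, here is a minimal sketch of a GAN training loop: a toy one-dimensional GAN written with only NumPy, where the generator learns to mimic simple numeric data rather than faces or video. The function name, hyperparameters, and target distribution below are all illustrative choices for this sketch, not part of any particular curriculum, and real deepfakes use far larger convolutional networks.

```python
# Toy 1-D GAN: the generator learns to mimic data drawn from N(4, 1).
# Educational sketch only - both "networks" are tiny linear models, and
# gradients are worked out by hand instead of with a deep learning library.
import numpy as np

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

def train_toy_gan(steps=2000, batch=128, lr_d=0.05, lr_g=0.02, seed=0):
    rng = np.random.default_rng(seed)
    a, b = 1.0, 0.0   # generator: g(z) = a*z + b, tries to mimic real data
    w, c = 0.1, 0.0   # discriminator: D(x) = sigmoid(w*x + c), real vs. fake
    for _ in range(steps):
        # --- Discriminator step: push D(real) toward 1, D(fake) toward 0 ---
        x_real = rng.normal(4.0, 1.0, batch)          # "real" data
        x_fake = a * rng.normal(0.0, 1.0, batch) + b  # generated data
        d_real = sigmoid(w * x_real + c)
        d_fake = sigmoid(w * x_fake + c)
        grad_w = np.mean((d_real - 1.0) * x_real) + np.mean(d_fake * x_fake)
        grad_c = np.mean(d_real - 1.0) + np.mean(d_fake)
        w -= lr_d * grad_w
        c -= lr_d * grad_c
        # --- Generator step: push D(fake) toward 1, i.e. fool the critic ---
        z = rng.normal(0.0, 1.0, batch)
        d_fake = sigmoid(w * (a * z + b) + c)
        grad_a = np.mean((d_fake - 1.0) * w * z)  # d/da of -log D(fake)
        grad_b = np.mean((d_fake - 1.0) * w)      # d/db of -log D(fake)
        a -= lr_g * grad_a
        b -= lr_g * grad_b
    return a, b

a, b = train_toy_gan()
samples = a * np.random.default_rng(1).normal(size=1000) + b
print(f"mean of generated samples: {samples.mean():.2f}")  # should drift toward 4.0
```

Students can watch the generator's output mean start near 0 and migrate toward the real data's mean of 4 as the two models compete, which is the same tug-of-war, at a much larger scale, behind deepfake video.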

Check out the fantastic work by MIT on Creative AI: A middle school curriculum about Creative, Generative AI and Ethics.

Today's AI education provides an opportunity to have an inclusive conversation about deepfakes: the technology itself and its moral and ethical implications. We cannot afford to dismiss it. The conversation must start in our classrooms.

Learn more about Ready here.


ReadyAI.org

ReadyAI is the first comprehensive K-12 AI education company to create a complete program to teach AI and empower students to use AI to change the world.