Fixing FakeApp

John Akers
9 min read · Jun 4, 2018


Written by John Akers, Brendan Jacobsen, and Nancy Tran

Dataset creation tab in FakeApp. After selecting a video file, a single button press separates the frames and isolates the face in each frame. © 2018 FakeApp

FakeApp is a free desktop application that uses machine learning to train a neural network that can swap the faces of two individuals in any image or video. The application requires minimal setup and runs on most mid-range modern hardware. While prior versions required users to do some pre- and post-processing themselves, such as separating the training video into individual frames and resizing them, the latest version of FakeApp handles all of that processing automatically: the user only specifies the file paths to the source videos and the resulting datasets, which makes the application even easier to use. The design of FakeApp is minimalist, with a window size similar to that of a smartphone. The application contains only three tabs, each featuring a few input fields and a corresponding start button.

What Makes it Evil?

FakeApp on its own is merely a tool for creating realistic manipulations of video footage, with legitimate use cases, but the danger arises from its simplicity and ease of use. The recent prominence of fake news has demonstrated that falsified information can have real impacts on society, and manipulated imagery can be even more damaging to an individual's reputation than a text-based article.

Kate McKinnon's face swapped with Hillary Clinton's in a Saturday Night Live sketch. Created by user "derpfakes" on YouTube using FakeApp.

The malevolence of modified video can be seen from several ethical perspectives, with deontology perhaps the most obvious. From a deontological stance, the dignity of a human being should always be respected; manipulating video footage to deceive individuals inherently violates that respect for fellow people. Libertarian values are also harmed by the misinformation that can be spread, which intrudes on an individual's freedom to access accurate information and make informed decisions.

While these perspectives highlight the negative impacts on the welfare and freedom of society overall, an even more important point to consider is how the individuals targeted by this technology are affected. Due to the nature of face-swapping technology, any individual targeted without consent has their freedom harmed by inaccurate representations of themselves, representations that may have even broader impacts on their lives. Some may argue that even individuals who do consent impact their own welfare by calling into question the legitimacy of any real footage of themselves.

Why FakeApp Specifically?

Video creation tab in FakeApp. After training a model, the target video can be selected and the application then changes the face in every frame. © 2018 FakeApp

A dedicated and skilled individual could always spend hundreds of hours manually doctoring images and videos for malicious reasons. However, FakeApp makes this process significantly quicker and lowers the barrier to entry to anyone with spare time. Herein lies why FakeApp could be considered evil: it drastically increases the likelihood that malicious footage will be created, through the level of accessibility it provides to the technology.

Admittedly, the current version of FakeApp does have one small feature to mitigate this danger in the form of a watermark that is automatically applied to every frame of the generated video. However, the watermark is relatively small and always placed in the bottom-right corner, meaning it can be removed with basic video editing skills, for example by adding padding to the target video and then cropping out the blank region containing the watermark after the new video has been generated. Thus, while this approach eliminates any accidental confusion over footage from people who innocently use the application, it presents only a small hurdle for someone with malicious intent.
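To illustrate how little effort this workaround takes, here is a minimal sketch of the pad-and-crop steps using OpenCV. The 100-pixel pad size, the frame-by-frame structure, and the helper names are our own illustrative assumptions, not anything provided by FakeApp.

```python
# Illustrative sketch of the pad-then-crop workaround described above, using
# OpenCV. Pad size and helper names are hypothetical assumptions.
import cv2

PAD = 100  # extra pixels added to the bottom and right edges of each frame

def pad_frame(frame):
    # Pad the target frames *before* running FakeApp, so the corner watermark
    # ends up inside the blank padding rather than over real content.
    return cv2.copyMakeBorder(frame, 0, PAD, 0, PAD,
                              cv2.BORDER_CONSTANT, value=(0, 0, 0))

def crop_frame(frame):
    # After FakeApp generates the padded video, crop the padding (and the
    # watermark sitting in it) back out of every frame.
    return frame[:-PAD, :-PAD]
```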

This largely laissez-faire approach to the ethical issues of auto-generated content can be seen even more clearly on the developers' website from which the application is downloaded. While other sites in the same space, such as Lyrebird, a voice-synthesizing text-to-speech service, devote entire pages to the ethics of their software and provide a free service to verify the legitimacy of recordings, the FakeApp website includes no such discussion of the ethical implications, nor even so much as a warning about illegal use of the software.

An argument could be made that the largely unrestricted access to FakeApp supports the freedom of individuals to make their own ethical decisions, but ultimately the potential dangers it poses are far too severe to leave the application as it is now. Indeed, a synopsis of the current contract between the designers and users of FakeApp would be that, beyond forcing a small watermark onto the generated footage, there simply are no restrictions or guidelines for users to follow. It is this underlying potential for manipulation with FakeApp that we intend to remove.

User Research

After realizing the potential for misuse, we conducted a short user research survey to analyze how well individuals can identify falsified videos. With this survey, we were able to better understand how to redesign the app to prevent users from manipulating others with modified videos. At the beginning of the research process, we sought to design a test of individuals' ability to distinguish clips created with the app from videos without any manipulation. We found examples of videos created using FakeApp from different sources and found counterpart examples of the original clips before manipulation. The videos were cropped to remove the watermark created by FakeApp and edited to be fairly similar, focusing the viewer simply on the face of the subject. All videos were muted to avoid biasing the results of our test with judgments of the authenticity of the audio.

User Testing Video. Answer Key: 1. Fake, 2. Fake, 3. Fake, 4. Real, 5. Real, 6. Real, 7. Fake, 8. Real, 9. Real, 10. Fake, 11. Fake, 12. Fake, 13. Fake, 14. Real, 15. Real

Of these 15 clips, 8 were fake and 7 were real. The clips ranged from a Saturday Night Live actress's face replaced with Hillary Clinton's to real footage of Nicolas Cage in a late-night interview, and 6 users were tasked with determining which were real and which were fake. Participants watched the video in full, pausing after each clip to record their decision and any other thoughts about the footage.

Testing results of 6 users viewing our test video.

From this test, we found that users were, on average, only able to identify content correctly as fake or real 66% of the time, and were able to accurately identify fake clips only about 40% of the time. This shows our sample of viewers struggled to distinguish doctored content from real content, even with the prior knowledge that they were looking for modified footage. In discussion with our participants, most of the correct identifications came down to whether the users were familiar with the actors involved; they struggled with people they found less recognizable.
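For transparency, here is a minimal sketch of how one could score responses against the answer key listed above. The example responses in it are hypothetical placeholders; only the averaged figures above come from our actual participants.

```python
# Scoring sketch for the survey, using the answer key listed above.
# Per-user responses shown here are hypothetical placeholders.
ANSWER_KEY = ["fake", "fake", "fake", "real", "real", "real", "fake",
              "real", "real", "fake", "fake", "fake", "fake", "real", "real"]

def score(responses):
    """Return (overall accuracy, accuracy on the fake clips only) for one user."""
    overall = sum(r == k for r, k in zip(responses, ANSWER_KEY)) / len(ANSWER_KEY)
    fakes = [i for i, k in enumerate(ANSWER_KEY) if k == "fake"]
    fake_acc = sum(responses[i] == "fake" for i in fakes) / len(fakes)
    return overall, fake_acc

# Example: a participant who answers "real" for every clip scores 7/15 overall
# and 0/8 on the fake clips.
print(score(["real"] * 15))  # (0.466..., 0.0)
```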

Our Redesign

There is significant potential for FakeApp to be misused to create media that could be detrimental if shared. We determined from our user testing that viewers regularly failed to distinguish fake videos from real ones, presenting a clear opportunity for individuals to create content that manipulates the opinions of other viewers. With FakeApp, any user with a computer and the ability to follow a short tutorial can create fake news stories or generate doctored video evidence of individuals saying things they never said. This potential for harm led us to develop solutions to limit the negative use cases of the application.

Currently, the only safeguard for identifying fake content is a small static logo in the bottom right of the frame marking the product as user-generated. While this design choice limits some abuse, we believed the identifying mark was too simple to keep generated content from spreading unnoticed. Basic video editing tools let users either crop the mark or overlay content on top of it while still showing the face of the subject, so viewers can see the final footage without realizing the content is fake. We wanted to address this by making a mark that is far harder to remove.

Our approach addresses the misuse of the application by placing a watermark over the manipulated content itself. In redesigning the application, we left most of the primary navigation the same and instead focused our efforts on redesigning what the final video looks like in order to limit its use for harm.

Example video using redesigned watermark format.

The watermark has multiple checks in place to make removal through video effects software excessively difficult. Text stating "Created With FakeApp" floats over the subject's face during the video, partially obscuring the content and directing the viewer's attention to the manipulated area of the video. The mark also moves in a random pattern, making it difficult to remove frame by frame, and its randomized level of transparency makes the removal process difficult to automate. These steps make it clear to any viewer that the footage came from the application and ensure every generated video carries that information, limiting opportunities for misuse.
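As a rough illustration, a per-frame overlay along these lines could produce the behavior described above. The sketch below uses OpenCV; the face bounding box input, the jitter ranges, and the opacity bounds are our own assumptions, not a finished implementation of the redesign.

```python
# Minimal sketch of the proposed watermark, assuming a face bounding box per
# frame is available from the existing face-detection step. All ranges and
# helper names are illustrative assumptions.
import random
import cv2

TEXT = "Created With FakeApp"

def watermark_frame(frame, face_box):
    """Draw the watermark text at a randomized offset over the face,
    with a randomized per-frame transparency."""
    x, y, w, h = face_box
    # Random jitter keeps the text drifting over the face region, so no
    # single crop or static mask removes it from every frame.
    jitter_x = random.randint(-w // 4, w // 4)
    jitter_y = random.randint(-h // 4, h // 4)
    org = (x + jitter_x, y + h // 2 + jitter_y)
    # Random opacity between 40% and 80% frustrates automated removal of a
    # fixed-intensity overlay.
    alpha = random.uniform(0.4, 0.8)
    overlay = frame.copy()
    cv2.putText(overlay, TEXT, org, cv2.FONT_HERSHEY_SIMPLEX,
                1.0, (255, 255, 255), 2, cv2.LINE_AA)
    return cv2.addWeighted(overlay, alpha, frame, 1 - alpha, 0)
```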

This solution provides an accessible warning with every video generated by FakeApp. We considered other solutions, but the watermark was the only realistic design that limited the most harm. One alternative we considered was attaching data to the finished clip so that the video player could alert the viewer that the content was doctored. Using metadata attached to the clip, video players could warn the viewer that the content was created with FakeApp. We quickly realized this solution would run into the challenge of getting video players to adopt the feature and display the warnings. Also, using screen capture software, users could re-record the video and produce clips stripped of the metadata and its warnings. Embedding the watermark in the clip itself was the best solution for limiting a user's options to remove this protection.
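For completeness, this is roughly what the rejected metadata alternative would have looked like: tagging the output container so that a cooperating player could surface a warning. Only ffmpeg's standard -metadata and stream-copy options are assumed; the tag name and file paths are hypothetical.

```python
# Sketch of the rejected metadata alternative: tag the output file so that a
# cooperating video player could warn the viewer. Tag name and file paths are
# hypothetical illustrations.
import subprocess

def tag_as_generated(src, dst):
    subprocess.run([
        "ffmpeg", "-i", src,
        "-metadata", "comment=generated-with-fakeapp",
        "-codec", "copy",  # copy audio/video streams; only container metadata changes
        dst,
    ], check=True)

tag_as_generated("fakeapp_output.mp4", "tagged_output.mp4")
```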

While fully restricting a technology is nearly impossible, designers can make choices that limit the chance for abuse. We took an approach that limits the harm of the products created using FakeApp, and we believe users should be informed whether or not the content they watch is manipulated. The watermark solution accomplishes our goal of alerting viewers when the content they are watching was created with this tool, and we believe future tools should adopt similar principles of informed consent for the viewer.

New Design Contract

Altering the current design contract between FakeApp and its users requires first acknowledging that the application's largely unrestricted, libertarian approach lets users employ it for malicious purposes. Recognizing the consequences of that approach leads to our proposal of a new design contract with the user.

Example of watermark over doctored footage of Hillary Clinton. Original clip by user ‘derpfakes’.

It is important to remember that FakeApp is not evil on its own; rather, it becomes evil when it is used for malicious purposes. Users can create clips for harmless fun and laughs, such as memes, or even produce high-quality video edits that rival a million-dollar movie budget. However, failing to consider the consequences for various kinds of users can have a long-lasting, negative impact on other people's lives, including what they believe to be true. As much as anyone can argue that it is up to the individual to be as informed as possible, there need to be controls in place that reduce misinformation and, most importantly, do not put people's welfare and security at risk.

Our new design contract therefore asks the user to accept a terms of service agreement stating that, by using this product, they agree not to use it for malicious purposes. The agreement also makes them aware that a prominent moving watermark will be placed over the subject's face in any video clip the application outputs, and explains that its purpose is to help mitigate the consequences of doctored footage.

What Can Be Learned?

Even with a prominent, opaque watermark moving across the screen, we recognize that the mark can still technically be removed by a user patient enough to do it themselves. But by targeting the average user, who knows little to nothing about video editing, we have at least managed to help mitigate the consequences. It is important to recognize that with FakeApp, there is only so much we can do.

The lesson we discovered, no matter how cynical and dark it may seem, is that in this day and age, people should stop immediately trusting everything they see. This applies not just to doctored videos created using FakeApp, but to any source that spreads misinformation, whether or not it contains political content that can sway voters' opinions.

In today's society, we approach photos of models with the notion that they are all Photoshopped unless stated otherwise. If we approach doctored videos the same way, then we as a society can move one step closer to making informed decisions, despite the growing amount of misinformation that exists in fake news articles, misleading political content, and doctored videos paired with doctored audio.
