Deepfakes for Good

Ethical Deepfakes

AI and ethics don’t exist in a social vacuum

Bruno Sch_
Bloom AI

--

There’s much alarm surrounding deepfakes and AI.

Deepfakes are a maturing field of artificial intelligence, and we should remember not to throw out nascent AI with the proverbial bathwater. Instead, we should approach deepfakes and their implications with diligence, curiosity, and understanding.

This short piece deconvolves a few kernels of insight from various AI experts. A few questions guide the inquiry: What’s all the hype with AI and deepfakes? What are their consequences? And how can we take responsibility for AI?

What’s all the hype?

When people read the word “deepfake,” they may associate the term with chaotic scenarios, where manipulated media can derail elections, disrupt news and journalism, and destroy individual reputations.

Are these scenarios plausible? Yes. Have they occurred? In fact, yes.

Is there more to deepfakes than the chaos they can cause? Of course.

Deepfakes are synthetic media. They’re the product of a system of complex algorithms trained on media to produce more media. More specifically, they’re often generated by generative adversarial networks (GANs), in which two competing neural networks, a generator and a discriminator, learn to create convincing media of people and scenes that exist nowhere outside of computer vision.
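
To make that architecture concrete, here is a minimal sketch of a GAN training loop in PyTorch. It is illustrative only: the layer sizes, the flattened 64×64 image format, and the `real_images` batch are assumptions, and production face generators are far larger convolutional models.

```python
# Minimal GAN sketch: two networks trained against each other.
# Assumes `real_images` arrives as flattened 64x64 RGB tensors.
import torch
import torch.nn as nn

LATENT_DIM = 100
IMG_PIXELS = 64 * 64 * 3

# Generator: turns random noise into a synthetic image.
generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 512), nn.ReLU(),
    nn.Linear(512, IMG_PIXELS), nn.Tanh(),
)

# Discriminator: scores how "real" an image looks (a logit).
discriminator = nn.Sequential(
    nn.Linear(IMG_PIXELS, 512), nn.LeakyReLU(0.2),
    nn.Linear(512, 1),
)

loss_fn = nn.BCEWithLogitsLoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def train_step(real_images: torch.Tensor) -> None:
    """One adversarial round between the two networks."""
    batch = real_images.size(0)
    real_labels = torch.ones(batch, 1)
    fake_labels = torch.zeros(batch, 1)

    # 1. Teach the discriminator to separate real from generated images.
    fake_images = generator(torch.randn(batch, LATENT_DIM)).detach()
    d_loss = (loss_fn(discriminator(real_images), real_labels)
              + loss_fn(discriminator(fake_images), fake_labels))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # 2. Teach the generator to fool the discriminator.
    g_loss = loss_fn(discriminator(generator(torch.randn(batch, LATENT_DIM))),
                     real_labels)
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
```

As training alternates, the generator gets better at faking and the discriminator gets better at catching fakes, until the generated media becomes convincing.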

Applications range from generating audio to restoring and recoloring historical media to creating a new human face every time a browser refreshes.

Simply put, deepfakes are just media. And yet, media can deeply affect human reality.

AI is complex. How it functions, the problems it solves, and its consequences for society are all complex. So understanding this complexity requires experts well practiced in its science and acutely aware of its social impact. Instead of sensationalizing deepfakes, we need informed and nuanced discussion of how to understand and ethically develop AI in all its complexity.

As Sven Charleer writes:

But don’t blame Deepfakes. Don’t demonise the technology. Thank Deepfakes!

Technology is value neutral. Ethics are not

Technology is neither good nor bad. It exists, and it affects human lives and culture based on how humans decide to use it. These are the basic claims that underpin the “value neutrality” theory of technology.

What raises questions of ethics, then, is what humans decide to do with various technologies. Technology like AI does not exist in a void, but within the social context that creates, develops, and deploys machine learning and neural networks in the interfaces that shape our lives. As a result, it’s not particularly controversial to argue that responsibility for technological consequences rests squarely in human hands.

So, no, deepfakes are not inherently unethical. Maybe it’s silly to ask whether deepfakes are ethical, or even whether they are good or bad (though that’s a reasonable question when assessing their quality). Of course, people can and do use deepfakes for nefarious and harmful ends. And yet, because humans develop and use AI to affect and intervene in human lives, deepfakes are part of a deeper and larger conversation about the ethics of scientific research and technological development, one that spans nuclear energy, biotechnology, AI, and frontiers not yet realized.

Synthetic media is no different. It must be researched and developed in tandem with safeguards, like the detection of AI by AI.
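
As a sketch of what “detection of AI by AI” can mean in practice, here is a small binary classifier in PyTorch that labels images as real or synthetic. The architecture, input size, and the `train_loader` of labeled examples are assumptions for illustration; real detectors are much deeper and trained on large curated corpora.

```python
# Toy real-vs-fake image detector: a small CNN binary classifier.
# Assumes 64x64 RGB inputs and a loader of (image, label) pairs,
# where label 1 = real and label 0 = synthetic.
import torch
import torch.nn as nn

detector = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 1),  # single logit: real vs. fake
)

loss_fn = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(detector.parameters(), lr=1e-3)

def train_epoch(train_loader) -> None:
    """One pass over the labeled dataset."""
    for images, labels in train_loader:
        logits = detector(images)
        loss = loss_fn(logits, labels.float().unsqueeze(1))
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```

Note that this is the same adversarial dynamic as inside a GAN, just deployed as a safeguard: as generators improve, detectors must be retrained to keep pace.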

Problems with deepfakes

AI-generated media raises several problems, among them consent, image rights, misinformation, and cybersecurity.

According to Gaurav Oberoi, the term “deepfakes” arose when a user by that name began to post synthetic media on Reddit back in December of 2017. The posts, however, contained explicit videos of celebrities, and even though Reddit began banning communities that shared such media in February of 2018, the term “deepfake” became tied to these kinds of posts.

As YouTuber Tom Scott explains in a video on the subject, synthetic media is not difficult to create with software like DeepFaceLab, and the internet abounds with images and videos people have uploaded of themselves. The ethical problem is one of consent: people whose media are used to create deepfakes lose control over their image rights and would most likely object to the result.

Misinformation is another problem. Humans are not good at detecting manipulated media! Consider that the human eye is worse than a coin flip at detecting deepfakes, especially on the screens of mobile devices, where lots of internet browsing happens.

In the context of an election or another high-stress political or economic moment, manipulated media can propagate across social networks, reinforcing misinformation detrimental to society.

And several cybersecurity firms have sounded the alarm over how synthetic media, visual or audio, can be used for social engineering and other mayhem in the business world.

Ultimately, it’s impossible to prevent deepfakes from being created and to some extent distributed. So what is to be done?

Responsible AI

Responsible AI means keeping in mind the above problems and wider ethical issues while developing technologies that generate media. It also means developing, in parallel, the methods that enable their detection.

For as long as deepfakes have existed, scientists, researchers, and developers have made real strides in detecting synthetic media as they spread across the internet.

Detecting deepfakes across websites (like social networks, image boards, and video-hosting sites) functions as part of the safeguard that tracks the reach of altered media, how they’re being generated, and what they’re being used for. This effort needs more than a few distributed algorithms to work: systemic problems require systematic thinking.

Back in 2018, researchers were able to identify deepfakes by observing how often subjects in videos blinked: real people blink quite frequently, but deepfakes tended not to.
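
A toy version of that blink heuristic might look like the sketch below, which assumes eye landmarks have already been extracted per frame by a library such as dlib or MediaPipe. The eye-aspect-ratio formula is the standard one from facial-landmark research; the threshold and the blink-rate figures are illustrative numbers.

```python
# Blink-rate heuristic for flagging suspect video, as a rough sketch.
# `eye` is an array of six (x, y) landmarks around one eye in a frame.
import numpy as np

def eye_aspect_ratio(eye: np.ndarray) -> float:
    """Ratio of the eye's vertical openings to its width; drops toward 0 mid-blink."""
    vertical_1 = np.linalg.norm(eye[1] - eye[5])
    vertical_2 = np.linalg.norm(eye[2] - eye[4])
    horizontal = np.linalg.norm(eye[0] - eye[3])
    return (vertical_1 + vertical_2) / (2.0 * horizontal)

def blinks_per_minute(ear_per_frame, fps: float, threshold: float = 0.2) -> float:
    """Count dips below `threshold` as blinks and normalize by clip length."""
    blinks, eye_closed = 0, False
    for ear in ear_per_frame:
        if ear < threshold and not eye_closed:
            blinks += 1
            eye_closed = True
        elif ear >= threshold:
            eye_closed = False
    minutes = len(ear_per_frame) / fps / 60.0
    return blinks / minutes if minutes else 0.0

# A real speaker blinks roughly 15-20 times per minute; a long clip with
# almost no blinks was the red flag those researchers looked for.
```

Cues like this tend to be short-lived, though: once the blink tell became public, newer deepfakes learned to blink, which is exactly why detection methods must keep evolving.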

Companies have made positive strides with respect to deepfake detection. In September of 2019, Facebook launched the Deepfake Detection Challenge to open up the detection effort to developers and corporate partners. A year later, Microsoft unveiled a tool for authenticating video media, as did two Estonian entrepreneurs at Sentinel, whose detection system could rival Microsoft’s.

It’s also not surprising that DARPA has funded projects to detect the proliferation of deepfakes across the net. Considering the potential security risks of deceptive media, DARPA’s program focuses on automating its arsenal of forensic tools to detect AI-altered media, but it’s clear to researchers that their methods may often be a few steps behind those of deepfake creators.

It’s important to reaffirm that deepfake creators bear the responsibility for what they create. FastAI, a popular online course for teaching deep learning, encourages its students to grapple with real ethical questions while they learn how to program and train AI.

And the internet abounds with courses and lectures on AI and ethics, like MIT’s online course on the ethics and governance of artificial intelligence. There’s enough content on the internet to guide sensible and informed discussions of AI, its impact on human lives, and how to address its development and consequences.

So there’s no need for alarm about deepfakes. What’s needed is more research on synthetic media and better systems for detecting them. And since the only real way to stop deepfakes would be to shut down the internet, what we ultimately need are informed and educated internet citizens who understand deepfakes, AI, ethics, and their impact on society.

We’re in private beta!

Check out our website to find out how to get early access to our cutting-edge, photorealistic, and fully customizable human image generator:

https://www.gobloom.ai/

And subscribe to our blog for updates on the latest in artificial intelligence, deep learning, and synthetic media!
