AI Tools Can Detect Deepfakes, But for How Long?


UC Riverside researchers developed an AI algorithm that recognizes manipulated images and videos via inconspicuous artifacts left behind by editing tools. But more sophisticated deepfake technology is almost certainly on the horizon.

By Ben Dickson

Deepfakes (those realistic-looking, AI-doctored videos that portray events that never actually happened) have been a source of concern for several years, and as the technology advances, detection grows more difficult.

For the time being, edited images and videos leave digital fingerprints that can be detected with the right tools. Researchers at the University of California, Riverside, for example, developed an AI algorithm that can recognize manipulated images and videos by finding inconspicuous artifacts left behind by editing tools.

This deep-learning algorithm, developed by Amit Roy-Chowdhury, professor of electrical and computer engineering, and fellow UCR researchers, finds anomalies caused by inserting, removing, or manipulating objects in images. The idea, Roy-Chowdhury explains, is to localize image manipulations. “We train a neural network to identify manipulated regions in future images,” he says.

Developing a ‘Well-Trained Neural Net’

Neural networks are the fundamental components of deep-learning algorithms. Unlike classic software, in which developers give computers explicit, step-by-step instructions, neural networks develop their behavior by analyzing and comparing examples.

Neural networks are especially good at finding patterns and classifying messy, unstructured data like images and videos. When you provide a neural network with enough examples of a certain type of image (a process called “training”), it will be able to find similar features in images it hasn’t seen before.
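To make “training” concrete, here is a minimal PyTorch sketch of a tiny network learning to classify images from labeled examples. The architecture, the random stand-in data, and the two-class labeling (authentic vs. manipulated) are illustrative assumptions for this article, not a description of the UCR model.

```python
# A minimal sketch of training: the network gets no explicit rules, only
# labeled examples, and adjusts its weights to fit them. The toy model
# and random data below are assumptions for illustration only.
import torch
import torch.nn as nn

# Tiny convolutional classifier: image in, authentic-vs-manipulated score out.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(16, 2),  # two classes: 0 = authentic, 1 = manipulated
)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Placeholder batch standing in for a real labeled dataset.
images = torch.randn(8, 3, 64, 64)   # 8 RGB images, 64x64 pixels
labels = torch.randint(0, 2, (8,))   # ground-truth class per image

for step in range(100):              # training loop: predict, compare, correct
    logits = model(images)
    loss = loss_fn(logits, labels)   # how far off the predictions are
    optimizer.zero_grad()
    loss.backward()                  # nudge the weights toward the examples
    optimizer.step()
```

After enough passes over real labeled data, a model trained this way can assign the same labels to images it has never seen, which is the generalization the article describes.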

“When someone manipulates an image, they try to do it in a way that is not detectable to the human eye,” Roy-Chowdhury says. “But usually a portion of the pixel space is affected by these manipulations.”

UCR researchers trained their neural network on annotated images that were manipulated with different tools and let it discover the common pixel patterns visible on the boundaries of the affected objects. After training, the AI model can highlight areas in images containing manipulated objects.
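The localization idea maps naturally onto a fully convolutional network that outputs a per-pixel score rather than a single label for the whole image. The sketch below illustrates that pattern under stated assumptions: the class name, layer sizes, and 0.5 threshold are invented for the example and do not describe the actual UCR network.

```python
# A hedged sketch of manipulation localization: instead of one label per
# image, the network predicts a per-pixel mask marking likely-tampered
# regions. Architecture and names are illustrative assumptions.
import torch
import torch.nn as nn

class ManipulationLocalizer(nn.Module):
    """Toy fully convolutional net: image in, per-pixel tamper score out."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.head = nn.Conv2d(16, 1, kernel_size=1)  # 1-channel probability map

    def forward(self, x):
        return torch.sigmoid(self.head(self.features(x)))

model = ManipulationLocalizer()
image = torch.randn(1, 3, 256, 256)  # placeholder for a real photo
mask = model(image)                  # shape (1, 1, 256, 256) heat map
suspect = mask > 0.5                 # highlight likely-manipulated pixels
```

Training such a model would pair each manipulated image with its annotated ground-truth mask, as in the UCR dataset described above, and minimize a per-pixel loss such as binary cross-entropy.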

Catching a Deepfake

At its current stage, the deep-learning model works only on still images, but the same technique can be adapted to spot deepfakes and other video-manipulation techniques. A deepfake is essentially a video in which every frame has been altered to swap one face for another.

“The idea can be used for videos, too. In every frame there’s a region that has been manipulated, and a well-trained neural net can highlight the tampered area,” Roy-Chowdhury says.
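Extending a frame-level detector to video can then be as simple as scoring every decoded frame, in the spirit of the quote above. This is a hedged sketch reusing the toy ManipulationLocalizer from the previous example; the file name and the one-percent flagging threshold are arbitrary assumptions.

```python
# Sketch: run a frame-level tamper detector over every frame of a video
# and flag frames with a large suspect region. Assumes the toy
# ManipulationLocalizer class from the sketch above is in scope.
import cv2
import torch

model = ManipulationLocalizer()
model.eval()

cap = cv2.VideoCapture("suspect_clip.mp4")  # hypothetical input file
frame_idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # OpenCV decodes BGR uint8 HxWx3; convert to a float CHW tensor in [0, 1].
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    tensor = torch.from_numpy(rgb).permute(2, 0, 1).float().unsqueeze(0) / 255.0
    with torch.no_grad():
        mask = model(tensor)                 # per-pixel tamper scores
    if (mask > 0.5).float().mean() > 0.01:   # >1% of pixels look tampered
        print(f"frame {frame_idx}: possible manipulation")
    frame_idx += 1
cap.release()
```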

The UCR neural network is one of several efforts aimed at catching deepfakes and other image- and video-tampering techniques. Earlier this year, researchers at the University of Surrey developed a combination of blockchain and neural networks to register authentic videos and identify tampered versions. Another project, at the University at Albany, used two neural networks to detect synthesized videos by spotting unnatural phenomena such as unblinking eyes.

But as recognition methods improve, so does the technology for creating realistic forged images and videos. “This is a cat-and-mouse game,” Roy-Chowdhury says. “It’s a non-trivial problem. Whatever we do, people who create those manipulations come up with something else. I don’t know if there will ever be a time where we will be able to detect every kind of manipulation.”

Originally published at https://www.pcmag.com on August 30, 2019.
