Deepfakes: Finding Old Ideas in New Faces

Charge VC
Published Sep 25, 2020 · 5 min read

Oliver Taylor was many things. A university student, coffee lover, and political junkie. A news media freelancer with six or so editorials, including bylines in the Jerusalem Post and the Times of Israel. By most measures, Oliver was rather average. Maybe too average.

The trouble with Oliver started in late 2018.

London academic Mazen Masri had just drawn international attention for launching a lawsuit against the Israeli surveillance company NSO Group, on behalf of alleged Mexican victims of the company’s phone-hacking technology. In an article in the U.S. Jewish newspaper The Algemeiner, Taylor accused Masri and his wife, Palestinian rights campaigner Ryvka Barnard, of being “known terrorist sympathizers.”

Masri and Barnard were both shocked by Taylor’s allegation, which they flatly denied. They were also completely mystified as to why a seemingly random university student would single them out. Shown Taylor’s photo, Masri said something just “seemed off.” Further digging only uncovered more questions.

A computer-generated ghost

There is no record of Oliver Taylor at his university. He has no detectable online footprint prior to March 2018. The two newspapers that had published his work say they have been unable to confirm his identity. Six experts say Oliver Taylor’s image has the characteristics of a “deepfake” — a realistic computer-generated image of a person.

“The distortion and inconsistencies in the background are a tell-tale sign of a synthesized image, as are a few glitches around his neck and collar,” said digital image forensics pioneer Hany Farid, who teaches at the University of California, Berkeley. Artist Mario Klingemann, who regularly uses deepfakes in his work, said the photo “has all the hallmarks” and added, “I’m 100 percent sure.”

Deepfakes like Oliver Taylor are becoming increasingly common, and dangerous. They can help build “a totally untraceable identity,” said Dan Brahmy, whose Israel-based startup Cyabra specializes in detecting such images. Brahmy added that investigators chasing the origin of such photos are left “searching for a needle in a haystack — except the needle doesn’t exist.”

Return of the neural nets

Deepfakes like Oliver are generated through a technique called “deep learning,” the same technology that powers Google’s real-time translator. Deep learning is a new name for an old approach to artificial intelligence called neural networks, which have been in and out of fashion for more than 70 years. First proposed in 1944 by Warren McCulloch and Walter Pitts, researchers at the University of Chicago, neural networks were mostly abandoned by the 1970s. The death knell came when Marvin Minsky and Seymour Papert, founders of the MIT AI Laboratory, published “Perceptrons.” In the book, they argued that the prevailing approach to neural networks at the time could not be translated effectively into the multi-layered neural networks used by the brain. In the face of this supposedly insurmountable scaling problem, the field moved on.
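To make the Minsky-Papert critique concrete, here is a minimal sketch (our illustration, not from the original reporting) of the kind of single-layer perceptron they analyzed: a weighted sum of inputs passed through a threshold. With hand-picked weights it can compute linearly separable functions like AND, but no choice of weights lets a single such unit compute XOR, which is the limitation “Perceptrons” made famous.

```python
def perceptron(inputs, weights, bias):
    # Fire (output 1) if the weighted sum of inputs crosses the threshold.
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 if total > 0 else 0

# Hand-picked weights that implement logical AND: the unit fires
# only when both inputs are 1 (sum = 2.0, which exceeds the 1.5 threshold).
AND_WEIGHTS, AND_BIAS = [1.0, 1.0], -1.5

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", perceptron([a, b], AND_WEIGHTS, AND_BIAS))
```

The names and weights here are ours, chosen for illustration; the point is simply that one layer of thresholded sums draws a single straight decision boundary.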

The recent resurgence in layered neural networks, the so-called deep-learning revolution, actually began with computer games. The complex imagery and rapid pace of today’s video games demand hardware that can keep up, resulting in the modern graphics processing unit (GPU), which packs thousands of relatively simple processing cores onto a single chip, in some ways remarkably similar to a neural net. This video-game-fueled advance enabled the one-layer networks of the 1960s and the two- to three-layer networks of the 1980s to become the 10-, 15-, even 50-layer networks of today (though 30-layer networks are more common), putting the “deep” in “deep learning.”
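A hedged illustration (again ours, with made-up names and hand-picked weights) of why those extra layers matter: stacking just two threshold layers lets a network compute XOR, which no single-layer perceptron can. Deeper stacks of the same idea are what the “deep” in “deep learning” refers to.

```python
def step(total):
    # Simple threshold activation: fire if the sum is positive.
    return 1 if total > 0 else 0

def layer(inputs, weights, biases):
    # Each output unit takes a weighted sum of all inputs plus a bias.
    return [step(sum(x * w for x, w in zip(inputs, ws)) + b)
            for ws, b in zip(weights, biases)]

def xor_net(a, b):
    # Hidden layer: one unit computes OR, the other NAND.
    hidden = layer([a, b], [[1.0, 1.0], [-1.0, -1.0]], [-0.5, 1.5])
    # Output layer: AND of the two hidden units yields XOR.
    return layer(hidden, [[1.0, 1.0]], [-1.5])[0]

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor_net(a, b))
```

Real deep networks learn their weights from data rather than having them set by hand, and use smooth activations instead of hard thresholds, but the layered structure is the same.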

Thanks to improvements in the field over the last few decades, these deep-learning neural nets now routinely recognize images and interpret natural language more accurately than the humans who programmed them. Much remains unknown about neural nets, inside the black box of self-adjustment and machine learning, but one task they have proved astonishingly good at is both generating and detecting synthetic media of human likenesses, or deepfakes.

Startups such as Israel-based Cyabra and Netherlands-based Sensity are leaders in the visual synthetic-media detection space, but the giants of tech aren’t far behind: in September 2020 Microsoft launched its own deepfake detection tool. Several nation-states are also reported to be building sophisticated detection toolsets, given the national security implications of deepfakes.

Microsoft’s Deepfake Detection Tool in Action

As the market heats up, concern is rising in both Washington and Silicon Valley. Last year House Intelligence Committee chairman Adam Schiff warned that computer-generated synthetic video could “turn a world leader into a ventriloquist’s dummy.” Last month Facebook announced the results of its Deepfake Detection Challenge, an open-source competition intended to help researchers develop tools that automatically identify falsified footage.

Still from MIT’s Moon Disaster Deepfake Project. This is NOT Richard Nixon

Is there a balance to be struck with these incredibly powerful tools, a way to turn these swords into plowshares? Oliver Taylor may have been synthetic, but his ideas and content were real. Did they deserve to be heard, even behind a synthetic face? Times of Israel opinion editor Miriam Herschlag tried to draw a distinction: “Absolutely we need to screen out impostors and up our defenses,” she said. “But I don’t want to set up these barriers that prevent new voices from being heard.”

This post was written by Justin Clapper and edited by Brett Martin at Charge.vc. Previously in our Summer of Synthetic Media series, we looked at what happened when synthetic media went mainstream in the twentieth century. In our next post, we’ll look at the development of generative adversarial networks (GANs) and what they mean for the industrialization of synthetic media content. If you are working in or thinking about the synthetic media space, we’d love to connect! Get in touch with the team at Charge.vc!

Bibliography

Satter, Raphael. “Deepfake Used to Attack Activist Couple Shows New Disinformation Frontier.” Reuters. Thomson Reuters, July 15, 2020. https://www.reuters.com/article/us-cyber-deepfake-activist-idUSKCN24G15E.

Hardesty, Larry. “Explained: Neural Networks.” MIT News, April 14, 2017. https://news.mit.edu/2017/explained-neural-networks-deep-learning-0414.

“A Beginner’s Guide to Neural Networks and Deep Learning.” Pathmind. Accessed July 27, 2020. https://pathmind.com/wiki/neural-network.

Eustachewich, Lia. “MIT Creates Disturbing ‘Deepfake’ Video of Nixon Announcing Apollo 11 Disaster.” New York Post. New York Post, July 20, 2020. https://nypost.com/2020/07/20/mits-deepfake-video-of-nixon-announcing-apollo-11-disaster-surfaces/.

Levy, Steven. “Inside Deep Dreams: How Google Made Its Computers Go Crazy.” Wired, December 2015. https://www.wired.com/2015/12/inside-deep-dreams-how-google-made-its-computers-go-crazy/.
