David Doermann, former MediFor head at DARPA, joins Amber Video

Amber Video · Feb 25, 2019 · 7 min read
David Doermann, Advisor at Amber Video, Director of the AI Institute (UB), and former PM of MediFor at DARPA. Photo: CBS News

Dr. David Doermann is an early pioneer of fake video identification.

He founded the famed Media Forensics program, or MediFor for short, at the Defense Advanced Research Projects Agency (DARPA), the agency of the United States Department of Defense responsible for developing emerging technologies for the military.

He’s been thinking about fake audio and video — and its consequences — for decades.

Imagine that a terrorist group in Pakistan's North-West Frontier releases a video containing threats against the United States.

Is the video real? Should the US send expensive assets into action to confirm, or strike preemptively? Is it a fake by a competing group, or by another country that wants to see the US engaged in a needless, distracting battle?

Without confidence in a video’s veracity, subsequent decisions based on disinformation could be wildly flawed.

The impact of fake video may not be as grave for most users as it is for the DoD, but the costs are still very real. Solving fake video and mitigating the grave social and economic harm it causes at scale is at the core of what we do at Amber Video.

I am incredibly grateful that David is on our team at Amber Video, helping us develop threat models, understand the intent of bad actors, and track the ever-evolving ways they seek to deceive.

Dr. David Doermann is a Professor of Empire Innovation and the Director of the Artificial Intelligence Institute at the University at Buffalo (UB). At DARPA he developed, selected, and oversaw research and transition funding in the areas of computer vision, human language technologies, and voice analytics. From 1993 to 2018, David was a member of the research faculty at the University of Maryland, College Park.

In his previous role at the Institute for Advanced Computer Studies, he served as Director of the Laboratory for Language and Media Processing and as an adjunct member of the graduate faculty for the Department of Computer Science and the Department of Electrical and Computer Engineering. He and his group of researchers focused on innovative topics in the analysis and processing of document images and video, including triage, visual indexing and retrieval, enhancement, and recognition of both textual and structural components of visual media.

David has over 250 publications in conferences and journals, is a fellow of the IEEE and IAPR, has received numerous awards, including an honorary doctorate from the University of Oulu, Finland, and is a founding Editor-in-Chief of the International Journal on Document Analysis and Recognition.

Welcome aboard, David. We are grateful to have your wisdom on the team.

1. Give us the origin story of Media Forensics at DARPA/US Military. What is it that you foresaw clearly, virtually before anyone else?

The MediFor program was conceived out of the realization that we have traditionally been able to trust our visual media, but that it wouldn’t be so for long. While the written word has always been open to interpretation, we typically adhere to the adage “seeing is believing.”

We began to notice our adversaries publishing manipulated content, initially in open-source media and later in state-supplied publications. We also saw a wide variety of image and video manipulation capabilities available to novices (even on mobile devices), while few tools existed that were robust and scalable enough to address the problem and protect our national security.

While some academic work was being done, it was clear to me that the problem required a unified approach, with funding and leadership that only an organization such as DARPA could provide.

2. What was the biggest surprise you encountered while working on Media Forensics?

The biggest technological surprise was the rapid development of automatically generated “fake” content by amateurs using Generative Adversarial Networks (GANs). Five years ago, no one could imagine anything more than an advanced avatar being generated completely automatically, let alone realistic images of people, places, and objects that do not really exist. While algorithms such as deepfakes modify existing video and often leave trace evidence of the modification that can be detected with today’s technology, systems that generate content from scratch are much more difficult to detect.

The fact that images, audio, and video can be generated overnight by relative amateurs with a computer and a GPU suggests that we have a lot of work ahead of us for when our adversaries decide to create disinformation and propaganda content from scratch, and at enterprise scale.

3. What achievement that came out of MediFor are you most proud of?

I am most proud of the fact that we were able to bring the best researchers in the world together to address this problem in a unified way. It was clear from the beginning that no single approach would be able to make a dent in the problem. The program stimulated the research community and raised awareness, not only in the government but also among the general public, that we are working on solutions.

4. Has MediFor achieved its mission? What’s next?

MediFor is absolutely achieving its mission of closing the gap between the creators of modified content who want to use it for deceptive purposes and the organizations that require tools to identify it. But there is no doubt that this problem requires a long-term commitment, not only to research but to changing users’ attitudes toward sharing and posting misinformation. As long as we continue to propagate this type of media, especially when it is not clearly intended as entertainment, it will be difficult to stop.

What’s next? We need to find a way to get this technology into the hands of individual users who want to question what they are exposed to, and to get a commitment from the gatekeepers of our media to provide tools when they are desired.

5. Why did you join Amber Video and what are you most excited for?

Amber Video, from the outset, took a fundamentally different approach to ensuring the integrity of media. It has the IP and the capability to provide an efficient cryptographic representation of a video that can be used to detect possible manipulation, and it does this in a way that is in keeping with the nature of audio/video recordings: they are clipped, and clips are combined. This ensures that content, and its derivatives, can be secured from recording to distribution.

While MediFor focuses on the users who are consuming the data — and Amber now has tech in this area too — Amber’s Authenticate technology allows content providers to guarantee that the content provided to the user, with its immutable and transparent chain of custody, is authentic.
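To make that concrete, here is a minimal sketch of how a clip-aware cryptographic fingerprint could work in principle. It is an illustration only, not Amber's actual implementation; the fixed-size chunking, the SHA-256 hash chain, and the chunk_hashes/verify helpers are all assumptions made for the example.

```python
# Illustrative only: a hash chain over fixed-size chunks of a recording.
# Any edit, deletion, or reordering of a chunk changes every hash after it,
# so footage can be checked against a chain captured at recording time.

import hashlib
from typing import List

def chunk_hashes(video_bytes: bytes, chunk_size: int = 1 << 20) -> List[str]:
    """Hash each chunk together with the previous hash, forming a chain."""
    hashes: List[str] = []
    prev = b""  # empty seed for the first link in the chain
    for start in range(0, len(video_bytes), chunk_size):
        chunk = video_bytes[start:start + chunk_size]
        digest = hashlib.sha256(prev + chunk).hexdigest()
        hashes.append(digest)
        prev = digest.encode()
    return hashes

def verify(video_bytes: bytes, recorded: List[str], chunk_size: int = 1 << 20) -> bool:
    """Recompute the chain and compare it to hashes captured at the source."""
    return chunk_hashes(video_bytes, chunk_size) == recorded
```

Because each link folds in the one before it, a clip cut from the middle of a recording can still be validated at its original position given the preceding hash, while a single altered byte invalidates everything downstream, which fits the clipped-and-recombined nature of real footage.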

This type of out-of-the-box thinking by Amber is critical to countering nimble, well-resourced bad actors who are intent on sowing chaos and undermining democracies.

I am also very excited by Amber’s Detect, its complementary software approach to post-facto detection. Getting Authenticate to fingerprint at the source is the best way to establish veracity, but it will take time to get it onto recording hardware. Supplementing Amber Authenticate with Amber Detect presents a shrewd one-two approach: Detect analyzes keyframes, artifacts, metadata, and audio tracks to answer the question, does this uploaded video contain a malicious alteration that seeks to sow disinformation and distrust?
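As an illustration of that one-two structure, a post-facto detector in this spirit can be sketched as a set of independent analyzers, one per signal, whose findings are reported side by side. Everything below is hypothetical: the Evidence type, the analyze function, and the placeholder analyzers are inventions for the example, and real analyzers would actually decode frames, parse container metadata, and inspect audio tracks.

```python
# Hypothetical skeleton of a multi-signal, post-facto detection pipeline.
# Each analyzer inspects one signal (keyframes, metadata, audio, ...) and
# returns a suspicion score; the pipeline collects the scores per signal.

from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Evidence:
    signal: str   # which signal was inspected, e.g. "keyframes" or "metadata"
    score: float  # 0.0 = consistent with an unaltered recording, 1.0 = highly suspicious
    note: str     # human-readable explanation of the finding

def analyze(video_path: str, analyzers: List[Callable[[str], Evidence]]) -> Dict[str, float]:
    """Run every analyzer over the file and collect per-signal scores."""
    return {e.signal: e.score for e in (a(video_path) for a in analyzers)}

# Placeholder analyzers; real ones would decode frames and parse the container.
def keyframe_consistency(path: str) -> Evidence:
    return Evidence("keyframes", 0.1, "inter-frame statistics look uniform")

def metadata_check(path: str) -> Evidence:
    return Evidence("metadata", 0.0, "creation and encode timestamps agree")

if __name__ == "__main__":
    print(analyze("upload.mp4", [keyframe_consistency, metadata_check]))
```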

6. You are an AI expert: what applications of AI are you most excited about?

Computers have always been good at supporting processes that humans find difficult, such as sifting through large amounts of data and analyzing it much faster and with greater accuracy. But AI is now being developed that can support not only more efficient computation but also the cognitive processes of its users. We are entering a time when human/machine partnerships will prove able to outperform humans or machines working individually. As the technology advances, computers will move beyond being massive data-processing machines that learn and feed back information; they will come to know and supplement the strengths and weaknesses of their human partners, guiding them in their decision processes.

7. What do most people misunderstand or not get about fake news, disinformation and propaganda, and AI deepfakes?

I think people do not get that these issues are not really about inaccurate or false information. This content is generated to satisfy an agenda, and it is fed to people with similar views who will not question it, even when presented with evidence to the contrary. That is one thing that makes fake video so very difficult to counter.

8. Has the Internet made society more vulnerable, and if so, how?

I think it is a combination of things. Clearly the Internet has contributed to the fact that information, both accurate and inaccurate, can spread much faster around the world. But technology has also advanced so that we can manipulate information more easily, and in many cases automatically. These factors, combined with the fact that we are exposed to so much more information and can use the Internet freely to advance any agenda we want, make the Internet a breeding ground for deception.

9. What keeps you up at night?

The realization that our adversaries will soon be able to create entire campaigns, with multiple modalities of text, image, and video content, that will be spread quickly by those who do not carefully consider the possible consequences — and that we may be able to do little to protect ourselves in the first stages of such an “attack.”
