How to Share the Tools to Spot Deepfakes (Without Breaking Them)

Beware of impostors

Framing the Detection Dilemma

The Who, What, and How of Detection Access

  • Who gets access? This involves identifying types of actors and then specific organizations and individuals.
  • What access does the “who” have? This includes the “strength” of the detection tools each “who” can access, the types of access and training they have, and who they can reach for support.
  • How is the “who” chosen? This involves governance: both setting up initial vetting processes and evolving them as needs change.

Theme 1

“It is possible to give an explanation and understand what is happening instead of just saying it is fake or not.”
—Luisa Verdoliva

Theme 2

“The best strategy for us as a community is to continue to invest in education of the public on how to consume media.”
—Chris Bregler

Theme 3

“Even if these things work and they’re perfectly distributed, how can we ensure people believe that they’re outputting the right results?”
—Pedro Noel

Theme 4

“It’s necessary to consider the question of how we design the selection criteria and processes, and whether those criteria should look the same in all markets. Especially if we’re talking about equity.”
—Rosemary Ajayi

Theme 5

  1. Complete access to source code, training data, and executable software. This provides unlimited use and the ability to construct improved versions of the detector. In the case of unauthorized access, this would allow an adversary to easily determine the detector’s algorithm and its potential blind spots, such as manipulations not represented in the training data.
  2. Access to software that you can run on your own computer, e.g., a downloadable app. This gives vetted actors unlimited use. In the case of unauthorized access, such an app gives adversaries the opportunity for reverse engineering and an unlimited number of “black box” attacks, which attempt to create an undetectable fake image by testing many slight variations (a minimal sketch of such a probing loop appears after this list).
  3. Detection as an open service. Several commercial deepfake detection services allow anyone to upload an image or video for analysis. This access can be monitored and revoked, but if it is not limited in some way it can be used repeatedly in “black box” fashion to determine how to circumvent that detector.
  4. Detection as a secured service. In this case the server is managed by a security-minded organization, and vetted actors are granted access. If an adversary gains access to an authorized user’s account, a suspicious volume or pattern of queries can be detected and the account suspended (a sketch of this kind of query monitoring also follows the list).
  5. Detection on demand. In this case a person or organization that does not normally do digital forensics forwards (escalates) an item to an allied group that has one of the access types described above.
  6. Developer access only. A single organization controls the unpublished technology, does not disclose the detector’s existence, and never allows external parties to observe or infer its predictions.
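The “black box” risk in items 2 and 3 is mechanical enough to sketch. Below is a minimal, hypothetical illustration in Python: the toy query_detector stands in for an unmetered detection endpoint, and a naive random search stands in for the attacker’s (usually far smarter) optimization. None of the names or numbers here reflect any real service.

```python
# Hypothetical sketch of black-box probing against an unmetered detector.
import numpy as np

rng = np.random.default_rng(0)
_w = rng.normal(size=(8, 8))  # toy "model" weights, purely illustrative


def query_detector(image: np.ndarray) -> float:
    # Stand-in for an open detection service: the caller sees only a
    # fake-probability in [0, 1], never the model internals.
    return float(1.0 / (1.0 + np.exp(-np.sum(_w * image))))


def black_box_probe(fake: np.ndarray, budget: int = 10_000, eps: float = 0.01):
    # Random-search evasion: keep any small perturbation that lowers the
    # detector's score. With an unlimited query budget, even this naive
    # loop steadily walks a fake toward the decision boundary.
    best, best_score = fake, query_detector(fake)
    for _ in range(budget):
        candidate = np.clip(best + rng.normal(0.0, eps, best.shape), 0.0, 1.0)
        score = query_detector(candidate)
        if score < best_score:   # detector is now less confident it is fake
            best, best_score = candidate, score
        if best_score < 0.5:     # verdict flipped: an "undetectable" variant
            break
    return best, best_score


if __name__ == "__main__":
    # A toy fake that the toy detector flags confidently.
    fake = (np.sign(_w) + 1.0) / 2.0
    evaded, score = black_box_probe(fake)
    print(f"score before: {query_detector(fake):.3f}, after: {score:.3f}")
```

Real attacks use far more query-efficient strategies than random search, which is why the remaining access types focus on limiting or monitoring queries rather than assuming detectors can simply withstand them.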
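Item 4’s defense can likewise be sketched. The following is a minimal illustration of account-level query monitoring, assuming a sliding one-hour window, invented thresholds, and a perceptual hash so that slight variations of the same fake collide. A deployed service would tune all of these; nothing here describes an actual system.

```python
# Hypothetical sketch of account-level monitoring for a secured service.
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 3600     # examine the last hour of queries per account
MAX_QUERIES = 200         # assumed hourly budget for a vetted account
PROBE_MIN_QUERIES = 40    # volume at which the duplicate check kicks in


class QueryMonitor:
    def __init__(self):
        self.timestamps = defaultdict(deque)  # account -> recent query times
        self.fingerprints = defaultdict(set)  # account -> perceptual hashes
        self.suspended = set()

    def allow(self, account, perceptual_hash, now=None):
        """Record one query; return False (and suspend) on a suspicious pattern.

        perceptual_hash should come from a perceptual hashing scheme
        (e.g. a pHash-style fingerprint) so that slight variations of the
        same fake collide, which is what makes probing visible.
        """
        if account in self.suspended:
            return False
        now = time.time() if now is None else now
        window = self.timestamps[account]
        window.append(now)
        while window and window[0] < now - WINDOW_SECONDS:
            window.popleft()  # drop queries outside the sliding window
        self.fingerprints[account].add(perceptual_hash)
        too_many = len(window) > MAX_QUERIES
        # Many queries concentrated on a handful of near-identical items is
        # the signature of black-box probing with slight variations. (A real
        # system would also expire old fingerprints along with the window.)
        probing = (
            len(window) >= PROBE_MIN_QUERIES
            and len(self.fingerprints[account]) <= len(window) // 10
        )
        if too_many or probing:
            self.suspended.add(account)
            return False
        return True
```

Pairing a volume cap with a duplicate-concentration check matters because probing traffic can stay under a raw rate limit while still hammering variations of a single item.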

Where Do We Go From Here?

  • Create a system of media forensics trainers globally, across regional contexts
  • Develop a coordinated media and information literacy campaign in multiple languages, with emphasis on localization (with local experts featured)
  • Focus simply on literacy and prevention around manipulated media
  • Provide training around detection technology (What kinds of artifacts do detection models pick up on when run on a deepfake video? What are the limits of black box detection models and their explainability?)
  • And, at the most extreme, communicate how little use detectors are to the general public, emphasizing that they are tools for arriving at the truth, not ground truth itself.
  1. While detection is imperfect, it can be a useful tool and technology for mitigating the impact of malicious manipulated media.
  2. Detection should be complemented by other media verification tools, including provenance signals and infrastructure.
  3. Training, support, and education for those using detection tools are just as integral to the utility of detection as the actual robustness of the models. If interpreters are not aware of the limits of detection, or do not understand that a detection signal is only one component of evaluating the truthfulness of content, the tools will be rendered ineffective.
  4. Detection tools and technologies must be meaningfully deployed to journalists, fact-checkers, and civil society around the world, without sacrificing detection utility due to adversarial risk.
  5. Establishing an infrastructure in which detection is deployed as a secure service, likely by independent nonprofits with regional sensitivity around the world, will best alleviate the detection dilemma.
  6. Escalation approaches should be built out to mitigate the disparities in access to forensic capabilities between countries.

Partnership on AI

The Partnership on AI is a global nonprofit organization committed to the responsible development and use of artificial intelligence.