Controversial video of CNN’s Jim Acosta and the White House intern: why we need a truth layer for video
We think a lot about a future in which video can be maliciously manipulated as easily as applying an Instagram filter: what happens when we can’t trust what we are hearing and seeing?
The conversation about fake video took a step forward yesterday when the White House tweeted a controversial video that purportedly shows “inappropriate behavior” by CNN’s Jim Acosta toward an intern. Was the video doctored or authentic? There are vociferous supporters on both sides.
What we do know unequivocally is that a vast amount of mindshare has been spent on that question in just the past two days. Arguably, it has been a distraction, and that focus could have been better spent on more substantive issues. But this is just the tip of the iceberg.
Fake videos are coming. And it will become ever easier and more automated to maliciously edit a recording to skew a story toward one’s preferred narrative.
This is why my team and I have been building a “truth layer” for video with Amber. Amber can’t stop those intent on sowing chaos. It can, though, remove some of the fuel of chaos and resolve reasonable questions from reasonable people about whether a video is authentic, so they can focus on discussing the implications of what a genuine video depicts.
Here’s how Amber could have neutered questions around the Acosta video’s veracity:
- The Amber technology would be in the cameras that C-SPAN used to record the press conference with President Trump. The recording would be fingerprinted and a chain of custody would be created for that footage.
- The footage from those cameras would be shared with stakeholders such as media organizations and the White House communications team.
- Media organizations would view the footage and create and combine clips of the relevant parts. Those clips, even in a combined video composed of a number of shots and soundbites, would maintain a mathematical linkage to the original footage. The fingerprint of a video persists throughout its life, from recording to post production to distribution.
- If a video is uploaded to Twitter and Twitter incorporated Amber’s tech, the video’s fingerprints would be compared to the original ones. If the video had alterations, Amber would identify where in the video they are and communicate that to the platform, which could choose to communicate that to its users. In a fake-video incident, it is likely that most of the video would be authentic and only a snippet would have been doctored. Amber works at a granular level and shows which segments are inauthentic.
- Amber doesn’t abstract trust away to itself; Amber is increasing trustlessness. People don’t need to trust Amber if they have doubts. They can run the fingerprinting process themselves, compare the results to the originals, and examine the provenance of the video. Sure, it is not the simplest of tasks, but if someone has doubts and the situation merits verification, it is actually quite straightforward.
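To make the granular comparison in the steps above concrete, here is a minimal sketch of segment-level fingerprinting. This is a hypothetical illustration, not Amber’s actual algorithm: the function names, the use of SHA-256, and the 64 KB segment size are all assumptions chosen for the example. The idea is simply that hashing a recording in segments, rather than as one blob, lets a verifier point to *where* a video was altered instead of only saying *that* it was.

```python
import hashlib

# Hypothetical segment size; a real system would likely segment by
# frames or GOPs rather than raw byte ranges.
SEGMENT_SIZE = 1 << 16  # 64 KB

def fingerprint_segments(data: bytes, segment_size: int = SEGMENT_SIZE) -> list[str]:
    """Hash fixed-size segments so tampering can be localized, not just detected."""
    return [
        hashlib.sha256(data[i:i + segment_size]).hexdigest()
        for i in range(0, len(data), segment_size)
    ]

def locate_alterations(original: list[str], suspect: list[str]) -> list[int]:
    """Return indices of segments whose fingerprints no longer match the original's."""
    return [i for i, (a, b) in enumerate(zip(original, suspect)) if a != b]
```

Under this sketch, a platform holding the original fingerprints can run `locate_alterations` on an uploaded copy: an untouched video yields an empty list, while a doctored one yields exactly the segments that were edited, leaving the rest verifiably authentic.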
Reasonable people can then have unequivocal confidence in whether a video is authentic or not. (Sorry, we can’t do anything about conspiracists.)
And what to make of someone’s particular actions in an authentic video… that is still very much open to human debate.
To find out more about Amber’s technologies, see our website: https://ambervideo.co