How DeepFakes Could Restore Privacy
Over the past decade, video surveillance has become ubiquitous, significantly eroding privacy in nearly every aspect of our lives.
The sad truth is that today anyone with a laptop and an internet connection can create realistic videos of anyone doing or saying almost anything. DeepFake technologies allow users to synthesize human images (both pictures and video) using easy-to-use generative adversarial networks, a machine-learning technique. Observers are warning our leaders that DeepFakes could be used to interfere with elections, wrongly convict suspects, and harass or embarrass any of us. But I can’t help but wonder if the opposite could be true. What if DeepFakes could actually give ALL of us back the privacy we’ve lost to ubiquitous video surveillance?
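To see why these networks are so effective, it helps to look at the adversarial idea itself: a generator makes fakes while a discriminator tries to tell them from real data, and each improves by competing with the other. Here is a deliberately tiny sketch of that loop, assuming toy one-dimensional "images" (real data drawn from a normal distribution around 3) and two-parameter models; real DeepFake systems use deep convolutional networks, but the tug-of-war is the same.

```python
import numpy as np

# Toy GAN sketch: all models here are illustrative stand-ins.
# Real data ~ N(3, 1); the generator is an affine map of noise.
rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

a, b = 1.0, 0.0   # generator: fake = a * z + b
w, c = 0.1, 0.0   # discriminator: P(real) = sigmoid(w * x + c)
lr = 0.01

for step in range(2000):
    z = rng.standard_normal(64)
    real = rng.normal(3.0, 1.0, 64)
    fake = a * z + b

    # Discriminator ascends log d(real) + log(1 - d(fake)).
    d_real, d_fake = sigmoid(w * real + c), sigmoid(w * fake + c)
    w += lr * (np.mean((1.0 - d_real) * real) - np.mean(d_fake * fake))
    c += lr * (np.mean(1.0 - d_real) - np.mean(d_fake))

    # Generator ascends log d(fake) (the "non-saturating" objective),
    # nudging its samples toward whatever the discriminator calls real.
    grad_fake = (1.0 - sigmoid(w * fake + c)) * w
    a += lr * np.mean(grad_fake * z)
    b += lr * np.mean(grad_fake)

# After training, the generator's shift b has drifted toward the real
# mean of 3: the fakes have learned to resemble the real data.
print(a, b)
```

The key design point is that neither model is told what "real" looks like directly; the generator only ever sees the discriminator's verdicts, yet its output distribution converges toward the real one, which is exactly what makes convincing fakes so cheap to produce.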
Experts suggest that the average person is caught on fixed surveillance cameras more than 50 times each day; include city and traffic cameras and the number climbs to more than 100. Of course, this doesn’t take into account the 2.7 billion people who carry smartphones with video cameras, more than a billion of whom collectively upload over 300 hours of video to YouTube each MINUTE. You might be surprised to learn that video evidence is used in 80% of criminal prosecutions. Video can be used to instruct, inform, explain, harass, embarrass, or convict. Good or bad, video is here to stay, and it is only going to get worse. But what if we could no longer trust the video we saw online?
More than a decade ago, I recall seeing the following YouTube video of Bruce Lee playing ping-pong with nunchaku:
The video was uploaded without any explanation and seemed VERY realistic, which was exactly the goal of the ad agency that commissioned it for Nokia. It was created using a look-alike actor pretending to play ping-pong with nunchaku; the sound and the ping-pong ball were added in post-production, making the video look very real. People STILL debate whether or not it is a real video of Bruce Lee (it is not).
Fake imagery, video or otherwise, is nothing new. What is new is the democratization of fake video that artificial intelligence has made possible. Soon everyone will know how simple it is to create a DeepFake video. I predict two things will happen: first, a tsunami of fake videos will make its way onto the internet; second, people will stop trusting video as evidence of fact. AI expert Alex Champandard explained that DeepFakes “could throw society back one hundred years, at a time when images and videos were not yet true.” He went on to stress that the solution would not be found through technology but through education: “It will be about establishing a relationship of trust with journalists, press houses, publishers or other sources of information.”
I think the unintended consequence of DeepFakes will likely be that we get our privacy back. If video can no longer be trusted to convict us in the court of public opinion or in the actual courts, I suspect we’ll be one step closer to regaining some of the privacy we’ve lost over the past decade or so. The next time a video of Scarlett Johansson having sex surfaces (real or not), we’ll all simply assume it is a fake. Governments across the globe are passing laws designed to stop DeepFakes, but it is simply too late. The cat is out of the bag; AI and its impacts are here to stay. Hopefully the unintended consequence will be to restore a little of our lost privacy.
About The Author
Alexander Muse is a serial entrepreneur, author of the StartupMuse, contributor to Forbes and managing partner of Sumo. Check out his podcast on iTunes. You can connect with him on Twitter, Facebook, LinkedIn and Instagram.