Seeing Isn’t Believing

A pessimistic view of privacy and security in 2035.

Kim Brown
4 min read · Oct 29, 2018


We live in a time of rapidly evolving technology. Preserving memories and recording events is inexpensive and convenient, and today video and photography are trusted depictions of actual events. Smartphones, gaming systems, laptops, tablets, smart home devices (like the Amazon Echo), security cameras, webcams, and traffic cameras are just some of the recording devices we come into contact with on a daily basis. But what if these cameras were recording our every move, or, worse, what if they were hacked?

Video is one of the most powerful tools for fighting back against crime and unfair treatment, including police brutality, racism, and neighborhood crime. It's used by brands, police departments, journalists, and even lawyers in the courtroom. However, a growing number of scientists and government officials are concerned about deep fake video editing. Fake videos and photos are not only difficult to spot; they're also difficult to forget. According to Vox, our brains are susceptible to forming false memories, and fake videos can rewrite what we remember.

Artificial intelligence can analyze video quickly and pinpoint a possible crime or a specific person. A number of police departments currently use real-time face recognition, which scans the faces of pedestrians walking down the street. According to the Center for Privacy & Technology at Georgetown Law, one in two American adults is in a law enforcement face recognition network.
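
To make that matching step concrete, here is a minimal sketch of one-to-many face matching using the open-source face_recognition Python library. The file names, gallery size, and tolerance value are illustrative assumptions, not any police department's actual system.

```python
# A minimal sketch of one-to-many face matching with the open-source
# `face_recognition` library. File names are hypothetical placeholders;
# a real law-enforcement gallery would hold millions of entries.
import face_recognition

# Build a tiny "gallery" of known faces.
known_image = face_recognition.load_image_file("gallery/person_001.jpg")
known_encoding = face_recognition.face_encodings(known_image)[0]

# Encode a face captured from a street-camera frame.
probe_image = face_recognition.load_image_file("frames/pedestrian.jpg")
probe_encodings = face_recognition.face_encodings(probe_image)

for probe in probe_encodings:
    # compare_faces returns True when the distance between encodings falls
    # below a tolerance (default 0.6); looser tolerances raise false matches.
    match = face_recognition.compare_faces([known_encoding], probe, tolerance=0.6)
    distance = face_recognition.face_distance([known_encoding], probe)[0]
    print(f"match: {match[0]}, distance: {distance:.3f}")
```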

The problem with face recognition is that it is not 100% accurate. Multiple studies report that African Americans, women, and other minority groups are more likely to be misidentified by facial recognition technology. Researchers at the MIT Media Lab found error rates of up to 35 percent, with the highest error rates for darker-skinned women.

As deep fakes continue to improve in quality and mass surveillance increases, how will digital video recordings be able to prove authenticity? How will we be able to distinguish between actual events and manipulated videos?

A Pessimistic Future Scenario

It’s 2035. Deep fake technology has created a nation where you can no longer believe what you see. Video and photography are no longer admissible in court, and people are being framed with no way to clear their names. Instead of innocent until proven guilty, defendants are guilty until proven innocent.

Around 2018, the United States adopted a public mass surveillance system along with a national facial recognition database. Growing tired of privacy violations and biased facial recognition systems, the public focused on regaining privacy and created advanced anti-facial-recognition gear. Masks, makeup, and blankets concealed identities, and the technology eventually evolved into masks that convinced A.I. systems the wearer was someone else entirely.

A desire for privacy led to a societal transition. Companies such as Factom, in Austin, were among the first to use blockchain technology to encrypt and timestamp photography and video. The U.S. government used this technology to watermark presidential press conferences, and news channels quickly followed suit to prove authenticity. By 2030, personal body cameras with blockchain-based encryption allowed users to prove their time and location. Digital video forensic experts became necessary in court cases, allowing audiovisual evidence to be used in court once again.
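
As an illustration of the general technique (not Factom's actual product or API), here is a minimal sketch of how footage could be fingerprinted and chained so that later edits or back-dated recordings become detectable. The field names, file paths, and coordinates are hypothetical.

```python
# A minimal sketch of blockchain-style video authentication: anchor a hash of
# the footage, plus time and location metadata, so any later edit to the file
# can be detected. Illustrative only; not any vendor's actual API.
import hashlib
import json
import time

def fingerprint_video(path: str, prev_record_hash: str, lat: float, lon: float) -> dict:
    """Hash the raw video bytes and bind them to a time, place, and prior record."""
    sha256 = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            sha256.update(chunk)
    record = {
        "video_sha256": sha256.hexdigest(),
        "recorded_at": int(time.time()),
        "location": {"lat": lat, "lon": lon},
        "prev_record": prev_record_hash,  # chaining records makes back-dating detectable
    }
    # In practice this record (or its hash) would be published to a public ledger.
    record["record_hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    return record

# Hypothetical usage: a body camera fingerprints each clip as it is saved.
# print(fingerprint_video("clip_0001.mp4", prev_record_hash="0" * 64, lat=30.2672, lon=-97.7431))
```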

Encrypting video and photography became the standard across all open and CCTV channels, and body cameras created a safety net while turning the country into a nation of sousveillance. Deep fake videos became the newest method of cyberbullying and led to an increase in teenage and pre-teen suicide rates. Body cameras began popping up in high schools.

In 2035, Bubb, the first successful children’s body camera, launched a PSA campaign that helped parents talk to their children about privacy and identity protection. In addition to allowing parents to check in on their kids throughout the day, Bubb gave parents peace of mind that their children could maintain their innocence and avoid falling prey to deep fake editing.
