Deepfakes are becoming a bigger issue
Big Tech's efforts to create deepfake detection tools are in full swing, but will they succeed?
Deepfakes are synthetic media generated by AI, and they represent another dark side of the technology. A deepfake is created by pitting two computer programs against each other in an architecture called a Generative Adversarial Network, or GAN: one network generates fake media while the other tries to tell it apart from the real thing, and each improves by competing with the other.
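To make the idea concrete, here is a minimal, heavily simplified sketch of the adversarial training loop. Real GANs generate images with deep neural networks; this toy version uses one-dimensional numbers instead of pixels, a two-parameter linear generator, and a logistic-regression discriminator, so the specific formulas and learning rate are illustrative assumptions, not any production system.

```python
import numpy as np

rng = np.random.default_rng(0)

def real_batch(n):
    # Stand-in for "real" media: samples from a Gaussian centered at 4.0
    return rng.normal(4.0, 0.5, size=n)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Generator: maps random noise z to a sample, x = a*z + b (two learnable scalars)
a, b = 1.0, 0.0
# Discriminator: logistic classifier D(x) = sigmoid(w*x + c)
w, c = 0.1, 0.0

lr = 0.05
for step in range(2000):
    z = rng.normal(size=32)
    fake = a * z + b
    real = real_batch(32)

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0
    d_real = sigmoid(w * real + c)
    d_fake = sigmoid(w * fake + c)
    grad_w = np.mean((1 - d_real) * real) + np.mean(-d_fake * fake)
    grad_c = np.mean(1 - d_real) + np.mean(-d_fake)
    w += lr * grad_w
    c += lr * grad_c

    # Generator step: push D(fake) toward 1, i.e. learn to fool the discriminator
    d_fake = sigmoid(w * fake + c)
    grad_a = np.mean((1 - d_fake) * w * z)   # chain rule through x = a*z + b
    grad_b = np.mean((1 - d_fake) * w)
    a += lr * grad_a
    b += lr * grad_b

# After training, the generator's output distribution should have drifted
# toward the real data's mean of ~4.0
gen_mean = np.mean(a * rng.normal(size=1000) + b)
```

The key dynamic is visible even in this toy: whenever the discriminator finds a way to separate real from fake, the generator's update moves its samples in exactly the direction that erases that difference.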
Although this form of artificial intelligence has been in the spotlight for most of 2019, the story that blew the issue wide open involved a LinkedIn profile under the name Katie Jones, which appeared on the platform and started connecting with the who's who of the political elite in Washington, D.C.
The ease with which deep learning created a lifelike image of a person that then penetrated social media was alarming not just for lawmakers and regulators, but for the general public as well. Lawmakers are especially worried about how this could be used to manipulate the 2020 presidential election in the U.S. People falling prey to misinformation can greatly jeopardize the transparency of the democratic process.
Over the past year, these GANs have become so good at synthesizing media that people may soon be unable to distinguish real from fake. Experts, therefore, have their work cut out as they try to find viable solutions to this conundrum. U.S. lawmakers met with tech giants like Google and Facebook over the summer to discuss the effects of this disruptive and deceptive technology and, more importantly, to find better ways to detect deepfakes.
Since then, the big tech companies have gotten to work combating the problem by making their own sets of deepfakes. Facebook is teaming up with Microsoft and seven academic institutions in the U.S. for a 'Deepfake Detection Challenge' meant to create technology that detects deepfakes; the contest is expected to run from late 2019 into 2020.
However, training an AI algorithm to single out deepfakes requires massive amounts of data. The social media giant has therefore decided to build a database of doctored videos, made with paid, consenting actors, to create improved tools that can effectively combat this threat.
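Why does detection need so much labeled data? At its core, a deepfake detector is a classifier trained on examples labeled real or doctored. The sketch below is a deliberately simplified assumption of how that works: the two "features" per video (a blink-rate statistic and a compression-artifact score) are hypothetical stand-ins I have invented for illustration; actual detectors learn far richer features directly from pixels and temporal cues.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical per-video features: [blink-rate statistic, artifact score].
# Real and doctored videos are assumed to cluster differently in this space.
real = rng.normal([0.6, 0.2], 0.1, size=(500, 2))   # videos labeled "real"
fake = rng.normal([0.3, 0.5], 0.1, size=(500, 2))   # videos labeled "doctored"

X = np.vstack([real, fake])
y = np.concatenate([np.zeros(500), np.ones(500)])   # 1 = deepfake

# Logistic-regression detector trained by plain gradient descent
w = np.zeros(2)
b = 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted probability of "fake"
    w -= 1.0 * (X.T @ (p - y) / len(y))
    b -= 1.0 * np.mean(p - y)

preds = (1.0 / (1.0 + np.exp(-(X @ w + b)))) > 0.5
acc = np.mean(preds == y)
```

The point of the big labeled databases is exactly this: without many examples of both classes, the detector has nothing to learn the decision boundary from, and deepfakes are too varied for a handful of samples to cover.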
Working along similar lines, Google roped in 28 actors to release a huge database of deepfake videos. These 3,000 AI-generated videos were made using various publicly available algorithms. The open-source database is meant to accelerate efforts to build deepfake detection tools.
Earlier this year, an academic team led by a researcher from the Technical University of Munich created a similar database of 1,000 compiled YouTube videos, dubbed FaceForensics++. The idea behind all these efforts is the same: to create large samples of data for training and testing automated deepfake detection tools.
This is all well and good, but a problem arises once a detection tool is built to exploit a flaw in a deepfake generation algorithm: the algorithm can simply be updated to fix that flaw and beat the detection tool the next time around.
The dishing out of deepfake resources, meanwhile, continues unabated. A company called Icons8 has launched a website offering 100,000 AI-generated faces to anyone who can use them, royalty-free. The team behind the project is a designer marketplace for icons and photographs. It eventually intends to produce an API through which new photographs can be generated from a variety of inputs without any copyright worries.
This is not the first endeavor to put fake AI-generated headshots online. Earlier this year, another website, ThisPersonDoesNotExist.com, created by Philip Wang, a software engineer at Uber, came to the limelight. Each time you refresh the page, it generates a new lifelike image of a non-existent person. Although the website was created to introduce people to deepfake technology, resources like it can easily be put to malicious use.
It seems the cat-and-mouse game has just begun…