Why is it so hard to get rid of deepfakes?

The troubles of detecting and regulating novel technology

Kingston Kuan
Predict
5 min read · May 7, 2022


Photo by Sander Sammy on Unsplash

On April 17, 2018, the world was greeted by former President Barack Obama saying outrageous things in a video titled “You Won’t Believe What Obama Says In This Video!”. About 30 seconds in, it is revealed that the video is a deepfake that lets comedian Jordan Peele speak through the face and voice of “Obama”. As technology advances, ever more sophisticated forms of media manipulation, spanning both video and sound, have become possible and must be addressed quickly.

Former President Barack Obama deepfake by Jordan Peele

Deepfake technology can produce fake content that is indistinguishable from real content to the human eye. It has risen to notoriety through its abuse in sensational media: the spread of misinformation and the infringement of privacy. However, not only is it impossible to eradicate these misuses completely, but the core technology also has legitimate benefits and thus does not warrant an outright ban. Regardless of the benefits or detriments, the advent of deepfake technology means media regulations must keep pace before widespread use becomes commonplace. While technology to definitively identify deepfakes is still being developed, new regulations should be put in place to cater to their uses. Specialized categories should be set up on platforms to ensure users comply with the rules, and technical agencies must be established for appropriate policing.

Photo by Jorge Franganillo on Unsplash

Manual manipulation of images and videos is neither new nor rare, and the war on misinformation has been fought on many fronts, from digital editing in marketing to false evidence in court. A new hurdle has now appeared: deepfakes created by artificial intelligence. Worse, the very technology capable of detecting deepfakes can also be used to improve them, making them even harder to detect. In a battle of computing power and datasets, it is imperative not only to support detection research but also to keep its results away from nefarious users. Yet the development of such powerful technology has traditionally been open source, made possible only by the collaboration of many individuals across the web. It is not reasonable to rely on such loosely organized teams to store their algorithms securely. A better approach is hybrid development, where a trusted party such as a government body or large private institution manages the project while still allowing open contributions.
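The arms-race dynamic described above, where detectors improve and generators learn to beat them in turn, can be illustrated with a deliberately toy simulation. Nothing here is a real deepfake pipeline: the "realism" score, the thresholds, and the step sizes are all hypothetical numbers chosen only to show why each side's progress drives the other's.

```python
import random

def run_arms_race(rounds=50, seed=0):
    """Toy sketch of the generator-vs-detector feedback loop.

    A 'generator' produces fakes described by a single realism value in
    [0, 1]; a 'detector' flags anything below its threshold. Each round,
    a caught fake pushes the generator to improve, while a fake that
    slips through pushes the detector to tighten its threshold.
    """
    rng = random.Random(seed)
    realism = 0.2      # generator's starting output quality (hypothetical)
    threshold = 0.5    # detector flags samples with realism < threshold
    for _ in range(rounds):
        # generator output varies a little around its current skill level
        sample = min(1.0, max(0.0, realism + rng.uniform(-0.05, 0.05)))
        if sample < threshold:
            # fake was caught: generator adapts and improves
            realism = min(1.0, realism + 0.05)
        else:
            # fake slipped through: detector adapts and tightens
            threshold = min(1.0, threshold + 0.02)
    return realism, threshold

final_realism, final_threshold = run_arms_race()
```

Running this, both numbers climb together: every detection success trains a better generator, and every generator success trains a stricter detector, which is the core reason the article argues detection alone cannot end the problem.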

Photo by Adem AY on Unsplash

Technology evolves rapidly, and where public regulations fail to keep up, fast-moving private bodies are well suited to fill the gap. Given that the open-source nature of artificial intelligence development is unlikely to change significantly, regulation proves even more crucial than combative technology. Media platforms such as YouTube or Facebook are often in the best position to regulate deepfake technology. Not only do they have access to large datasets, but they have already been dealing with similar hostile uses of their platforms and have amassed many skilled employees. The internet has already granted these privately owned platforms immense power, enabling them to affect economics, politics, and societies at scale. With that power should come the responsibility to police their own platforms, and penalties for failing to invest in countermeasures.

Current measures to combat deepfakes have tried to put a lid on them rather than integrate them. Yet completely erasing deepfakes is a fruitless endeavor. It would be more productive to embrace deepfakes with their own category on these platforms for creative uses. In this manner, the technology can continue to develop while being healthily regulated. By complying with ethical and legal standards, personal privacy and commercial rights can be protected more smoothly. Desensitisation will also play an essential role in helping the public understand deepfake technology, which in turn helps them identify and report illegal activity. With the stigma around deepfakes shifted away from sensational topics such as politics and obscene material, research into combative technologies also gains an edge in financial and technical support.

Photo by Tingey Injury Law Firm on Unsplash

Existing agencies still play important roles in attempting to regulate deepfakes, but their current capabilities are highly restricted. A new agency would be better suited to dealing with the impending threat. This agency must be technically equipped to handle deepfakes and able to administer appropriate penalties to rule breakers. As the internet has no borders, an international agency, analogous to what the World Health Organisation is to healthcare, needs to create general guidelines that can then be tailored to individual countries’ needs. This body should also be able to sanction national agencies that fail to police their own jurisdictions properly.

It is also worth analyzing the deepfake situation from a social perspective. We need to understand the reasons behind the creation of deepfakes and their uses. However advanced the technology gets, and even though the intelligence involved is artificial, there is always a human aspect to how and when it is used. By focusing on communities and their technological literacy, we can reduce malicious use and correct misconceptions. Such measures are hard to quantify, but, similarly, better education is known to contribute to lower crime rates. Targeting the root of deepfake misuse also allows greater freedom in research, which has benefits well beyond entertainment.

In conclusion, deepfakes are very much a part of modern technology and therefore difficult to eradicate. Governments and platforms have important roles to play in developing technology capable of identifying deepfakes. As fighting solely on the technical front is insufficient, laws and regulations must pull ahead by catering to the expected growth of deepfakes through deliberate exposure, clear assignment of responsibility, and education. If left unchecked, deepfakes may come to dominate media, and the distortion of reality may become the rule rather than the exception.
