How Can Fake News Detection Be Improved?

Dominik Mate Kovacs
7 min read · Dec 22, 2020


The struggle of building a product solely based on Machine Learning and the difficulties of Explainable AI within detection algorithms.

Graphic by unDraw

In today's society, fake news is produced at a rapid pace, driven by information overload and technological tools that enable more advanced manipulations than ever before. To counter the massive amount of content flowing into news and media companies, automated fake news detection is a necessity. Fake news can be either visual or language-based. In this article, I am going to focus on visual fake news detection, which covers videos, images, and other types of visual media content.

My name is Dominik Mate Kovacs, Co-Founder of Defudger, a startup dedicated to building the industry-leading visual content validation system. In my previous article, I presented the media market's stance towards products that fight disinformation. Find the story here.

Below, I am going to explain the current state of fake news detection from the technological side. I will describe the greatest challenges we faced when trying to build an automated detection system using machine learning. Furthermore, I will touch on topics like explainable AI and the robustness of existing solutions in the field.

Lack of data

The concept of fake content detection sounds brilliant in theory. In practice, however, the most obvious design criterion is the ability of an algorithm to detect all tampered content regardless of compression rate, origin, format, or length. Sadly, current methods are far from meeting it.

It is nearly impossible to measure a model’s performance without having truly representative datasets.

There are plenty of image manipulation techniques, ranging from splicing to removal. In the video domain, deepfakes can take many forms; see the chart below:

Most advanced media manipulations by Defudger

Assembling representative datasets for all these types of forgeries requires tremendous effort and resources. Furthermore, most of these techniques keep evolving, so a dataset that is 1–2 years old can become outdated fairly quickly. This is an enormous obstacle when trying to build a machine learning detection system.
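
To make this concrete, here is a minimal sketch (the model and sample format are hypothetical stand-ins) of why a single aggregate accuracy number hides these blind spots: breaking performance down per manipulation type quickly reveals which forgeries the training data actually covered.

```python
from collections import defaultdict

def per_type_accuracy(model, samples):
    """Break detector accuracy down by manipulation type.

    `samples` is an iterable of (image, label, manipulation_type) tuples,
    where label is 1 for forged and 0 for pristine; `model.predict` is
    assumed to return 0 or 1. Both are hypothetical stand-ins.
    """
    correct, total = defaultdict(int), defaultdict(int)
    for image, label, manipulation in samples:
        total[manipulation] += 1
        correct[manipulation] += int(model.predict(image) == label)
    return {m: correct[m] / total[m] for m in total}

# A detector trained mostly on splicing may score well there while
# dropping to chance level on removal or deepfakes; an aggregate
# accuracy figure would hide exactly that gap.
```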

Expectations for a robust model

Without highly accurate models, which the minimal amount of data makes impossible for now, false positives are inevitable. However, their impact varies drastically depending on the use case. When we deployed our Defudger Webapp for journalists as a digital forensics tool, I assumed it was fine to have a human in the loop: better to get false alerts than unnoticed fakes in the database. At least that is what I thought, but it turned out that expectations were higher. People working in the media hoped that such a system would be able to filter content automatically. Most companies that did not already have fact-checking personnel could not afford to dedicate someone to handling the false positives.

Besides accuracy, inference speed also matters. A robust detection model is fast and lightweight. However, we mostly used deep learning models, which required a complicated backend architecture built on Kubernetes with scalable, GPU-enabled nodes. Working with computationally heavy algorithms introduces a significant delay for the end user, a bottleneck that has to be accepted when working with complicated models.

Runtime is an essential factor in this field: ultimately, when building algorithms, we must strive for efficiency to make the end product as user-friendly as possible.

By offering an API that automatically validates each piece of incoming content, we achieved a smoother user experience. By the time journalists looked at the visual content, our results were already there to be presented. This is one way to meet runtime expectations; a rough sketch of the idea follows below.
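
The sketch assumes a hypothetical ingestion hook and a slow detection function (the names are illustrative, not Defudger's actual API): analysis is queued the moment content arrives, so the verdict is already cached by the time a journalist opens the item.

```python
import queue
import threading

analysis_results = {}   # content_id -> verdict, acts as the result cache
jobs = queue.Queue()

def on_content_ingested(content_id, media_bytes):
    """Called (e.g. by a CMS webhook) the moment new content arrives."""
    jobs.put((content_id, media_bytes))

def analysis_worker(detect):
    """Background worker that runs the slow detection model off the request path."""
    while True:
        content_id, media_bytes = jobs.get()
        analysis_results[content_id] = detect(media_bytes)  # e.g. {"fake_score": 0.87}
        jobs.task_done()

def get_verdict(content_id):
    """Called when a journalist opens the item; the result is usually ready."""
    return analysis_results.get(content_id, "analysis pending")

# threading.Thread(target=analysis_worker, args=(my_detector,), daemon=True).start()
```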

Explainability

To trust a detection system, we need to understand it first. The ideal scenario would be to display the actual factors that contributed to the final result. However, this is very challenging when deep learning models are used. These models consist of millions of simple calculations, so unless they can be described in a human-interpretable way, they will be treated as black boxes. Therefore, explainable AI in computer vision must be a research priority so that the results of these algorithms become interpretable not only for their developers but for their users too. Imagine a video used as evidence in a courtroom: it is not reassuring that an algorithm nobody fully understands gets to decide a person's fate.

For explainability, we implemented a fakeness percentage and a heatmap as outputs of our image detection models. A heatmap offers better explainability than a binary verdict on whether an image is fake or not. However, further effort must be put into correctly identifying the potential manipulation technique. The user should start the investigation from the heatmap and a categorical understanding of the possible forgery. If I am told to look for cloning in the image, I can start my investigation far more effectively.

The infamous forged Iranian rockets photo, a widely known case of cloning
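
To illustrate the heatmap output described above, here is a minimal sketch (the per-patch scores are an assumed model output, not Defudger's actual format): a coarse grid of fakeness scores is upsampled and blended over the image so the viewer immediately sees where to look.

```python
import numpy as np
import matplotlib.pyplot as plt

def overlay_heatmap(image, patch_scores, alpha=0.4):
    """Overlay a coarse grid of fakeness scores (values in [0, 1]) on an image.

    `image` is an HxWx3 array; `patch_scores` is a small 2D array (e.g. 8x8),
    as a typical detector might emit per region. Assumes the image dimensions
    are divisible by the score-map shape.
    """
    h, w = image.shape[:2]
    ph, pw = patch_scores.shape
    # Nearest-neighbour upsample of the coarse score map to image size.
    heat = np.kron(patch_scores, np.ones((h // ph, w // pw)))
    plt.imshow(image)
    plt.imshow(heat, cmap="jet", alpha=alpha, vmin=0.0, vmax=1.0)
    plt.axis("off")
    plt.show()

# overlay_heatmap(photo, model_patch_scores)  # both inputs are hypothetical
```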

Achieving stronger explainability requires both categorical differentiation between manipulation types and localization of the forgery.

When it comes to deepfake and video detection, I have not seen many algorithms that produce heatmap results. From an explainability standpoint, an algorithm with only binary outputs is not an appropriate design. Therefore, we implemented localization methods for Defudger's deepfake detection models as well.

Passive detection vs. active detection

The field of digital forensics consists of two major areas: passive and active detection. The major difference is whether there is prior information about the content.

Difference between active and passive media forensics by GeeksforGeeks

First, passive detection works on the image alone, with no prior information, to assess its integrity. By building algorithms that check for underlying inconsistencies, it is possible to detect a forged image that carries no visible trace of tampering: even when nothing looks wrong to the eye, there can be deviations in the metadata, quantization, color values, et cetera. A real-life example of passive detection is on-demand media forensics, where the user uploads any media file to verify its validity.
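
As one concrete example of a passive check (not one the article names, just a standard technique), error level analysis re-saves a JPEG at a known quality and inspects the difference: regions pasted in from another source often re-compress differently from the rest of the image, even when nothing is visible to the eye.

```python
import io
from PIL import Image, ImageChops

def error_level_analysis(path, quality=90):
    """Return the per-pixel difference between an image and a re-saved copy.

    Spliced or retouched regions frequently show a different error level
    than the untouched areas around them.
    """
    original = Image.open(path).convert("RGB")
    buffer = io.BytesIO()
    original.save(buffer, "JPEG", quality=quality)
    buffer.seek(0)
    resaved = Image.open(buffer).convert("RGB")
    return ImageChops.difference(original, resaved)

# error_level_analysis("suspect.jpg").show()
```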

Second, active detection relies on embedding an authentication code in the image at the time it is generated. If an image is captured by the camera software on a smartphone, it is possible to leave traces in the image or hash its details to a secure database system (e.g. a blockchain). If that image appears later, its originality can be verified by comparing it against the existing entries in the database.
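
A minimal sketch of the active idea, with a plain dictionary standing in for the secure database or blockchain mentioned above: the capture device registers a hash of the file at creation time, and any later copy is checked against that registry.

```python
import hashlib

registry = {}  # stand-in for a secure, append-only store (e.g. a blockchain)

def register_at_capture(media_bytes, device_id):
    """Called by the camera software the moment the image is created."""
    digest = hashlib.sha256(media_bytes).hexdigest()
    registry[digest] = {"device": device_id}
    return digest

def verify_later(media_bytes):
    """True if this exact file was registered at capture time.

    Any edit or re-encoding changes the hash, so a miss means the file is
    either altered or was simply never registered; that is the scheme's
    weak spot for devices outside such a standard.
    """
    return hashlib.sha256(media_bytes).hexdigest() in registry
```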

Examining the two methods, it is difficult to declare one superior. A passive detection system struggles when it is benchmarked against many types of unknown content. The major disadvantage of active detection is that all media content must be hashed or fingerprinted at capture time, on every device (there are already initiatives to solve this with a new standard). Both have benefits, but if the primary goal is accuracy and a trustworthy detection system, active detection should be the way to go.

Adversarial attacks

A rising threat to deep learning-based classifiers is adversarial attacks. A 2020 study by Neekhara et al. concluded that both white-box and black-box attack scenarios can fool deep learning-based detectors into classifying fake videos as real. It is well known that neural networks are vulnerable to such attacks in general, yet this has been largely ignored in existing deepfake detectors. In their experiments, the authors managed to fool the then state-of-the-art deepfake detector with a success rate of over 99%. We must prepare our algorithms against such threats.

Adversarial example in an animal classifier by Explaining and Harnessing Adversarial Examples, Goodfellow et al, ICLR 2015.
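
For intuition, here is a minimal sketch of the fast gradient sign method from the Goodfellow et al. paper cited above, applied to a generic PyTorch classifier (the detector itself is a placeholder, not a model discussed in this article): a barely visible perturbation in the direction of the loss gradient is often enough to flip the prediction.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.01):
    """Craft an adversarial example with the fast gradient sign method (FGSM).

    `image` is a (1, C, H, W) tensor in [0, 1] and `label` is the class the
    detector currently assigns, e.g. torch.tensor([1]) for "fake". The
    returned image looks identical to a human but may be classified as "real".
    """
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step in the direction that increases the detector's loss.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()
```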

At Defudger, we developed a state-of-the-art defense system for our detection algorithms in collaboration with researchers. As a result, our models became robust against both white-box and black-box adversarial attacks. All of this was achieved with minimal additional training data, so as of now our whole visual content detection system is well armed against this threat.
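
The article does not detail the defense itself, so the sketch below shows a generic, purely illustrative approach, adversarial training, in which attacked copies of each batch are mixed back into training so the detector learns to resist them; it is not necessarily the system built for Defudger's models.

```python
import torch
import torch.nn.functional as F

def adversarial_training_step(model, optimizer, images, labels, epsilon=0.01):
    """One training step on both a clean batch and its FGSM-perturbed copy."""
    # Craft adversarial copies of the current batch.
    images_adv = images.clone().detach().requires_grad_(True)
    F.cross_entropy(model(images_adv), labels).backward()
    images_adv = (images_adv + epsilon * images_adv.grad.sign()).clamp(0, 1).detach()

    # Learn from the clean and the adversarial batch together.
    optimizer.zero_grad()
    loss = (F.cross_entropy(model(images), labels)
            + F.cross_entropy(model(images_adv), labels)) / 2
    loss.backward()
    optimizer.step()
    return loss.item()
```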

A complex solution that can be used by the industry is not that far away. Superior technology is just one building block of an ideal product. For investigative use cases, it would be enough to wrap improved and more explainable versions of today's best detection algorithms, capable of filtering out at least 75% of fake content. For automatic decision-making integrated into the backend of a platform, greater improvements are still needed, especially in speed and a low false-positive rate. I do not think machines will replace humans entirely in fact-checking in the near future; they can only supplement us.

Viable products are under development; however, even if one hits the market, the cat-and-mouse nature of the field means it may become obsolete within months. Constant R&D is therefore required to keep pace with the generators.

Ultimately, by combining active and passive detection, a more comprehensive and secure solution can be constructed. I see the future of digital forensics going in this direction.

Thanks for reading about my journey within the field of disinformation. If you are interested in future content about deepfakes, synthetic media, and fake news, follow me here on Medium or let's connect on LinkedIn.

Dominik Mate Kovacs leads product development and technological research at Defudger and Colossyan. At Defudger, research into fake news detection is pursued with world-class institutions, and the outcomes are built into our products for the benefit of society. At Colossyan, our aim is to challenge the status quo by offering highly scalable video generation for businesses and organizations in a fully ethical manner.
