Hyperrealism vs. Deepfakes

JerryBui.eth
Digital Forensics Future
3 min read · May 21, 2023


A Dichotomy of Synthetic Media

The distinction between reality and artificiality is blurrier than ever. Advances in artificial intelligence and machine learning have produced a fascinating but unsettling phenomenon: synthetic media that convincingly resembles real people. Two variations of this development stand out: hyperrealism and deepfakes.

In the context of synthetic media, hyperrealism refers to manufactured content that resembles reality so closely that it is difficult to distinguish from the genuine article. It serves primarily as a showcase of technical skill and is used for entertainment. The film and video game industries generate lifelike characters and immersive settings in hyperrealistic synthetic media to enhance the viewer’s experience. Hyperrealistic technology has been used in films like “Avatar” and video games like “The Last of Us” to push the envelope of storytelling and player immersion.

Deepfakes, by contrast, are a more sinister application of synthetic media technology. They also produce lifelike artificial content, but their main application is less benign. Deepfakes are made with artificial intelligence (AI) techniques that use deep learning to produce videos that appear to show individuals saying or doing things they never did. These fabricated media can therefore be used for illegal activities like fraud, libel, or harassment. High-profile instances of deepfake usage have spurred global discussions about the ethics and dangers of this technology.
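To make that deep learning step a little more concrete: the classic face-swap approach behind many early deepfakes trains one shared encoder together with a separate decoder per identity, then produces the fake by decoding person A’s encoded face with person B’s decoder. The PyTorch sketch below is purely illustrative; the 64x64 input resolution, layer sizes, and names such as FaceSwapAutoencoder and swap are assumptions for clarity, not any particular tool’s implementation.

```python
# Illustrative sketch of the shared-encoder / per-identity-decoder design
# behind classic face-swap deepfakes. Sizes and names are assumptions.
import torch
import torch.nn as nn

def make_decoder() -> nn.Sequential:
    # Upsamples a 256-dim latent vector back to a 3x64x64 face crop.
    return nn.Sequential(
        nn.Linear(256, 128 * 8 * 8),
        nn.Unflatten(1, (128, 8, 8)),
        nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1),  # -> 16x16
        nn.ReLU(),
        nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),   # -> 32x32
        nn.ReLU(),
        nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),    # -> 64x64
        nn.Sigmoid(),
    )

class FaceSwapAutoencoder(nn.Module):
    """One encoder shared by both identities, one decoder per identity."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1),   # -> 32x32
            nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),  # -> 16x16
            nn.ReLU(),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), # -> 8x8
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, 256),
        )
        self.decoder_a = make_decoder()  # reconstructs person A's faces
        self.decoder_b = make_decoder()  # reconstructs person B's faces

    def reconstruct(self, faces: torch.Tensor, identity: str) -> torch.Tensor:
        # Training objective: encode a face and rebuild it with that person's
        # own decoder (e.g. minimizing pixel-wise L1/MSE against the input).
        latent = self.encoder(faces)
        decoder = self.decoder_a if identity == "a" else self.decoder_b
        return decoder(latent)

    def swap(self, faces_of_a: torch.Tensor) -> torch.Tensor:
        # The "deepfake" step: person A's pose and expression, rendered
        # with person B's decoder, i.e. B's appearance.
        return self.decoder_b(self.encoder(faces_of_a))

model = FaceSwapAutoencoder()
fake_frame = model.swap(torch.rand(1, 3, 64, 64))  # dummy 64x64 RGB face crop
print(fake_frame.shape)  # torch.Size([1, 3, 64, 64])
```

Notice that nothing in this architecture is inherently malicious: the same shared-encoder design could render a licensed digital double for a film or a non-consensual impersonation. The code is identical; only the intended use differs, which is exactly the distinction drawn next.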

Hyperrealism and deepfakes differ fundamentally not in the technology used to produce them, but in the purposes they are intended to serve.

Both rely on complex algorithms to create convincing replicas of reality. But whereas hyperrealism aims to delight and engage, deepfakes are frequently intended to mislead and potentially to harm.

Both involve replicating and manipulating the human experience with unprecedented fidelity, which raises a host of moral and legal questions. For instance, how can we strike a balance between the risks of deepfakes and the potential benefits of hyperrealism in entertainment?

How can we defend people’s rights and shield them from the harm that deceptive deepfakes may cause? How can the producers and distributors of deepfake content be held accountable for their actions? Answering these questions will take a collective effort from legal professionals, technologists, ethicists, and policymakers.

It is crucial that we remain mindful of the ethical ramifications of these technologies as we move deeper into the era of synthetic media. Whether through the immersive worlds of hyperrealism or the potentially damaging deceptions of deepfakes, the challenges and opportunities presented by synthetic media will continue to reshape our digital landscape in the years to come.

Video link:

Jerry Bui is Managing Director of Digital Forensics within FTI Consulting’s Technology segment focused on forensic technology and risk & compliance issues (all opinions his own). Jerry is a Certified Fraud Examiner and has over 20 years of experience in digital forensics, ediscovery, automated risk assessments, dashboard compliance monitoring, and investigative analytics. Jerry’s team provides evidence acquisition, expert witness, and strategic consulting services to law firms and corporations. Connect with Jerry on LinkedIn, Twitter and TikTok.

The Digital Forensics Future (DFF) podcast is also available on the platforms below.
