You Must Learn What is Real and What is Deep-faked!
“The powers that be no longer have to stifle information. They can now overload us with so much of it, there’s no way to know what’s factual or not. The ability to be an informed public is only going to worsen with advancing deep fake technology”.
With advances in deep-fake technology, it is now possible to create convincing fictional imagery using digital transformation techniques. With this technology, a new identity for oneself or others can be built overnight on social media. Such alterations can wreak havoc on people's lives, especially when deep-fakes go viral. Deep-fakes are not necessarily negative, however; they can also have a positive influence. The technology has already affected the movie business: actors are no longer required to perform dangerous stunts themselves, since professional stunt performers can be morphed into the likeness of the actual actors.
As far back as 1997, a study on computer graphics and interactive techniques demonstrated how existing video footage of a speaker could be modified to match a new audio track.
This seemingly modest experiment in video-content alteration was most likely the first step toward what has evolved into today's deep-fake technology. It is now feasible to automate facial reanimation using a machine learning system, a major advance that led to the widespread adoption of deep-fake video creation around the world.
Like many new technologies, mass adoption came long after the invention. In 2016, the first Face2Face software was released, demonstrating how real-time face capture can be used to re-enact video in a realistic manner. It opened up the frightening possibility of destroying many promising futures.
After this tipping point, adoption of the technology skyrocketed. With the pervasiveness of machine learning algorithms, a slew of deep-fake videos were created and went viral across social media channels. A case in point is the famous Obama deep-fake, which was widely circulated. The social media platform Snapchat quickly followed suit with the development of a face-swap filter. AI voice-changer applications and open-source deep-fake video production tools soon appeared, and a plethora of deep-fake projects emerged.
The virality of deep-fakes has become unstoppable and is likely to grow further. For some applications this may be a positive thing, but in general deep-fakes can have serious negative consequences for society at large. We are entering a new age in which AI-generated deep-fakes will progressively become a significant element in a range of sectors.
Many technology experts and futurologists have warned us for years that artificial intelligence (AI) and machine learning algorithms, the basic underlying processes of deep-fake technology, have the potential to harm global communities. Looking at the broader picture, they have good reason to say so, since deep-fakes are considerably worse than the well-known idea of 'fake news.' Deep-fakes are far more effective: they can make falsehoods look like irrefutable truths.
People instantly recognized the appeal of deep-fakes and deep-fake technology when they first encountered them. The technology lets any enthusiast build a deep-fake of anybody at any moment; all it takes is a little software and a lot of processing power. The entertainment value, particularly when creating deep-fakes of movie celebrities and politicians, is immense. This is usually people's first point of contact with the technology, but real-world applications are often considerably simpler.
How does a deep-fake work?
There are a few processes involved in creating a face-swap video. First, hundreds of pictures of the two people’s faces are processed by an AI system known as an encoder. While compressing the pictures, the encoder identifies and learns commonalities between the two faces, reducing them to their shared common characteristics.
Following that, a decoder, a second AI system, is trained to retrieve the faces from the compressed pictures. Because the faces differ, you train one decoder to recover the first person’s face and another to retrieve the second person’s face. To execute the face swap, just feed encoded pictures into the “wrong” decoder.
According to research, the main concept behind a deep-fake is the parallel training of two auto-encoders. Because a raw image has excessive dimensionality, the auto-encoder is used to reduce dimensionality and produce a compact representation of the image, known as the latent face.
For example, a compressed image of person A’s face is fed into the decoder trained on person B. The decoder then reconstructs the face of person B with the expressions and orientation of face A. For a convincing video, this has to be done on every frame.
The encoder is shared between the two identities, but the two decoders do not share weights. The encoder strips away the style of a face, such as hair color, eye size, and nose shape, so that only its structure remains in the latent face. Each decoder then applies a new style, bestowing a new identity on the latent face.
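The shared-encoder, two-decoder architecture described above can be sketched in a few lines of numpy. This is a structural illustration only: the weights are random rather than trained, the image and latent sizes are hypothetical, and real systems use deep convolutional networks rather than a single linear layer.

```python
import numpy as np

rng = np.random.default_rng(0)

IMG_DIM = 64 * 64   # flattened grayscale face (hypothetical size)
LATENT_DIM = 128    # size of the compressed "latent face"

# One shared encoder: compresses any face down to its structural essentials.
W_enc = rng.normal(0, 0.01, (LATENT_DIM, IMG_DIM))

# Two separate decoders, one per identity, each with its own weights.
W_dec_a = rng.normal(0, 0.01, (IMG_DIM, LATENT_DIM))
W_dec_b = rng.normal(0, 0.01, (IMG_DIM, LATENT_DIM))

def encode(face):
    """Reduce a flattened face image to its latent representation."""
    return np.tanh(W_enc @ face)

def decode(latent, W_dec):
    """Reconstruct a face from a latent vector with one identity's decoder."""
    return W_dec @ latent

# The face swap: encode person A's face, then decode it with B's decoder.
face_a = rng.random(IMG_DIM)
latent = encode(face_a)            # A's expression and pose, style stripped
swapped = decode(latent, W_dec_b)  # rendered in B's identity (after training)

print(latent.shape, swapped.shape)
```

After training, feeding `latent` through `W_dec_b` is exactly the "wrong decoder" trick: person B's face is reconstructed with person A's expression and orientation, frame by frame.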
A generative adversarial network, or GAN, is another method for creating deep-fakes. Two artificial intelligence algorithms are pitted against each other in a GAN. The first algorithm, dubbed the generator, is given random noise and converts it into a picture. This synthetic picture is then added to a stream of actual photos, say of celebrities, that are fed into the second algorithm, the discriminator. Initially, the synthetic pictures bear no resemblance to human faces. However, as the procedure is repeated many times with feedback on performance, both the discriminator and the generator improve. With enough cycles and feedback, the generator begins producing completely lifelike faces of people who do not exist.
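The adversarial objective can be made concrete with a toy numpy sketch. Again this is illustrative only: the generator and discriminator are single random linear layers with hypothetical sizes, and no training step is shown, just the two opposing losses that the feedback loop would minimize.

```python
import numpy as np

rng = np.random.default_rng(1)

NOISE_DIM, IMG_DIM = 16, 64   # toy sizes, purely illustrative

W_gen = rng.normal(0, 0.1, (IMG_DIM, NOISE_DIM))   # generator weights
w_disc = rng.normal(0, 0.1, IMG_DIM)               # discriminator weights

def generator(z):
    """Turn a random noise vector into a synthetic 'image' vector."""
    return np.tanh(W_gen @ z)

def discriminator(x):
    """Score how 'real' an image looks, as a probability in (0, 1)."""
    return 1.0 / (1.0 + np.exp(-(w_disc @ x)))

real = rng.random(IMG_DIM)                 # stand-in for an actual photo
fake = generator(rng.normal(size=NOISE_DIM))

# The discriminator wants real -> 1 and fake -> 0; the generator wants
# the discriminator to score its fakes as 1. Training alternates between
# lowering d_loss and lowering g_loss, which drives both to improve.
d_loss = -np.log(discriminator(real)) - np.log(1 - discriminator(fake))
g_loss = -np.log(discriminator(fake))
print(f"d_loss={d_loss:.3f}  g_loss={g_loss:.3f}")
```

The tug-of-war between these two losses is the "feedback on performance" mentioned above: each network's improvement makes the other's task harder, until the generator's output is indistinguishable from the real photo stream.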
Technology required to make a deep-fake
It is very challenging to make a deep-fake on a standard computer. Most are created on high-end desktops with powerful graphics cards, or using computing power in the cloud, which cuts processing time from days or weeks to hours. Expertise is also required, not least to touch up the result so the deep-fake does not show flickering or inconsistencies in the face. Plenty of tools are now available to assist people in creating deep-fakes, and several companies will create one for you, processing everything in the cloud.
How to identify a deep-fake?
Identifying a deep-fake has become rather difficult as the underlying technology has advanced. According to a study carried out by American researchers, faces in deep-fake videos do not blink normally. This is no surprise: the majority of images fed to deep-fake generators show people with their eyes open, so the algorithms never learn about blinking. Initially this seemed to be the perfect solution for detecting deep-fakes. However, the technology soon overcame the issue, and deep-fakes began to appear with blinking eyes.
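One common way blink detectors quantify eye openness is the eye aspect ratio (EAR), computed from six landmark points around each eye. The sketch below assumes the landmark coordinates have already been extracted by some face-landmark model; the example coordinates are made up for illustration.

```python
import numpy as np

def eye_aspect_ratio(landmarks):
    """Eye aspect ratio over six eye landmarks p1..p6:
    (|p2-p6| + |p3-p5|) / (2 * |p1-p4|).
    The value drops toward zero when the eye closes."""
    p1, p2, p3, p4, p5, p6 = np.asarray(landmarks, dtype=float)
    vertical = np.linalg.norm(p2 - p6) + np.linalg.norm(p3 - p5)
    horizontal = np.linalg.norm(p1 - p4)
    return vertical / (2.0 * horizontal)

# Toy landmark sets (hypothetical coordinates, not from a real detector).
open_eye   = [(0, 0), (1, 1), (2, 1), (3, 0), (2, -1), (1, -1)]
closed_eye = [(0, 0), (1, 0.05), (2, 0.05), (3, 0), (2, -0.05), (1, -0.05)]

print(eye_aspect_ratio(open_eye), eye_aspect_ratio(closed_eye))
```

A detector tracking this ratio over a video would expect it to dip periodically as the subject blinks; a face that never dips below a threshold is suspicious, which is exactly the signal early deep-fakes failed to reproduce.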
Deep-fakes of poor quality are easier to detect: lip syncing may be improper, or the skin tone uneven.
Flickering around the edges of transposed faces is also possible. Fine details such as hair are especially difficult for deep-fakes to render well, particularly where individual strands are visible at the fringe. Poorly rendered jewelry and teeth, as well as unusual lighting effects such as inconsistent illumination and iris reflections, are further giveaways. Governments, universities, and technical research centers are working steadily on deep-fake detection, and large companies such as Microsoft and Facebook are taking part in deep-fake detection challenges.
Pros and cons of deep-fake
Like every technology, deep-fake technology has its pros and cons. Some of them are listed below:
Deep-fake technology was used to bring actor Peter Cushing back to “life” for 2016’s Rogue One: A Star Wars Story, but the technique may also be used for a variety of other beneficial artistic applications. These include the ability to go back and modify the dialogue of a video or movie without having to reshoot it, as well as the ability to create whole films by picking from a menu of presenters and entering the screenplay.
A UK-based organization utilized deep-fake technology last year to make a video of David Beckham giving an anti-malaria message in nine different languages.
WPP, a marketing firm, developed corporate training films that employed AI to construct a presenter that could speak the recipient’s language and greet them by name.
Some Russian researchers utilized the technology to bring the Mona Lisa to life, generating a film in which she moves her eyes, head, and mouth. And, while deep-fake technology poses a significant problem in the battle against fake news, it has also been utilized to generate presenter-led news broadcasts personalized to specific viewers.
- Easy Customization
We can now use an app to construct a 3D model of ourselves, change our hair color and style, and try on numerous outfits, all with the help of deep-fake technology. This creates a virtual space where fashion designers can quickly customize and trial clothing without having to create physical samples from scratch.
Let us start with the simplest. There are several reasons to be afraid of a technology that can make anyone appear to be doing or saying anything. Suppose you are watching the nightly news and stumble across a press conference in which the Prime Minister incites violence, but the entire thing is a hoax. The Prime Minister might deny it, but how do you know the denial isn't a deep-fake as well? What criteria do you use to decide what to believe?
When American researchers released a paper in 2017 explaining how they generated a fake video of President Barack Obama, they illuminated the possible dangers of generative technologies. Facebook CEO Mark Zuckerberg was also the target of a deep-fake video in which he appeared to attribute the social network's success to a clandestine group.
Deepnudes, a service that allows users to superimpose anyone's head onto pornographic video, has also appeared. The technology remains accessible, even though the site's launch was canceled.
Another source of worry is financial fraud. Audio deep-fakes have already been used to clone voices and trick listeners into believing they are conversing with someone they know. Earlier this year, scammers created a deep-fake of a tech CEO's voice to try to persuade a company employee to transfer money to the scammer's account, and this is not the first time con artists have used the same tactic to defraud a business.
Deep-fakes are terrifying, and they have struck a chord with a lot of people, especially those who have been harmed by this smart technology.
There is no single rule or regulation that can prevent the use of the open-source software tools presently available, and no legislation can stop a person with a clear goal and nothing to lose. Deep-fakes are here, and they are here to stay.
The message is clear: mitigation actions must be prioritized. We must learn to live in a culture where anything social media or conventional media tells us might be entirely untrue. As a result, we must maintain constant critical awareness of our surroundings. It is the only way to survive in a world where pretty much any media outlet may lie to you without you ever being able to check whether what it says is accurate. A fascinating era lies ahead, in which the rise of deep-fake technology may take some unforeseen turns.