AI Image Generation Can Fool You

The faces staring back at you may be completely made up.

Etekly · Jun 8, 2020

[Image source: Mashable]

Pictured above are Ruben and Gloria Ruiz, a couple living in Belmont, CA. He’s a transport engineer working for the city; she teaches nursing at the local community college. They’re in their 60s now, and have two adult children: Marco and Christian.

Except all of what you've just read is a lie. The two faces you see in the picture above? They don't belong to real people at all. They're entirely fake: computer-generated imagery published in a stunning new report from the Nvidia Corporation.

In 2014, a study from researchers at the Université de Montréal created waves when they published results from their AI-based image generation software, which could, among other things, produce digits that appeared to be written by a human hand, crude representations of landscapes and animals, and, most eerily, blurry renderings of rather lifelike human faces. In some cases these digitally rendered faces looked like the stuff of nightmares.

[Image source: Mashable]

However, in other cases they appeared remarkably lifelike. The fact that artificial intelligence could create such realistic faces based not on any one person, but simply on general physical patterns and features it had learned from many faces, was a remarkable leap forward at the time. But now, with this new study, that early research looks archaic.

How Far We’ve Come

[Image source: Mashable]

Nvidia, a company known for designing graphics processing units, began making waves in AI toward the end of 2017 when it published the results of its new image-generation technology. Visually, the results varied from photo-realism to bad Photoshop jobs.

Nvidia's methodology builds on, and is arguably the next step in, what's called a Generative Adversarial Network, or GAN. Instead of a single machine learning algorithm receiving data, evaluating it, and producing a result, a GAN pits two neural networks against each other in a zero-sum competition. Basically, one creates and the other discerns: the generator tries to trick the discriminator into thinking its creations are real. And because both networks learn and improve, the results keep getting better. GANs work because, it turns out, competition is a powerful motivator not just for humans, but also for machines.
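To make that competition concrete, here is a minimal GAN training loop in PyTorch. It's only a sketch under stated assumptions: the tiny fully connected networks, the placeholder "real" batch, and the hyperparameters are illustrative choices, not Nvidia's face-generating architecture, which involves far larger networks trained on photographs of real faces.

```python
# Minimal GAN sketch: a generator and a discriminator locked in a zero-sum game.
# Network sizes, data, and hyperparameters are illustrative assumptions only.
import torch
import torch.nn as nn

latent_dim, image_dim = 64, 28 * 28  # assumed toy sizes

# Generator: maps random noise to a fake "image"
G = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, image_dim), nn.Tanh(),
)

# Discriminator: judges whether an image looks real (1) or fake (0)
D = nn.Sequential(
    nn.Linear(image_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

opt_G = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_D = torch.optim.Adam(D.parameters(), lr=2e-4)
loss_fn = nn.BCELoss()

for step in range(1000):
    real = torch.randn(32, image_dim)   # placeholder for a batch of real images
    noise = torch.randn(32, latent_dim)
    fake = G(noise)

    # Train the discriminator: reward it for telling real from fake
    d_loss = loss_fn(D(real), torch.ones(32, 1)) + \
             loss_fn(D(fake.detach()), torch.zeros(32, 1))
    opt_D.zero_grad(); d_loss.backward(); opt_D.step()

    # Train the generator: reward it for fooling the discriminator
    g_loss = loss_fn(D(fake), torch.ones(32, 1))
    opt_G.zero_grad(); g_loss.backward(); opt_G.step()
```

The two losses pull in opposite directions: the discriminator is rewarded for spotting fakes, while the generator is rewarded only when its fakes are accepted as real, which is exactly the zero-sum game described above.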

In just a few short years, we've gone from blurry black-and-white images (which seemed so stunning at the time) to Nvidia's updated, GAN-driven results: digitally rendered faces that are just about indistinguishable from photographs of real people.

[Image source: Mashable]

Looking Ourselves in the Eyes

Because Nvidia is so closely associated with gaming hardware, the idea of its involvement in image generation excites a lot of people. Should this level of image generation become available in gaming, imagine how lifelike our games could become. Picture your PlayStation games today, except with all of the imagery looking totally photo-realistic. The prospect is tantalizing.

Of course, as with all technology, there's a darker side. The ability to completely fake a human image is powerful. Used maliciously, it can have far-reaching consequences for those falsely depicted, and people in the public eye become especially vulnerable targets. Abused, this technology could plant a seed of distrust in AI and fuel calls to halt its development for the safety of society.

What happens when you can make a face say or do anything, look like anything, or be anywhere? What happens when we lose our ability to trust that an image which looks like a human is actually of a real human?

The good news is that engineers working on this technology are not only aware of these potential issues but share the same concerns, and are actively working on apps and plug-ins that would flag potentially fake images in real time. That won't solve every problem, but there's comfort in knowing that GAN developers aren't just interested in making new toys; they're interested in making them responsibly.
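The article doesn't spell out how such flagging tools work, but one common way to frame the problem (an assumption here, not a description of any specific app) is binary classification: train a model on images labeled real or GAN-generated, then have it score new images. A minimal PyTorch sketch, with an assumed toy CNN and a hypothetical threshold:

```python
# Sketch of a fake-image detector framed as binary classification.
# The architecture and threshold are illustrative assumptions, not any
# specific plug-in's implementation; a real detector needs labeled training data.
import torch
import torch.nn as nn

detector = nn.Sequential(
    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 1), nn.Sigmoid(),  # outputs the probability the image is fake
)

def flag_if_fake(image: torch.Tensor, threshold: float = 0.9) -> bool:
    """Return True when the detector's 'fake' score exceeds the threshold."""
    with torch.no_grad():
        score = detector(image.unsqueeze(0)).item()
    return score > threshold

# Usage with a dummy 128x128 RGB image (untrained here, so the score is meaningless)
print(flag_if_fake(torch.rand(3, 128, 128)))
```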

This story was originally written by Nathaniel Nelson and published in Etekly. Nathaniel is a writer and podcast producer based in New York City. He writes the internationally top-ranked “Malicious Life” podcast on iTunes, hosts programs on SCADA security and blockchain, and contributes to tech websites.

Etekly: We write about how tech impacts the human experience.