Generative Everything: The technology that brought you deepfakes will soon bring you much more

Like any major technology, AI can be used as a tool or as a weapon. More ethical guardrails are needed, but most technologists believe the benefits to people using these tools will outweigh the harm done when the same tools are weaponized.

One of the technologies squarely in the crosshairs of that debate is the generative adversarial network (GAN). GANs have gained popularity recently as the tool used to create “deepfake” videos showing realistic footage of celebrities and political figures doing things they never did. These videos are close enough to reality that they can be mistaken for the real person, and soon enough only an AI will be able to tell whether a video is a deepfake or not.

See if you can discern whether this is a picture of a real person… https://thispersondoesnotexist.com/

Given the convincing authenticity of the output and the risk of negative use cases, it is clear that this is an area that needs more regulation, oversight, and AI ethics focus. Work is needed to ensure the fast-paced development of the technology lines up with our best human interests.

However, as with other technologies early in their life cycle, the malicious use cases of GANs have gotten the most press, and I believe the positive use cases will vastly outnumber the negative ones as the technology improves. We are already starting to see great examples of this in applications that are fun and can even make video calls better. Let’s review some of those.

A quick overview of GANs

For a simple overview of GANs — GANs 101. So, what does GAN stand for? Gaming All… | by Sharan Babu | Medium

For a more detailed intro — Generative Adversarial Networks, a gentle introduction — MachineCurve

My summary: Generative adversarial networks are a type of unsupervised AI model in which two neural networks compete against one another to create a result. These networks learn to generate realistic images, text, and language through competition.

In essence, one network (the generator) tries to trick the other by creating realistic data, which the other network (the discriminator) tries to label as real or fake. Back and forth they go until the generator network is really good at creating _______ [insert cool thing here].
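To make that back-and-forth concrete, here is a minimal sketch of the adversarial training loop in PyTorch. This is my own toy example on made-up 2-D data (the network sizes, learning rates, and the fake “real” distribution are all assumptions for illustration), not code from either article linked above.

```python
import torch
import torch.nn as nn

latent_dim = 8

# Generator: turns random noise into fake 2-D samples
G = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, 2))

# Discriminator: outputs the probability that a 2-D sample is real
D = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

loss_fn = nn.BCELoss()
opt_G = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_D = torch.optim.Adam(D.parameters(), lr=1e-3)

def real_batch(n=64):
    # Stand-in "real" data: points drawn from a shifted Gaussian
    return torch.randn(n, 2) * 0.5 + torch.tensor([2.0, -1.0])

for step in range(2000):
    # Train the discriminator: push real toward 1, fake toward 0
    real = real_batch()
    fake = G(torch.randn(64, latent_dim)).detach()
    d_loss = loss_fn(D(real), torch.ones(64, 1)) + loss_fn(D(fake), torch.zeros(64, 1))
    opt_D.zero_grad()
    d_loss.backward()
    opt_D.step()

    # Train the generator: try to make the discriminator label fakes as real
    fake = G(torch.randn(64, latent_dim))
    g_loss = loss_fn(D(fake), torch.ones(64, 1))
    opt_G.zero_grad()
    g_loss.backward()
    opt_G.step()
```

After enough rounds of this game, sampling G(noise) produces points that look a lot like the “real” distribution, which is exactly the trick deepfake models play with images instead of 2-D points.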

Meta-benefit: widespread use of GAN models has created general demand and skills in two awesome areas of the AI industry: 1) the use of NOISE and 2) SYNTHETIC DATA.

Noise is important because introducing fake, noisy data into your models can help de-bias a model and increase its robustness. For example, if the outputs of your model start to look too much like an over-represented segment of the population… add more noise.
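One simple, hedged way to “add more noise” is Gaussian noise augmentation on the training inputs; the sigma value and feature shapes below are made up for illustration:

```python
import torch

def noisy_augment(batch: torch.Tensor, sigma: float = 0.1) -> torch.Tensor:
    # Return a copy of the batch with small Gaussian noise added,
    # so the model sees slightly perturbed versions of every example
    return batch + sigma * torch.randn_like(batch)

clean = torch.randn(32, 16)        # stand-in feature batch
augmented = noisy_augment(clean)   # train on both clean and noisy versions
```

This is only a sketch of the idea; real de-biasing work usually pairs noise or augmentation with careful measurement of which segments are over-represented.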

Using more synthetic data is great because it is WAY cheaper than paying for labeled data.
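As a rough sketch of why synthetic data is so cheap: once a class-conditional generator has been trained (the tiny untrained network below is just a placeholder), labeled examples cost nothing more than a random-number draw:

```python
import torch
import torch.nn as nn

latent_dim, n_classes, feat_dim = 8, 3, 2

# Placeholder conditional generator: input = [noise, one-hot class label]
G = nn.Sequential(nn.Linear(latent_dim + n_classes, 32), nn.ReLU(),
                  nn.Linear(32, feat_dim))

def synthetic_batch(n=256):
    labels = torch.randint(0, n_classes, (n,))
    one_hot = nn.functional.one_hot(labels, n_classes).float()
    z = torch.randn(n, latent_dim)
    samples = G(torch.cat([z, one_hot], dim=1))
    return samples, labels  # "free" labeled data for a downstream model

X_syn, y_syn = synthetic_batch()
```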

The better angels of GAN models:

I believe GAN models will soon lose their brand as “only used for deepfakes,” as these technologies will also become the best protection we have against malicious use of generative models. On top of this, the beneficial and benign use cases for GANs will continue to grow rapidly.

Some examples:

  • Fun — CycleGAN can transform reality (a sketch of its core cycle-consistency idea follows this list).
GitHub — junyanz/CycleGAN: Software that can generate photos from paintings, turn horses into zebras, perform style transfer, and more.

  • NVIDIA Uses AI to Slash Bandwidth on Video Calls (petapixel.com)

  • Bonus — for when you skip turning your video on because you haven’t combed your hair: a generative model could create a pseudo-realistic avatar of you that has showered and put on a decent shirt. This avatar could also lead recorded trainings based on written scripts (translated globally) or teach classes, saving the time spent on video editing or multiple shoots.
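For the CycleGAN item above, the core trick is a cycle-consistency loss layered on top of the usual adversarial losses: translate an image to the other domain and back, then penalize any drift from the original. The toy code below is my own conceptual sketch with stand-in fully connected “generators”, not the linked repo’s implementation:

```python
import torch
import torch.nn as nn

# Stand-in generators on 2-D vectors; the real CycleGAN uses convolutional nets
G = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 2))  # domain X -> Y
F = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 2))  # domain Y -> X

l1 = nn.L1Loss()

def cycle_loss(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    # Penalize round trips that fail to reconstruct the original sample:
    # x -> G(x) -> F(G(x)) should land back near x, and likewise for y
    return l1(F(G(x)), x) + l1(G(F(y)), y)

x, y = torch.randn(16, 2), torch.randn(16, 2)
loss = cycle_loss(x, y)  # added to the adversarial losses during training
```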

Less adversarial

The risks of this tech are obvious, but the solution is stronger governance and regulation of GAN models. If an AI model is the only thing powerful enough to identify whether a video is a deepfake, we create an arms race between the most accurate watchdog AIs and malicious actors. Some good news: similar to cybersecurity, some of the largest organizations with access to supercomputing and AI capability share our interest in reducing deepfakes. Also, over the long term, an arms race in compute favors larger organizations over smaller malicious actors. However, this will not be enough without thoughtful governance of these technologies.

Despite these risks, I believe the beneficial uses of GANs will outweigh the malicious ones. These technologies can and will work within governance guardrails that keep humans safe.

Personally, I look forward to being able to sign into Teams and speak as my avatar from a remote ski resort with bad wifi. Though my coworkers probably don’t…


