This Person Does Not Exist

Adventures in Fiction

Barry Leybovich
The Startup
6 min read · Nov 12, 2020


This is Katherine Watkins. Last seen in 2016, she was one of Minnesota’s top fencers. Her disappearance was widely publicized and has been the subject of many theories. The most popular, according to the Hennepin County Sheriff’s Office, is that she was murdered.
Watkins was a member of the University of Minnesota’s fencing team. Her father, who is a local police officer, said his daughter was in good physical condition.

And this is Aleksei Borgovsky. Formerly a chemistry professor in the Soviet Union, Borgovsky moved to Minnesota in 2015, where he became a high school teacher. No one suspected that he was directing the Russian government’s propaganda.
Borgovsky’s work with the Russian government is traced back to the early to mid 20th century, when he was a professor at the Moscow State University. The “Memoirs of a Russian Agent” refers to his time as a “cultured and educated amoralist.”

You can see the makings of a 60 Minutes episode here: a mystery of how two worlds collided and one disappeared. However, This Person Does Not Exist won't be the forthcoming telling of this dastardly tale; rather, the title is very literal. Katherine Watkins does not exist, and neither does Aleksei Borgovsky.

I can hear you thinking 'This is what fiction is, Barry, you're just explaining fiction', and you're right. But this is a new and weirder type of fiction. First, take the photos above. Well, not photos exactly: these likenesses are pure fiction, generated by AI to look like perfectly normal people. Go ahead, scroll up and look at them again. They were created by StyleGAN2, a generative adversarial network developed by Nvidia Research; you can check it out yourself by going to ThisPersonDoesNotExist.com. Every time you refresh the page, a completely new, never-before-seen likeness of a person is generated. Sometimes the results are comically bad, but the majority of the time they are scarily good.
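If you're wondering what 'generative adversarial' actually means, here's a toy sketch of the two-network game in PyTorch. To be clear, this is not StyleGAN2 itself (which adds a style-mapping network, high-resolution convolutions, and years of engineering tricks); it's a minimal illustration of the core idea, and all the sizes and names below are my own assumptions.

```python
# Toy sketch of the adversarial game behind GANs like StyleGAN2.
# NOT StyleGAN2 itself; just the core idea: a generator learns
# to fool a discriminator, and each network improves the other.
import torch
import torch.nn as nn

latent_dim = 64      # size of the random "seed" vector (assumed)
image_dim = 28 * 28  # a tiny flattened image, for illustration

generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, image_dim), nn.Tanh(),  # fake image in [-1, 1]
)
discriminator = nn.Sequential(
    nn.Linear(image_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1),                     # real-vs-fake logit
)

loss_fn = nn.BCEWithLogitsLoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def train_step(real_images: torch.Tensor) -> None:
    """One adversarial round; real_images would come from a dataset."""
    batch = real_images.size(0)
    real_labels = torch.ones(batch, 1)
    fake_labels = torch.zeros(batch, 1)

    # 1) Train the discriminator to tell real images from fakes.
    fakes = generator(torch.randn(batch, latent_dim)).detach()  # no G update here
    d_loss = (loss_fn(discriminator(real_images), real_labels)
              + loss_fn(discriminator(fakes), fake_labels))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # 2) Train the generator to make the discriminator say "real".
    g_loss = loss_fn(discriminator(generator(torch.randn(batch, latent_dim))),
                     real_labels)
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()

# Each refresh of ThisPersonDoesNotExist.com is, conceptually, just this:
with torch.no_grad():
    new_face = generator(torch.randn(1, latent_dim))  # a brand-new fake
```

The 'adversarial' part is the tug-of-war inside train_step: as the discriminator gets better at spotting fakes, the generator is forced to produce ever more convincing ones.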

Unfortunately, the antisocial uses of this technology seem vast. Let's dive right into the deep end and reflect on a hard question:

Can something be child sexual abuse material (see here for why we shouldn’t call it pornography) if it doesn’t depict a child who exists, but instead depicts an AI-generated child?

I really want us to pause and think about this. We're obviously not going to solve pedophilia here (though I recommend everyone read this piece from The Atlantic). On one hand, some people believe this technology could be a 'safe' outlet for pedophiles to manage their urges and reduce their consumption of 'real' CSAM (I put 'real' in quotes because I am not convinced that AI-generated material shouldn't also be categorized as CSAM), the creation of which necessitates the sexual abuse of children. On the other hand, there's a slippery-slope argument that consumption of any such imagery, even AI-generated, may lead to acting on pedophilic urges, not to mention make prosecuting possession of 'real' CSAM difficult; this is particularly true if it becomes hard to distinguish CSAM that came from the sexual abuse of a child from CSAM that was AI-generated.

This technology also powers the creation of deepfakes: fabrications or alterations of a real person's likeness, often portraying events that never happened. While deepfakes are poised to cause extreme problems with misinformation and CSAM, the technology is currently used most insidiously in revenge porn: the release of sexually explicit material, real or fabricated, of a person (often a former partner, hence 'revenge') without their consent. Legally this is termed 'nonconsensual pornography' in the United States, but I believe it would be better termed 'image-based sexual abuse', as it is today in Australia, because the term 'pornography' creates an association that blames the victim. Unfortunately, the United States has not yet taken national action against image-based sexual abuse or deepfakes more broadly. While the majority of states have addressed this individually, New York is not one of them (New Yorkers, you can look up your state assembly member here, and write to them).

Source: Reductress

There are sure to be positive uses for this technology: better and more realistic video games, a new medium for storytelling in art and film, and, as a very edge case, sparing people the stigma faced by James Murtagh, the guy who became the stock photo for Reductress's asshole template and who sometimes regrets doing that photoshoot. Okay, maybe that's a stretch, but I really wanted to fit that in here.

And in the realm of storytelling, generative AI has lots to give! In fact, most of the story of Katherine and Aleksei above was written by AI. Keen observers surely spotted the seams at the top: the opening lines of each story are text that I wrote and fed into an AI algorithm, which completed the rest. These days, GPT-3 (Generative Pre-trained Transformer 3) from OpenAI is the gold standard in generalized text generation. Its training dataset comprised nearly 500 billion tokens (snippets of text), and its capabilities are impressive, not to mention scary. It can complete Shakespearean sonnets, generate amazing emails, ghost-write articles, and even help manage financial statements.
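GPT-3 itself sits behind OpenAI's private API, but its openly available predecessor GPT-2 demonstrates the same autocomplete mechanic. Here's a minimal sketch using the Hugging Face transformers library; the prompt is the seed line from the story above, and the sampling parameters are just my illustrative choices.

```python
# Minimal sketch of transformer text completion, using GPT-2
# (GPT-3's openly available predecessor) via Hugging Face's
# `transformers` library; GPT-3 itself requires OpenAI's API.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = ("This is Katherine Watkins. Last seen in 2016, she was "
          "one of Minnesota's top fencers.")

# The model repeatedly predicts the next token, extending the prompt
# into a plausible-sounding (and entirely fictional) continuation.
result = generator(prompt, max_length=120, do_sample=True, temperature=0.9)
print(result[0]["generated_text"])
```

Each run produces a different continuation, because the model samples from its predicted next-token distribution rather than picking one fixed answer; that's why it can take a few tries to land on a usable hook.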

And while there is a lot that GPT-3 cannot do yet, the technology is alarmingly accessible, and with a bit of human curation it can easily lead to an explosion in content farms that will slowly drown out independent content production on the web. And while headlines like "This one skincare secret has doctors screaming" might not fool most people, the rise of troll farms that actively weaponize misinformation and social media is a very present danger. If you were worried about interference from bad actors in the 2016 election, imagine what 2024 will look like, when these bad actors are no longer limited by how much content humans can write and have AI to leverage (which I am sure is in use already). Let's not even get into the massive biases GPT-3 has likely learned from its training data.

To create the stories above, I fed my opening lines into Contentyze (which does not use GPT-3 but claims to use a similar AI algorithm) and let it autocomplete the rest. It took a few tries to get something I wanted, but in a few clicks I had an awfully good hook for my story. Even AI thinks former Soviet professors are also spies. I am looking forward to a future when authors can work symbiotically with AI to get past writer's block, or to create new worlds that stretch the seams of our imaginations. But to get there, we are going to have to do a lot of work on attribution and trust. I'm looking forward to it.

There's a ton of information out there on deepfakes and the world we're entering. If you've learned anything today, hopefully it's to look for trusted sources. One of my favorites is Brave New Planet, a new podcast from Pushkin Industries; check it out.
