The horrifying reality behind AI porn

By: Katherine Poling

Dante Estrada
Spotlight
9 min read · Jun 24, 2024

--

Courier illustration using assets from Canva Pro by Katherine Poling

One loud text notification dings, quickly followed by another and another. A message reading “Are you a porn star?” stares back at them. A flashbang goes off inside their brain, and all at once, every thought in their head disappears into nothingness. Then, just as quickly as they disappeared, the thoughts come flooding back in full force. Scrambling to reply and struggling to understand the cryptic message lighting up their screen, they call their closest friends and beg them to click the ominous link, unable to get their own hovering fingers to move. In a matter of moments, their entire world as they know it has changed.

Flashing in front of their eyes are familiar photos of their face plastered onto the naked frame of a stranger. They recognize the distinct features of their own face, but on a body foreign to them, beneath a blaring caption reading “cheerleader gets fucked.” Seeing their name and face attached to something they can hardly comprehend, something they would never have thought possible even a minute before, their body goes numb and their brain goes blank.

All they can do is stare in disbelief as they continue to scroll through the profile. A pit forms in their stomach as they come across photos of themselves that they actually recognize, photos taken on Pasadena City College grounds only a few weeks earlier. The only thought they can hold in the empty abyss of their mind is, “Who could do this to me?”

A first-year childhood development student at PCC, who wishes to remain anonymous, said they came to campus every day feeling comfortable and secure. They had never considered that while getting their education, they would have to fear having their safety violated by their peers, never mind their friends. What was once a safe space and a place for community building quickly became a reminder that nowhere and no one is safe in the modern age of technology.

“We had been friends since second grade. I’ve known him like my whole life; he met the majority of my family, and I have this really big friend group that he was part of. We had always seemed more like family […] It was definitely shocking to find out.”

Soon after making this earth-shattering discovery, they were determined to rally the several other victims they recognized, many of them PCC students as well, and report the incident to the Pasadena Police Department. While making their reports, they were told by officers that little could likely be done: the link was no longer active, and the victims had taken only a couple of screenshots of the site before it was deactivated. The officers merely advised the group to keep all of their social media accounts private and to cut ties with the man.

Despite their anxiety about another potential disappointment, they decided to report the incident to PCC’s Title IX office in hopes of seeing action taken against the man who had violated them in what should have been a safe space for equal education. It took nearly six months of fearing they might run into him on campus and wondering if they would ever see justice, but eventually, he admitted to Title IX officials his involvement in creating the deepfakes.

“There was no trial because the investigator that was assigned to our case met with him, and I think within the first minute he admitted that he did it and explained how he did it,” they said. “They didn’t tell me what his punishment would be, but they just gave me examples of what consequences there could possibly be. They were saying it could be a suspension, or it could be like a class that he has to take on why what he did was wrong, and then there was something else too, but um, yeah, they didn’t officially say what the consequence would be.”

Although ultimately grateful for the resolution PCC reached, they often feared the case would never come to a viable conclusion in a timely manner, given the lack of communication between Title IX and the victims. Nearly six months passed between making their report and being informed of his admission of guilt. They are relieved to know he will receive some form of punishment, but they say they are still unaware of what disciplinary action PCC will take against him, having been told only a limited amount about the resolution. Despite that relief, they couldn’t help but hope for more transparency about his punishment, especially after nearly half a year of envisioning a future in which they never see justice.

“There was a time like, I want to say it was like two months in and I didn’t really hear much,” they said. “I met with the investigators and with the Title IX person and then they just never updated me, but I do understand that they would try to set interviews for the other people. Like witnesses and stuff like that. There was a point where I was like, is it even worth it? Like, should I just call it off? Will anything actually be done? But um, once they did get to meet with everyone, I think it was around two weeks after where they were like, ‘oh, like he admitted it.’ Then they told me we can go through with a resolution. Overall, I guess it was pretty fast because I’m surprised that all it did happen within the six months.”

PCC’s Title IX administrators did not respond to emails requesting an interview about what punishment he will face for his sexual misconduct. Communicating with anyone in Title IX became nearly impossible after the recent departure of Title IX coordinator Megan Staudenraus. After multiple weeks of seeking an interview to finally get a solid answer on the situation, a PCC spokesman said that the new interim Title IX coordinator, Dr. Kari Bolen, would be the only one able to speak on the matter. It took Dr. Bolen two weeks to respond to a question for a separate article, and she never addressed my multiple requests to interview her about the deepfake porn that was made on campus.

“Honestly, when I reported it, I did want him to get expelled or something because it was not only me he took photos of on campus, there were tons of other girls,” the victim said. “I was just the only one that they had evidence of. I thought, okay, some evidence is better than nothing. I was still hoping that it was enough that he wouldn’t go to school anymore and he’d face expulsion, but once we got further along, I realized that PCC probably wouldn’t really do that. I know they’re all about giving people a chance and stuff like that. Just like I’m not really sure if they do that. But yeah, I did hope for that. But suspension is better than nothing, I guess.”

While the majority of deepfakes use the faces of celebrities and politicians, 6% use the likeness of private citizens, and that number is only increasing as deepfake tools become more accessible to the general public. It raises the question: what can we do to protect ourselves from pornography being made of us if we can be fully clothed and still be violated by strangers?

In 2023, a study found 95,820 AI-generated pornographic videos circulating on the internet, a 550% increase from a previous study in 2019. Deeptrace, a company that creates tools to track synthetic media, found that 96% of all AI-generated deepfakes are pornographic in nature, and that on AI porn websites, 99% of the deepfakes depict women.

As OpenAI announces Sora, its extremely high-quality generative video model anticipated to become publicly accessible in only a few months, many people worry about the effect it could have on the ever-growing number of deepfakes popping up across the internet. Some students have taken the opportunity to use AI for personal advantage, while others have begun to use the technology for an entirely different purpose: the betterment of their communities.

A first-year data science student at PCC, Mohammad Shirmohammadi, has had first-hand experience with the tricky ethical lines of AI and worries that AI companies aren’t doing enough to ensure ethical use. Shirmohammadi built his own AI-powered drone program designed to scare off the coyotes in his Pasadena neighborhood. He quickly recognized the ethical dilemma in his own program after discussing the work with tutors at PCC’s MESA center and discovering that his AI could exacerbate the problem by making the coyotes more aggressive. Shirmohammadi had to grapple with the fact that his intentions didn’t translate into reality as he had hoped, and that AI’s impact is difficult to gauge. Since then, he has become increasingly passionate about the idea that, as we progress with AI, we must use it as thoughtfully and ethically as possible.

Despite his complicated experiences with AI, he remains confident in its potential to do great things, believing that, ultimately, the good will outweigh the bad.

“I definitely feel like AI is like that fire of Prometheus, where it really depends on the wielder to dictate its use,” said Shirmohammadi. “I think it’s really interesting how companies like OpenAI say how they want to develop more advanced AI systems and are, of course, aware of how it could be used for evil, but they claim that they want to maximize the good and minimize the bad. I don’t know exactly what they mean by that, but I personally think […] there are a lot of people who use it mainly for good.”

While AI may still feel like a relatively new technology, deepfakes have been circulating since 2017 and have already been tested in the app market. An app known as “DeepNude,” released in 2019, allowed users to upload photos of women and generate fake nude images of them. The app quickly blew up, becoming more than its creators could have anticipated or controlled and forcing them to take it down. This kind of normalization of deepfakes of private citizens has led to the coining of the term “image-based sexual abuse,” an effort to draw attention to the specific kind of harm these deepfakes perpetrate. In 2021, a 14-year-old UK teen took her life after intense bullying tied to deepfake porn that male classmates had created and distributed of her.

As a communications professor at PCC, Liesel Reinhart is well aware of how quickly AI is coming to dominate the world of mass media, as well as college campuses. She has seen how quickly the world has shifted around AI and feels it is up to the public to dictate how it will be used in the future, whether for the betterment of the world around us or not.

“Sometimes it feels like we are all in a Marvel movie where this new alien technology has just dropped in and disrupted life on Earth,” said Reinhart. “AI may have tremendous power for good in the hands of heroes, but it can also be wielded as a tool of deception and even violence. Since we don’t have the Avengers in real life, it’s up to all of us to assemble and get this thing under control.”

While the future of AI is still unclear, recent legislation offers a glimpse of one in which it is properly and ethically managed. With California’s laws holding image-based sexual abusers accountable, as well as a bill recently introduced in Congress that would make it a federal crime to create deepfake porn of nonconsenting people, we are likely to soon see more justice and more safeguards put into place.

“There’s obviously going to be the good; there’s always the good, right? We have ChatGPT, which is pretty helpful, and there are some startups that are using AI to find new drugs and treatments for people, creating a healthier future for us,” said Shirmohammadi. “There’s also the bad; there’s also the irresponsible people who want to use it. I mean, it all goes back to that kind of popular quote, which is, ‘With great power comes great responsibility.’”
