The Future Is Now: AI can be used to create extremely realistic forgeries, and we have to learn to counteract this

With Justin Sherman and Fotis Georgiadis

Fotis Georgiadis
Authority Magazine
12 min read · Nov 22, 2018

Educators need to teach students about technology, and they need to teach students how to critically use that technology to learn about the world — for instance, how to expertly use a search engine or critically consume information. Whether it’s social media impacting activism, or the Internet of Things impacting national security, or artificial intelligence impacting medicine, digital technology is moving everywhere and changing everything. To be more engaged thinkers, consumers, and citizens, students need to learn about how this technology works and how it’s being used. American policymakers need to similarly embrace this reality. For instance, the US has no broad, overarching national AI strategy — unlike countries such as France and China. Many of our Congressional representatives remain astoundingly uninformed on issues like encryption, and many of our courts’ judges suffer from a similar lack of understanding of digital technology. Policymakers need to be intelligently and actively engaged with the contemporary issues brought on by the likes of deepfakes if they are to lead effectively in the 21st century.

As part of my series about “Bleeding edge” technological breakthroughs that seem copied from science fiction, I had the pleasure of interviewing Justin Sherman. Justin is a junior at Duke University double-majoring in computer science and political science. He is a Fellow at Interact, a Fellow at the Duke Center on Law & Technology, and a Cybersecurity Policy Fellow at New America. At Duke, he is the Co-Founder and President of the university’s Cyber Club and Cyber Team, and the Co-Founder and Vice President of Ethical Tech.

Thank you so much for doing this with us! Can you tell us a story about what brought you to this specific career path?

When I was in middle school, I started teaching myself computer programming. This happened to coincide with my first civics class, through which I became intensely interested in law, politics, and issues such as search and seizure. For the next several years, these interests existed independently of one another (I was on a robotics team and ran my high school’s mock trial team, for instance) until I read Kim Zetter’s book “Countdown to Zero Day,” which profiled the Stuxnet cyber weapon used to sabotage the Iranian nuclear program. That was the moment when I realized that cybersecurity — and tech policy more broadly — was the perfect intersection of my interests. Her work led me to other books including Bruce Schneier’s “Data and Goliath,” and I’ve pursued work in this area ever since.

Can you share the most interesting story that happened to you since you began your career?

In my area of interest, the most interesting stories I’ve experienced are ones that I can’t repeat! There are a few cool stories I can pick from, though:

During my freshman year at Duke, I attended a small dinner event with Jared Cohen, President of Jigsaw at Alphabet, as part of Duke’s Program in American Grand Strategy. I remember being very excited for this dinner, and I couldn’t wait to meet him. He led a fantastic discussion, telling a number of fascinating stories that touched on everything from cybersecurity to online privacy to counter-extremism. The tales of his overseas trips to dangerous locales piqued my interest. When the dinner ended, all I remember thinking was, I have to talk to him! I was incredibly nervous — and could hardly decide what to ask him about — but I approached him afterwards and we had a conversation about vulnerabilities in AI systems, and how designers can ethically account for the dual-use nature of the technology before it’s released into the world. To this day, I’m pretty sure that dinner accelerated my interest in international relations.

Can you tell us about the “Bleeding edge” technological breakthroughs that you are working on? How do you think that will help people?

Along with Technology for Global Security, a West Coast-based think tank, I’ve been conducting research into deepfake technology — essentially machine learning that can produce fake but scarily realistic photos, video, and audio. Jordan Peele, director of Get Out, used these exact capabilities to help make a fake video of President Obama that went viral earlier this year. For the most part, it looks and sounds like President Obama is the one speaking, but then he starts to say some strange things. It’s hard for people to detect the fraudulence, especially if the content of the speech itself isn’t suspicious. Unfortunately, deepfake technology will likely do much more harm than good in society.

There is already technology that can produce incredibly realistic audio of you talking from just sixty seconds of recorded speech. Other techniques, like the one Peele used, make it quite easy to produce fraudulent videos. Many research labs are developing ways to detect whether this content is real, but deepfakes are still going to have scary impacts on the world.
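To make the underlying mechanics concrete, here is a minimal sketch of the adversarial training loop (a “generator” network learning to fool a “discriminator” network) that powers much deepfake imagery. This is an illustration only, assuming PyTorch is available; real deepfake pipelines use large convolutional networks trained on face datasets, not the tiny dense networks and random placeholder data used here.

```python
# Minimal sketch of the generator-vs-discriminator ("GAN") loop behind
# much deepfake imagery. Illustrative only: the dense networks and the
# random stand-in "face" data below are hypothetical placeholders.
import torch
import torch.nn as nn

IMG_DIM, NOISE_DIM = 64 * 64, 100  # flattened 64x64 grayscale images

# Generator: maps random noise to a synthetic image.
G = nn.Sequential(nn.Linear(NOISE_DIM, 256), nn.ReLU(),
                  nn.Linear(256, IMG_DIM), nn.Tanh())
# Discriminator: scores how "real" an image looks.
D = nn.Sequential(nn.Linear(IMG_DIM, 256), nn.LeakyReLU(0.2),
                  nn.Linear(256, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
loss_fn = nn.BCEWithLogitsLoss()

real_images = torch.rand(512, IMG_DIM) * 2 - 1  # stand-in for a face dataset

for step in range(200):
    real = real_images[torch.randint(0, 512, (32,))]
    fake = G(torch.randn(32, NOISE_DIM))

    # Train the discriminator: label real images 1, generated images 0.
    d_loss = (loss_fn(D(real), torch.ones(32, 1)) +
              loss_fn(D(fake.detach()), torch.zeros(32, 1)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Train the generator: push the discriminator to score fakes as real.
    g_loss = loss_fn(D(fake), torch.ones(32, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

Note that the generator improves precisely by learning to defeat a detector, which is part of why the detection work those research labs are doing is an arms race by construction.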

How do you think this might change the world?

The technology enabling deepfakes is yet another tool that lets actors wreak havoc on the world stage, and at an ever-lower barrier to entry.

Keeping “Black Mirror” in mind, can you see any potential drawbacks to this technology that people should think more deeply about?

Fake news is already a problem. We saw this illustrated recently when the White House press secretary used a doctored video to justify banning a reporter. But deepfakes are going to make the issue much, much worse — because even critical consumers of information may not be able to tell whether something is real. For many reasons, this could spell disaster.

World leaders have themselves demonstrated how Twitter and other social media platforms can dangerously propagate fake news with potentially devastating consequences. (In December 2016, a fake article claimed a former Israeli defense minister had threatened a nuclear attack on Pakistan if it sent troops to Syria. Pakistan’s Defense Minister, not yet realizing this was fake, tweeted “Israel forgets Pakistan is a nuclear state too.”) Imagine, then, that it’s not just a sketchy news article or a poorly Photoshopped image making the rounds online — instead, it’s a highly realistic (but fake) video of a world leader declaring war, which is then covered by a news outlet looking to “break the story” first, setting off a cascade of coverage. This is just a taste of the global security implications of this technology, not to mention the impact it could have on public discourse and the free press.

Was there a “tipping point” that led you to this research? Can you tell us that story?

I started reading about deepfakes shortly before Jordan Peele helped make the fake video of President Obama. Many of the technology’s early applications were for creating fake pornographic videos of celebrities (yet another alarming use case), but I had been reading a lot about AI and social media, and I knew the implications of deepfake tech would be far broader.

None of us are able to achieve success without some help along the way. Is there a particular person who you are grateful towards who helped get you to where you are? Can you share a story about that?

My high school math and computer science teacher, Dr. Mark Jones, who has had a decorated career in machine learning and STEM education. He was an important mentor at a time in my life when I was just beginning to explore this area of interest; from his hands-on experience at Bell Labs to his years as a college professor to his many cool patents, he has a breadth and depth of knowledge that he was always more than willing to share. What started with a few questions after class — when I would ask him little things about a Java project I was working on, or a WIRED article I had just read — quickly turned into hour-long conversations during lunch about everything from the construction of deep learning models to existential threats posed by artificial intelligence and mass data collection. My junior year, we even conducted a research project together, exploring how low-cost, easy-coding robotics kits could be repurposed for advanced computer science classes, which culminated in a demo to other students and faculty. To this day, Dr. Jones remains an important mentor in my life; it’s almost an inside joke between us that “a quick lunch to catch up” is code for a conversation that will actually last for hours. Without his insight, guidance, and support as I began this journey, I doubt I would be where I am today.

How have you used your success to bring goodness to the world?

At Duke, I co-founded Ethical Tech, which is focused on fostering discussion and the exchange of ideas about ethical technology across all of society — in other words, empowering ALL people to have a voice in technology innovation, consumption, and regulation. In a world where conversations and decisions about technology are remarkably homogeneous and exclusive, I fear that if current practices go unchecked, we risk further concentrating power and exacerbating existing inequalities. Our group has worked on everything from developing an open-source “Ethical Tech 101” college course curriculum to building training modules that help judges understand the prejudiced recidivism predictors used in state courts around the US to inform bail and sentencing. I’m excited to see how the team will continue to grow and impact the world for the better.

What are your “5 Things I Wish Someone Told Me Before I Started” and why? (Please share a story or example for each.)

  1. Value mentorship. I would not be where I am today without the support of an amazing group of mentors, who have provided everything from personal guidance to job advice to subject matter expertise on a variety of issues. Seek out those who can lift you up, and recognize the importance of mentorship when you have the opportunity to provide the same to others.
  2. Take a risk and build a network. You never know what will happen when you reach out to someone, and as generic as this sounds, it’s something I wish I had understood earlier. I’ve gotten publishing opportunities because I sent a passionate cold email; I’ve also discovered research opportunities and job prospects, and spoken with some prolific individuals, simply by making the effort to communicate with people in my area of interest. Many times it will not lead to anything. But some people are more than willing to talk to or help someone with the same interests. Take the risk!
  3. If they don’t teach it, go out and learn it. This applies to many people in many different places. In my case, I was largely unable to learn about cybersecurity and tech policy from the coursework offered at my high school, and the same is true now at my university. But that just meant I had to be creative. I read a ton of books; I took a lot of online classes; and I turned to the Internet to seek out those who could better teach me about these subjects. Not everyone has the privilege of being able to do this, which is all the more reason to self-educate if you are able.
  4. Actively seek an intersectional and interdisciplinary lens. I have a lot of stories on this, but to give a tech-related example: I’ve taken classes at Duke that covered topics like design thinking, cognitive biases, and ethnography — all aimed at better user experiences, tailored to our own predispositions and decision-making processes — but when I read much of the existing literature on cybersecurity, it operated without consideration for these subjects. Today, there is far more literature on human-centered cybersecurity and human-centered cybersecurity education than there was even a few years ago, but many people had never considered this interdisciplinary angle until it was presented to them.
  5. Actively seek out opposing views. This is essential in a polarized political climate, but it’s important, more broadly, for any decision you make. As I was standing up Ethical Tech with my co-founder Cassi Carley, she and I had many conversations that we thought covered every base but that were in fact constrained by our own preconceived ideas; only by speaking with a third-party advisor did we gain clarity and recognize our blindness to another possible course of action. Now I try much harder to seek out conflicting viewpoints, which, beyond being interesting in their own right, can provide important perspective that shapes my own decision-making.

You are a person of great influence. If you could inspire a movement that would bring the most amount of good to the most amount of people, what would that be? You never know what your idea can trigger. :-)

Educators need to teach students about technology, and they need to teach students how to critically use that technology to learn about the world — for instance, how to expertly use a search engine or critically consume information. Whether it’s social media impacting activism, or the Internet of Things impacting national security, or artificial intelligence impacting medicine, digital technology is moving everywhere and changing everything. To be more engaged thinkers, consumers, and citizens, students need to learn about how this technology works and how it’s being used.

American policymakers need to similarly embrace this reality. For instance, the US has no broad, overarching national AI strategy — unlike countries such as France and China. Many of our Congressional representatives remain astoundingly uninformed on issues like encryption, and many of our courts’ judges suffer from a similar lack of understanding of digital technology. Policymakers need to be intelligently and actively engaged with the contemporary issues brought on by the likes of deepfakes if they are to lead effectively in the 21st century.

Can you please give us your favorite “Life Lesson Quote”? Can you share how that was relevant to you in your life?

Stephen Jay Gould has a line in his book “The Panda’s Thumb” that reads, “I am, somehow, less interested in the weight and convolutions of Einstein’s brain than in the near certainty that people of equal talent have lived and died in cotton fields and sweatshops.” This line has stuck with me ever since I first read it. To me, it serves as a reminder not just to appreciate and acknowledge the privilege and opportunities we may have been given, but also to ensure that those same opportunities are made available to as many other people as possible. As Gould suggests, if we fully harnessed all the talent and capability on the planet, who knows what humanity could accomplish?

Some very well known VCs read this column. If you had 60 seconds to make a pitch to a VC, what would you say? He or she might just see this if we tag them :-)

I would say that the speed and scale of technology’s development makes questions of technology ethics some of the most important of our time — for if we don’t educate, think, and act quickly on issues from data privacy to cybersecurity to algorithmic bias to global Internet access, the consequences could be serious. Ethical Tech is working to address ethics in technology in the hope that better technology education, heightened inclusivity, and democratized decision-making can help leverage technology for the better while avoiding or mitigating potentially harmful outcomes. From research and writing to speaking and collaborating with important decision-makers, we are working hard to empower “ethical tech.”

Our first project, for example, centers on the legal justice system’s use of algorithms. Many courts use so-called Risk Assessment Algorithms (RAAs) for pre-trial and bail hearings, sentencing and parole, and even juvenile justice. However, most judges have had no formal training in how algorithms work and may lack the technical background needed to properly vet new ones. In collaboration with the Duke Center on Law & Technology, Ethical Tech has created an initial set of tools that clearly explain the general end-to-end development process of Risk Assessment Algorithms. Our goal is to provide a toolkit with common language that enables judges to understand the social and legal implications of the algorithmic design decisions that shape tools like RAAs and, in turn, the justice process.
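To make concrete what those design decisions look like, here is a minimal, hypothetical sketch of a risk-scoring model: a toy logistic regression on synthetic data. The features, labels, and cutoffs are all invented for illustration; real RAAs are proprietary and far more complex.

```python
# A toy "risk assessment" score: logistic regression on synthetic data.
# Every feature and number here is hypothetical, chosen only to show
# where human design decisions enter the pipeline.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# Design decision #1: which features describe a defendant at all.
# Here, an invented pair: [age, number of prior arrests].
X = np.column_stack([rng.integers(18, 70, n), rng.poisson(1.5, n)])
y = rng.binomial(1, 0.3, n)  # synthetic "reoffended" labels

model = LogisticRegression().fit(X, y)
scores = model.predict_proba(X)[:, 1]  # risk score in [0, 1]

# Design decision #2: where to draw the "high risk" cutoff. Moving it
# changes who gets flagged, independent of the model itself.
for cutoff in (0.25, 0.35):
    print(f"cutoff {cutoff}: {np.mean(scores >= cutoff):.0%} flagged high-risk")
```

Even in this toy version, two choices a judge may never see (which features go in, and where the cutoff sits) determine who gets flagged as high-risk.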

The bigger picture is that similarly biased or otherwise flawed systems are used everywhere from medicine (e.g., skin cancer detection) to facial recognition (e.g., surveillance systems) to, increasingly, cars and weapons systems. Human lives are being impacted — prejudiced recidivism predictors might amplify disparities in prison sentencing, and biased skin cancer predictors might fail to detect cancer in patients of certain races — so enabling everyone to learn about and influence such technologies is absolutely essential.
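One concrete way for outsiders to surface this kind of bias is to compare a system’s error rates across demographic groups, the approach behind well-known audits of recidivism tools. The sketch below uses invented stand-in data to show the idea; none of the numbers reflect any real system.

```python
# Comparing false positive rates across two groups, the core move in
# well-known audits of recidivism tools. All data here is invented.
import numpy as np

rng = np.random.default_rng(1)
group = rng.integers(0, 2, 2000)        # 0/1 demographic label
actual = rng.binomial(1, 0.3, 2000)     # whether the person reoffended
# Hypothetical predictions, built to be systematically harsher on group 1.
predicted = np.where(rng.random(2000) < 0.2 + 0.15 * group, 1, actual)

for g in (0, 1):
    mask = (group == g) & (actual == 0)  # people who did not reoffend
    fpr = predicted[mask].mean()         # share wrongly flagged anyway
    print(f"group {g}: false positive rate {fpr:.0%}")
```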

How can our readers follow you on social media?

Managing the sheer volume of information in my Twitter feed was a part-time job, so that was short-lived. For now, you can follow me on LinkedIn.

Thank you so much for joining us. This was very inspirational.
