Defeating Deepfakes: Conrad Tucker of Carnegie Mellon University On How We Can Identify Convincingly Real Fake Video, Pictures, and Writing, And How We Can Push Back

Yitzi Weiner · Published in Authority Magazine · 10 min read · Sep 9, 2024

The world moves slowly, and in many cases, slower than we would like to see change happen. That can be quite frustrating for change agents or change makers. And so how do we reconcile how fast we want to move with how fast the system is ready and able to adapt?

Most of us are very impressed with the results produced by generative AI tools like ChatGPT, DALL-E, and Midjourney. Their results are indeed very impressive. But all of us will be struggling with a huge problem in the near future. With AI able to create convincingly real images, video, and text, how will we know what is real and what is fake? See this NYT article for a recent example. This is not just a problem for the future; it is already a struggle today. Media organizations are contending with fake people, applicants with AI-generated faces and AI-generated text, applying to do interviews. This problem will only get worse as AI becomes more advanced. In this interview series, called “Defeating Deepfakes: How We Can Identify Convincingly Real Fake Video, Pictures, and Writing, And How We Can Push Back,” we are talking to thought leaders, business leaders, journalists, editors, and media publishers about how to identify fake text, fake images, and fake video, and what all of us can do to push back against disinformation spread by deepfakes. As a part of this series, we had the distinct pleasure of interviewing Conrad Tucker.

Conrad Tucker is director of Carnegie Mellon University Africa, the CMU College of Engineering location in Rwanda. Tucker is also associate dean for international affairs-Africa and professor of mechanical engineering at Carnegie Mellon. He holds courtesy faculty appointments in machine learning, robotics, and biomedical engineering and is a commissioner for the U.S. Chamber of Commerce Artificial Intelligence Commission on Competitiveness, Inclusion, and Innovation.

Thank you so much for joining us. Before we dive in, our readers would love to “get to know you” a bit better. Can you share with us the “backstory” about how you got started in your career?

I became interested in engineering because, from very early in my academic career, I was really drawn to problem-solving and things that could be quantified. As much as I love and appreciate the arts, numbers just made more sense to me. That technical foundation led me to the engineering field — and specifically mechanical engineering — because of the breadth and generalizability of how it could be applied to solving real problems.

Can you share the most interesting story that occurred to you in the course of your career?

Shortly after I joined the faculty at Carnegie Mellon University, I had the opportunity to organize and participate in the Workshop on Artificial Intelligence and the Future of STEM and Societies. I was excited about the event, which welcomed engineering educators, AI experts, and policymakers to discuss how AI can transform STEM education and workforce development. What I didn’t know at the time was that this workshop would be the catalyst for my work in the deepfake space. A handful of workshop participants and I were discussing how generative AI could potentially democratize STEM education by providing customized learning content. As engineers typically do, we also brought up how the things we create could be used adversely. For example, this same generative AI technology could be used to generate misinformation, with real consequences for STEM learners who may become overly reliant on these tools. Our deepfake ideas expanded from there.

In 2019, deepfakes were in their very nascent stages, and not much research was being done on how they would be used or what kinds of regulation were needed. The colleagues I met at the workshop ended up serving as my collaborators on National Science Foundation-funded research that looks at the effect of deepfake technology on education.

It has been said that our mistakes can be our greatest teachers. Can you share a story about the funniest mistake you made when you were first starting? Can you tell us what lesson you learned from that?

One mistake that I made early in my career as a faculty member was not understanding the impact and value of mentors and coaches. When I started in academia as a professor, I had the same mindset that I did when I was a graduate student: do research, write papers, and spend time in the classroom. I didn’t realize how important grant writing is in a faculty member’s career.

So, almost a year had gone by, and I was writing papers and teaching. I thought, “I’m doing a great job as a professor!” Then one day, I was serendipitously having coffee with one of my colleagues, and they posed a question to me: “How many proposals have you submitted?” And I said, “Proposals, with an ‘s’?”

Then I realized how far off I was on a very critical component of the job: securing funding to support research. It was good feedback. This is why it’s important to always form professional networks, whether formal or informal.

What are some of the most interesting or exciting projects you are working on now?

In the Artificial Intelligence in Products Engineered for X (AiPEX) Lab, we explore machine learning methods that predictively improve the outcome of product design solutions through the acquisition, fusion, and mining of large-scale, publicly available data. Some interesting projects in the lab right now are focused on developing AI robotics models that have a better understanding of the governing physics of the world. As engineers, we know that a deep understanding of the way our environments work helps us design engineering systems. Enabling AI to replicate or understand the physical world is a very, very challenging problem. The way this relates to deepfakes is that, in order to make advancements, we need to be mindful of the ability of AI to replicate the world, and of the potential to fool these models into learning a completely different understanding of it.

I’m also very fortunate to be at an institution that has a global mindset when it comes to engineering research and education. Carnegie Mellon University Africa is an important part of the mission of the College of Engineering, which is to build and sustain a global community of engineering leaders to achieve real and enduring good. Personally, being at CMU-Africa has inspired new types of research projects for me and allowed me to work with students from different backgrounds who have the passion, knowledge, and skills to transform the continent.

For the benefit of our readers, can you share why you are an authority on the topic of Deepfakes?

As part of the global research community, my colleagues and I have our work vetted through publications, grants, and partnerships. This enables us to speak from a position of authority on these topics. In fact, academics (and academic institutions) are in a position to lead the global conversation about what needs to be regulated when it comes to deepfakes, because we are independent, neutral parties.

Ok, thank you for that. Let’s now shift to the main parts of our interview. Let’s start with a basic set of definitions so that we are all on the same page. Can you help define what a “Deepfake” is? How is it different than a parody or satire?

So, let’s start with the first part of the word, “deep.” This references the deep neural networks that form the architecture of these AI models. A deepfake, then, is a class of AI-generated content that is convincing enough that a human observer believes it to be real.
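To make the “deep” part concrete: in a framework like PyTorch, a deep model is simply a network with many stacked layers. The minimal sketch below is purely illustrative (it is not a deepfake generator, and the layer sizes are arbitrary); real deepfake systems rely on far larger architectures, such as GANs or diffusion models.

```python
# Illustrative sketch only: a small "deep" neural network in PyTorch.
# The "deep" in "deepfake" refers to this kind of stacked-layer design;
# actual deepfake generators are vastly larger.
import torch
import torch.nn as nn

deep_net = nn.Sequential(
    nn.Linear(64, 128), nn.ReLU(),   # layer 1
    nn.Linear(128, 128), nn.ReLU(),  # layer 2
    nn.Linear(128, 128), nn.ReLU(),  # layer 3: "deep" = many such layers
    nn.Linear(128, 64),              # output layer
)

x = torch.randn(1, 64)     # a dummy input vector
print(deep_net(x).shape)   # torch.Size([1, 64])
```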

Can you help articulate to our readers why Deepfakes should be a serious concern right now, and why we should take measures to identify them?

The ability to generate content that is human-like or appears human-curated has both pros and cons. On one hand, yes, deepfakes could be used for manipulation in elections and in other adversarial ways. But on the other hand, the same technology could be used to generate educational content that makes high-quality education accessible to communities that may not have had access to this material.

The architecture for both uses is similar. What the architecture is used for — I think that’s where we as a society have to come up with guidelines, policies, and consequences for when these tools are used in ways that may be nefarious or harmful to humanity.

Why would a person go to such lengths to create a deepfake? How exactly can malicious actors benefit from making them?

We’re getting to a space where human beings can form emotional connections to AI models. This would have been all but unthinkable several decades ago.

Imagine an AI-generated voice that is soothing, trustworthy. In some cases, the creator of this AI model may not originally have nefarious intent, but just the fact that humans can form emotional bonds to AI models provides a lot of leverage for the creators of these AI models. This opens the door to manipulation; now you have the ability to influence behavior.

Can you please share with our readers a few ways to identify fake images? Similarly, can you please share with our readers a few ways to identify fake audio? Next, can you please share with our readers a few ways to identify fake text? Finally, can you please share with our readers a few ways to identify fake video?

We’re past the point of relying on the average user to distinguish between authentic and AI-generated content, whether photo, video, or audio. The fidelity of deepfakes makes it impractical for an individual to identify the difference. That said, video is a more difficult medium for an AI model to fake convincingly, because human beings are very intuitive at spotting slight errors in the passage of time.

How can the public neutralize the threat posed by deepfakes? Is there anything we can do to push back?

A good analogue to deepfakes is counterfeit currency. We’re at the stage where it’s impractical for an average user to spot a fake hundred-dollar bill. So, what can people do to make sure they are using real money? Make sure they are getting their money from a trusted source, like a bank.

In the same way, it is impractical to expect the average user to determine if an image, video, or audio is real or fake. Policies need to be established to protect users so that they can be confident that the content they are consuming can be trusted. Just like with the example of counterfeit currency, users should always go back to the source of information and think critically about whether they may be viewing misleading or false information through a deepfake.

This is the signature question we ask in most of our interviews. Can you share your “5 Things I Wish Someone Told Me When I First Started” and why? Please share a story or an example for each.

1. One thing I wish someone told me is how much theory and practice differ. We can come up with all the algorithms we want when it comes to something like human behavior. But day-to-day interaction with people is a whole lot more nuanced than that.

2. Life is a contact sport, and I think that should inspire us to get out of our labs and engage with people in the communities we are seeking to positively impact.

3. There’s the world we aspire to, and then there’s the world as it is. I think being able to understand and accept the constraints we have is a good starting point for deciding what we spend our time doing.

4. Quality education comes at a cost. The question is, “Who pays for it?” If a society understands and appreciates the long-term positive impacts of education, then society itself may see it as a worthwhile investment. An educated population is one with the strong critical-thinking skills to be less susceptible to deepfakes. So in the end, quality education is a net positive across many societal dimensions.

5. The world moves slowly, and in many cases, slower than we would like to see change happen. That can be quite frustrating for change agents or change makers. And so how do we reconcile how fast we want to move with how fast the system is ready and able to adapt?

You are a person of enormous influence. If you could start a movement that would bring the most amount of good to the greatest amount of people, what would that be? You never know what your idea can trigger. :-)

The movement I would start is to change the global incentive system so that we focus less on the net worth of individuals or countries and more on their net impact. In this alternate universe, famous people aren’t necessarily movie stars, athletes, or billionaires. Instead, they are the people who positively impact the most lives. How wonderful would it be if impact became the currency that we trade and assign value to?

How can our readers further follow your work online?

Please visit my lab website or follow me on LinkedIn to learn more about me and my research. I also encourage you to read about how Carnegie Mellon University Africa is educating the next generation of African tech leaders and innovators.

Thank you so much for the time you spent on this. We greatly appreciate it and wish you continued success!
