Bloomberg: Identifying a Deepfake | Shamir Allibhai & Caroline Hyde (video)

Amber Video
Jan 24, 2020 · 16 min read

Caroline Hyde: …deepfakes and, Shamir, first of all, give that context. How easy or hard was that to make?

Shamir Allibhai: So that was over a year ago and it was relatively easy back then. Now, it’s even easier. And not just that it’s easier; it’s getting cheaper and the quality is getting better.

Caroline Hyde: You were saying how your CTO just knocked one up in the airport in 15 minutes the other day.

Shamir Allibhai: Yeah.

Caroline Hyde: So tell us a little bit about Amber Video because you’ve got a really interesting background. This isn’t your bread and butter, deepfakes. You came into authenticating video for probably a more necessary day-to-day purpose, right?

Shamir Allibhai: Yeah. The origin of Amber was that a couple of events happened. One was the Black Lives Matter movement. It was obviously very sad what was happening on the ground, but if you took a step back, it was really fascinating — a movement formed, in part, around bystander video, footage that could be used as evidence to counter potentially systemic biases.

Shamir Allibhai: The second was — any Star Wars fans here? — Rogue One, where they brought two characters back to life, one of whom was Peter Cushing. They went back to footage from 22 years earlier, modeled it onto a human body, and he looked like any other human actor on screen. It was also pretty stark that this technology is becoming democratized. That wasn’t deepfake technology specifically, but the technology overall is.

Shamir Allibhai: And the third was all this talk about fake news, and we started thinking: what happens when there’s fake audio? What happens when there’s fake video? Could we just ignore the truth? Just the existence of fakes delegitimizes genuine evidence and allows us to dismiss it, and we didn’t want that to happen. So we really started to think about how we could authenticate video — think about footage from security cameras or police body cameras. We didn’t want due process to be undermined, so when this footage makes its way into a prosecution, could any party, any stakeholder, have confidence in the veracity of that video, that nothing has been altered? That’s really where we started the company.

Caroline Hyde: Very kindly, since the birth of the company and agreeing to come and join us today, you helped make a deepfake of our own. I haven’t actually seen this. I want to roll this video now and then you can talk us through how you made it.

Nicole Sawyer as Caroline: And you know what else is quite lumpy? Garbage piles on New York City streets. I mean, come on, nothing makes me want to wear an “I Love New York City” T-shirt more than when I get a big whiff of rotten sewage in the summer. And speaking of myths in Brooklyn, what’s romantic about walking across the Brooklyn Bridge? It’s so crowded! You can’t even move. But the worst has got to be the Brooklyn Nets fans. I mean, come on, guys. When have they ever won a game? Losers.

Nicole Sawyer: Surprise! This is not Caroline Hyde. I’m Nicole Sawyer and this is a deepfake discussion. Joke’s on you, Caroline. I faked you! Hahahahaha!

Shamir Allibhai: Don’t blame us; Nicole’s idea.

Caroline Hyde: So I could see the face slowly morphing into my own. How do you go about doing… How quickly could it be that not just those who run companies such as yours, but anyone and everyone in the street, is able to make such things?

Shamir Allibhai: So first, let’s talk about fake video, because as we’ve seen, it doesn’t have to be just deepfakes. Deepfakes are a subcategory of fake video. When you think about the Nancy Pelosi video that was quite controversial, or the Jim Acosta/White House intern video, those weren’t deepfakes, but they fall within this gamut of fake video. A deepfake could be fake video or synthetic audio. In this case, it used a technology called face swapping, which you can easily download off the Internet.

Shamir Allibhai: You take footage of you, take footage of Nicole, and put it into the system. You need maybe 10 or 15 minutes of human work right now — and this is October 2019; it’ll decrease — and then the cloud computing runs for maybe 12 to 24 hours. And if you see it, it’s actually really, really good. There are very few visible lines. What it was, was one face being swapped out for the other, and making sure the edges of the face blur in nicely, because your two faces are obviously different shapes. And it’s really simple: anyone can download the software off the Internet, load up a cloud server, and produce this fake.
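To make the “blur the edges in nicely” step concrete, here is a minimal, illustrative blending sketch in Python with OpenCV. The filenames, face location, and mask are placeholders, not anything from the interview; real deepfake tools generate the swapped face and mask with neural networks, and this only shows the final compositing stage.

```python
import cv2

# Assumed inputs (placeholders): swapped_face.jpg is a cropped face patch from
# whatever swap model was used, face_mask.png is a white-on-black mask of the
# same size marking the face region, and target_frame.jpg is the frame being
# altered.
face_patch = cv2.imread("swapped_face.jpg")
mask = cv2.imread("face_mask.png", cv2.IMREAD_GRAYSCALE)
frame = cv2.imread("target_frame.jpg")

# Where the face sits in the target frame; in practice this would come from a
# face detector. These coordinates are placeholders.
center = (640, 360)

# Poisson ("seamless") cloning matches colour and lighting across the mask
# boundary, which is what hides the join between two differently shaped faces.
blended = cv2.seamlessClone(face_patch, frame, mask, center, cv2.NORMAL_CLONE)
cv2.imwrite("blended_frame.jpg", blended)
```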

Shamir Allibhai: The other component is that we’ve obviously always had people sowing disinformation and creating propaganda. We’ve had Photoshop for a very long time, so creating fakes is not necessarily new. Creating fake audio and fake video in this really believable way, imperceptible to human eyes and ears, is new, but the other piece is the ability to distribute fakes in seconds, globally, with algorithms that micro-target viewers if they’re optimized for engagement — maybe they’re trying to incite those users and keep them on the platform longer. So it’s not just the creation; it’s also the distribution.

Caroline Hyde: Creation, distribution — and you’re clearly also looking at it from the criminal side, ensuring that video of acts of crime is what it appears to be. What do you think, therefore, is needed to tackle this growing issue? We’re confronting it every day, we’re debating it in the media and in public discourse, but do you think it’s legislation that needs to work on this, is it technology, or is it the people in the audience?

Shamir Allibhai: Yeah, I think the first thing to really hammer home here is that this technology is amoral. You can use it for good purposes, like creating satire or bringing back Marilyn Monroe in the next feature film, but you can also use it for ill. One of the areas we’ve talked a lot about is the weaponization of this deepfake technology against women, putting their heads into pornography, and there we really strongly feel a legislative approach is warranted. Ultimately, though, you can’t legislate the software; you can’t legislate against this technology. It is out there, it’s getting better, people are releasing new versions, and you can’t control what’s on the internet. But I do think being able to compel the Facebooks, the Twitters, the YouTubes to share that knowledge is the first step.

Caroline Hyde: Are they sharing their knowledge? Because I know that they’ve been tagging videos, particularly related to child pornography.

Shamir Allibhai: Not, generally speaking, for deepfakes, no. They each have their own silos and their own guidelines for what to do when they even identify a fake. Forget about deepfakes for a second: with the Nancy Pelosi video, each platform took its own view on what to do with it. That could include removing it altogether, downranking it in the newsfeed, showing it less often, or communicating that it has been flagged as a fake. So that’s one approach.

Shamir Allibhai: Secondly, in the short to mid term, I think the detection software we’ve built — and that others are building — to identify when there is a fake, to at least know that this is a deepfake, is important. Ultimately, though, I think that’s a losing battle. This deepfake technology is getting really, really good, and ultimately it will circumvent the detection tools.

Caroline Hyde: Really?

Shamir Allibhai: Absolutely. The whole nature of this technology is that it’s built as an adversarial network: one side is trying to create a fake, the other is trying to detect the fake. So even within itself, within the infrastructure, the core component is trying to get better, and the machine learning is improving all the time.
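The adversarial setup Shamir describes is the core idea of a generative adversarial network (GAN): a generator learns to produce fakes while a discriminator learns to flag them, and each improves against the other. A minimal, illustrative PyTorch training loop follows; the model sizes and the random “real” data are toy placeholders, not an actual deepfake model.

```python
import torch
import torch.nn as nn

# Toy GAN: the generator ("faker") and the discriminator ("detector") are
# trained against each other, which is why pure detection is a moving target.
latent_dim, data_dim = 16, 64

generator = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, data_dim))
discriminator = nn.Sequential(nn.Linear(data_dim, 128), nn.ReLU(), nn.Linear(128, 1))

g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
loss_fn = nn.BCEWithLogitsLoss()

def real_batch(batch_size: int = 32) -> torch.Tensor:
    # Stand-in for real data (e.g. face crops); random noise here for illustration.
    return torch.randn(batch_size, data_dim)

for step in range(1000):
    real = real_batch()
    fake = generator(torch.randn(real.size(0), latent_dim))

    # 1) Train the detector: label real samples 1, generated samples 0.
    d_opt.zero_grad()
    d_loss = (loss_fn(discriminator(real), torch.ones(real.size(0), 1))
              + loss_fn(discriminator(fake.detach()), torch.zeros(real.size(0), 1)))
    d_loss.backward()
    d_opt.step()

    # 2) Train the faker to fool the detector: push its outputs toward label 1.
    g_opt.zero_grad()
    g_loss = loss_fn(discriminator(fake), torch.ones(real.size(0), 1))
    g_loss.backward()
    g_opt.step()
```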

Caroline Hyde: So it’s a game of Whack-a-Mole, basically?

Shamir Allibhai: Absolutely. And the third, which we really believe is the more durable approach, is: can we authenticate video of criticality at the source? That could include video that could become evidence — in news, in politics — the kind of video that has an evidentiary quality to it.

Caroline Hyde: How do you do that? How do you mark it? Is that where we go to the good old “blockchain will save us all” kind of technology?

Shamir Allibhai: I don’t know if blockchain will solve it, but what we do need is a number of stakeholders, like the camera manufacturers, where we’re fingerprinting or watermarking video at the source and then tracking its provenance. One of the approaches is to use a blockchain as a kind of database that anybody, any stakeholder, can look up without having to trust each other. So they can look up the record in the blockchain and say, “This video was recorded at 2:02 p.m. on October 29th.” Because the nature of blockchains, in theory, is that they’re transparent yet immutable, any stakeholder can check that record. No one has to say, “Let’s just trust the police. Let’s just trust the prosecution.” All stakeholders in that process can have confidence in the veracity of the video — or, if it has changed, see where it has changed.
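The provenance idea can be sketched without committing to any particular blockchain: fingerprint (hash) the file when it is recorded, then append a timestamped record to a ledger in which each entry commits to the previous one, so later tampering with the log is detectable. A minimal Python illustration follows; the ledger here is a local list, whereas a real deployment would anchor records to a shared, immutable store, and the function names are assumptions made for this sketch.

```python
import hashlib
import json
import time

def fingerprint(path: str, chunk_size: int = 1 << 20) -> str:
    """SHA-256 of the video file, computed in chunks so large files stream."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def record_digest(record: dict) -> str:
    """Hash of a record's contents, excluding its own hash field."""
    body = {k: v for k, v in record.items() if k != "record_hash"}
    return hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()

def append_record(ledger: list, video_path: str) -> dict:
    """Append a timestamped provenance record that chains to the previous one."""
    record = {
        "video_fingerprint": fingerprint(video_path),
        "recorded_at": time.time(),
        "prev_record_hash": ledger[-1]["record_hash"] if ledger else "0" * 64,
    }
    record["record_hash"] = record_digest(record)
    ledger.append(record)
    return record

def verify(ledger: list, video_path: str, index: int) -> bool:
    """Any stakeholder can re-hash the file and the chain to check integrity."""
    chain_ok = all(
        r["record_hash"] == record_digest(r)
        and (i == 0 or r["prev_record_hash"] == ledger[i - 1]["record_hash"])
        for i, r in enumerate(ledger)
    )
    return chain_ok and fingerprint(video_path) == ledger[index]["video_fingerprint"]
```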

Caroline Hyde: Who calls you the most at the moment, potential clients? Journalists?

Shamir Allibhai: Journalists. Journalists call us the most, absolutely.

Caroline Hyde: Really? Asking what?

Shamir Allibhai: I think we were behind on fake news. It’s kind of a loaded term — what is fake news? It means many things to many people. But in short: deceptive blogs with the veneer of newsworthiness, or of being news, shared online — let’s just call that fake news. I think, by and large, we were behind on that.

Caroline Hyde: On the trend?

Shamir Allibhai: Yeah, and in realizing its danger and its ability to polarize democratic societies. I think it had some impact in the last presidential election. I don’t think anyone switched their votes because of fake news, but it polarized communities and societies. And so I think the realization of that impact, I think, has catalyzed people to say we need to get ahead of this problem — synthetic audio, synthetic video.

Shamir Allibhai: And we have also, oftentimes, grown up with the slogan in our heads that seeing is believing, and that’s quickly becoming untrue. I think that’s a stark realization, and journalists want to really put forth this message, so they’re looking at different scenarios.

Shamir Allibhai: So, yes, journalists do call us a lot, but so do brands. People are concerned: could their CEO be deepfaked? He or she could be speaking at a conference on a benign topic, someone takes that video and deepfakes the CEO so he or she is saying something racist, and that video goes viral on Twitter. No one knows whether it’s a genuine video or a fake, journalists are calling and asking, “Is this true?”, the social media team is saying, “I don’t know,” and it’s getting escalated. So I think companies are another group.

Shamir Allibhai: The intel community as well — they’re obviously using video and wanting to know: is this a genuine video? Is this a video that a group has actually put out there?

Caroline Hyde: And so your technology is already able to show that or your technology is already able to watermark?

Shamir Allibhai: Both. We have two products: one is on the authentication side, where our software runs at the origin — at the cameras, at the source — and the other is detection, which looks at the noise left by some of these deepfake tools as a sign of alteration. So it’s on both sides. But the optimal is where there are numerous stakeholders. Just think about the workflow of video: there are camera manufacturers, there’s editing software, there are broadcasters, distributors, and the social media platforms.
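To illustrate the general idea of looking for noise left behind by editing tools (this is a generic forensic cue, not a description of Amber’s actual detector), one common family of checks computes a high-pass noise residual per frame and flags regions, often around a pasted face, whose residual statistics differ from the rest of the frame. A minimal sketch, with a placeholder filename:

```python
import cv2
import numpy as np

def noise_residual(gray: np.ndarray) -> np.ndarray:
    """High-pass residual: the frame minus a denoised copy of itself.
    Spliced or synthesized regions often carry residual statistics that differ
    from camera-native content."""
    denoised = cv2.GaussianBlur(gray, (5, 5), 0)
    return gray.astype(np.float32) - denoised.astype(np.float32)

def blockwise_energy(residual: np.ndarray, block: int = 32) -> np.ndarray:
    """Mean absolute residual per block; outlier blocks are candidates for
    alteration. A toy forensic cue, not a production detector."""
    h, w = residual.shape
    h, w = h - h % block, w - w % block
    r = np.abs(residual[:h, :w]).reshape(h // block, block, w // block, block)
    return r.mean(axis=(1, 3))

frame = cv2.imread("frame.jpg", cv2.IMREAD_GRAYSCALE)  # placeholder input frame
energy = blockwise_energy(noise_residual(frame))
# Flag blocks whose residual energy is far from the frame-wide median.
suspicious = np.abs(energy - np.median(energy)) > 3 * energy.std()
print(f"{suspicious.sum()} of {suspicious.size} blocks look inconsistent")
```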

Shamir Allibhai: So the optimal is where everyone gets together and says: what we really need is an approach that lets us have confidence in video, and lets our users have confidence and trust in video, especially video of criticality. Again, deepfakes can be used for entertainment purposes, and we’re all going to laugh and it’s going to be funny, but there’s a seriousness here: I think we want to preserve truth.

Caroline Hyde: Do you think, in some way, the user has to become a little bit more cynical as well? I mean, not the user; the viewer?

Shamir Allibhai: I think it’s happening. The popularity of this term “fake news” being thrown about has, I think, led people to ask, “Hm, is this post on Facebook true or sincere?” So I think it’s happening to some extent, but yes, a sad consequence is that we will have to question, we will have to wonder. And it has real impact. We talk a lot about social media, and Facebook has definitely been under fire, some of it for genuine reasons, but I do think a lot is not being discussed.

Shamir Allibhai: There are other scenarios that are not related to social media. For example, a candidate is applying for a job, the company wants to make an offer, they’re doing a background check, and they come across a video of this potential candidate — let’s say it’s a man — saying something misogynistic. Is that video genuine or not? Just the fact that it might be genuine means they could pull that offer, so there are real consequences. There’s a lot to discuss about fakes on social media, but there are also practical examples like this that I think we haven’t thought through yet.

Caroline Hyde: Let’s open out to the audience. I’m sure there’s plenty of questions out there for you in terms of the means of these, the main way in which they’re made. Have we got anyone wanting to ask a question? I’ve still got plenty, but we’ll see whether people warm up.

Shamir Allibhai: Everyone’s worried; everyone’s reflecting.

Caroline Hyde: They’re suddenly concerned about their own CEO and whether or not there’s… Oh yes, we have one in the middle. Have we got a microphone just able to come to the middle here? You can shout. We’ll, hopefully, get a…

Caroline Hyde: “Are there legal consequences to be put in place for people who do this?”

Shamir Allibhai: Yeah. That’s what California has recently proposed, and, generally speaking, there is a movement — not necessarily related to deepfakes — around weaponization like revenge porn, and creating some legislation around that. That’s, for example, when ex-partners or ex-lovers release intimate footage of their ex-partner online after they’ve broken up. So there is increasing legislation in that area, which can encompass deepfakes — deepfake pornography or kind of revenge deepfakes — but by and large, no.

Shamir Allibhai: By and large, this is anonymous. When you put a video online, you’re not necessarily associated with it. You don’t know where it’s been created. You could just retweet it, you could get an email with a video in it, so you don’t really know where it’s coming from. Even if you could come up with legislation, it’s really difficult to enforce.

Caroline Hyde: Interesting. Great question. Yes? We’ve got one just down the side. The microphone’s coming your way, if you can introduce yourself and ask the question.

Tariq: Sure. My name’s Tariq. Thanks for letting us ask questions. I didn’t know we could do this here. My question — I think about this a lot — I have a four and a half-year-old daughter at home and I know a lot of people like to say, “I’m doing this for my kids and my grandkids,” but this is a concern because — I’m just curious what you think — in 10 years, let’s say, what this world actually looks like when it comes to consuming information. We will obviously be surrounded by fakes, deepfakes, we’ll obviously be surrounded by real information, presumably, there’ll be technology that starts to validate some of this. But do you picture a world where my daughter, for instance, does research, looks for a validation, or will it one day swing all the way back where maybe there’s just trust for what’s being put out there?

Shamir Allibhai: I, by and large, am not optimistic that consumers or the average user is going to validate each ad, each post, each video, each audio. It’s just much too much work. I hope that there are checks and balances before it actually gets to the viewer’s eyeballs or ears. I think where I’m seeing it is that the pendulum’s swinging back to smaller groups, smaller, more intimate groups that there is an inherent trust because you know the other people in that group, so I think that might be one of the outcomes.

Caroline Hyde: There’s a question at the back for us, oh, and at the front here. Thank you very much. Thanks, Kyle.

Male Speaker: Thanks for your talk; it’s very interesting stuff. I was curious do you think — going back to smaller groups, I agree things probably head that way — do you think that it destroys our ability to move larger societies and make sure that they’re all aligned in what we’re performing as a society, to get together and really have a democracy, or do you think that that actually enables us to have a more meaningful democracy built around smaller groups of people who have a bigger idea?

Shamir Allibhai: No, I think, by and large, it’s worse. When I look at the 20th and 21st centuries, I think one of the big differences for us is that there has been a massive improvement in quality of life and life expectancy, and a decrease in infant mortality, and that has been, in our view, because of a kind of tacit agreement on the scientific method — that evidence-based conclusions should reign supreme. That’s how we’ve tackled some really, really large challenges in the last 100 years. What concerns me is that in a world of fakes, where we can’t have a consensus around what is fact, we will regress to a more tribalistic state, using the lens of the tribe or of past experiences to decide whether something is true or false, and I think that is definitely a step backwards. It is concerning to us. When people start using their own biases as a lens to view what is truth — and we’ve seen this in the past, in many, many situations — the kind of consistent march towards a more meritocratic society will be in jeopardy.

Caroline Hyde: To that end, we talk about democracy, but you’ve lived in many countries — Canada, the Middle East, Europe, America. What about those calling you from different nations, some of which aren’t democracies — does that factor in at all?

Shamir Allibhai: It’s not just democracies. I think many, many governments are concerned. The intel community all over is pretty concerned. They’ve seen this kind of technology rise over the last X years, and so they’re concerned about it, because a lot of decisions are made on this kind of evidence. In an intel context, if you have video evidence, do you send troops? What do you do based on that evidence? If the evidence is false, or there’s doubt about the evidence, it jeopardizes the subsequent decisions. So, definitely, people all over the world are concerned; it’s not just a democratic issue.

Shamir Allibhai: When you look at different countries and what’s important to them, I think everyone believes video is a key way to communicate — to your people, your supporters, it’s an important way to communicate. I got a question yesterday from a reporter asking whether ISIS could create a deepfake of al-Baghdadi to make it seem like he was actually still alive and to undermine the U.S.’s claim that they got him.

Caroline Hyde: And your answer?

Shamir Allibhai: Yeah. The specific question was whether they could try to fool the U.S., and I said that might not work in this example, because there was physical evidence. I think they would actually create a deepfake, and the purpose would be as a morale boost to their supporters. That the U.S. got their leader would be deflating, but they could show their leader purportedly still alive. And again, in a world where there’s no consensus around what is true and what is false, the supporters are naturally biased to believe ISIS leaders saying their head is still alive.

Shamir Allibhai: So that’s the lens, and it can really sow distrust. I think there are also people in the U.S. who would say, “I don’t trust this administration. Look at this deepfake. Maybe it is true, maybe they didn’t get al-Baghdadi,” in this case.

Caroline Hyde: Very poignant and in the news right now. We have a question just at the front. Thank you.

Henri Sangri: Hi, I’m Henri Sangri. In China, which is a censored world, are deepfakes a problem, and what are they doing about it?

Shamir Allibhai: A problem how so?

Henri Sangri: Are they common? […]

Shamir Allibhai: Yeah, yeah. There are more consumer apps creating deepfakes in a kind of humorous context, so that’s definitely increasing, and it’s for levity purposes. We haven’t seen a huge amount of deepfakes made to undermine the government yet. Again, this technology is still pretty novel — you’re still able, visually, to see whether something is a fake — so we haven’t seen it. We think it will happen, but we aren’t seeing a huge swath of them used for, say, malicious purposes in China right now.

Caroline Hyde: Let’s end on an optimistic note.

Shamir Allibhai: Yeah.

Caroline Hyde: What are you excited about in terms of the technology you’re building and the way in which we can counteract what, seemingly, could be quite scary?

Shamir Allibhai: Yeah. I mean, what excites me is that we’re actually having this conversation — that deepfakes today are not yet creating massive damage. Again, we were behind on fake news, and I’m excited that we’re actually having this discussion about synthetic audio, synthetic video, and the implications for society. I think that’s really the first step.

Shamir Allibhai: I’m seeing, with the stakeholders I mentioned in the video workflow, that people are really interested. Companies are really interested in coming up with a common, unified framework: how to address a deepfake, how to authenticate video, what to do when there is a fake. So that gives me optimism.

Caroline Hyde: Well, we’re excited to see how Amber Video leads that charge and, Shamir, thank you very much indeed. Please do give it up for Shamir Allibhai and for your questions.
