Updated September 13, 2019: Der Spiegel’s reporter confirms that the insurance company had “zero evidence for their claims”, thus confirming the thesis of this blog post
Recently, the Wall Street Journal reported that “Fraudsters Used AI to Mimic CEO’s Voice in Unusual Cybercrime Case” (paywall). The Washington Post then followed up on the story and released some more details. The story has gained a lot of momentum and was quoted by many other publications, which crowned it as the “first deepfake AI heist”. However, I find the story far from convincing.
The story in a nutshell
“a UK energy company’s chief executive was tricked into wiring €200,000 (or about $220,000 USD) to a Hungarian supplier because he believed his boss was instructing him to do so.”
Can it be done by “deepfake AI”?
Yes! Or maybe. Or in the near future. But this is not the question we should ask. The real question is what actually happened. Was deepfake AI actually used in this scam or not?
Many of the stories have focused on the maturity of the technology, and on whether deepfake AI technology has reached the level at which it allows the scammer to create a high fidelity (or at least good enough) voice impersonation of a CEO in realtime, based on a limited training set.
Personally, I believe that in practice, pulling off a good impersonation attack based on a few very short recordings is harder than some publications present it. But even if it is possible, that does not mean it is what actually happened. At most, it means that “deepfake AI” could have been used. When we read a news story, we expect it to describe not what could have happened, but what actually happened.
How to objectively identify a deepfake AI scam
There are only two ways to objectively identify this attack as a deepfake AI scam:
- Catch the fraudsters and learn about their modus operandi. That did not happen, however, as the report mentions “Investigators haven’t identified any suspects”.
- Have some recordings of the scam calls and analyze them to find artefacts specifically related to deepfake AI tools. The report, nonetheless, explicitly states that “the call was not recorded”.
This leaves us with only the subjective testimony of the victim himself saying he “recognized his boss’ slight German accent and the melody of his voice”. As the WaPo story rightfully mentions, “When you create a stressful situation like this for the victim, their ability to question themselves for a second … goes away”. It’s certainly not an ideal setting for an accurate recollection.
Even if we accept this evidence as valid, it does not tell us anything about the technique used by the attackers. It might as well be a human impressionist mimicking the voice of the CEO.
Follow the money
Interestingly, and perhaps counterintuitively, it seems like all parties involved in this story are biased towards preferring the deepfake AI narrative:
- The victims (both the UK CEO and his company): Falling for a regular social engineering attack is embarrassing. Falling for a “deepfake AI scam” sounds like a “force majeure” and helps them save face.
- The media: A regular social engineering attack is boring. Deepfake AI? Exciting!
- The insurance company: They are the big winners! They get free, very positive PR in major publications. They are the good guys who paid in full and saved the day. Furthermore, they are promoting their offering: a CEO reading the story would certainly consider buying insurance against such a “force majeure” attack.
Another major reason to be suspicious in this case is the fact that the story was reported by the insurance company, and no external source (e.g. law enforcement) has confirmed the stated facts.
Judging by the evidence presented in the reports, the whole story is based on the subjective description provided by the victim, who was under severe stress. The story itself is exclusively reported by the insurance company, which has a clear interest in the deepfake AI scam framing.
It might really have been a deepfake AI attack, but the doubts I cast here have not only been left unresolved; they have never even been acknowledged or otherwise clearly reflected in the news reports.
Why does it matter?
Because now more than ever, truth matters. Because we need to know what actually happened, not what could have happened. And when important publications such as the WSJ and WaPo present this story as fact, it becomes a fact, even if it isn’t. Our resources, such as attention and budget, are limited. If we invest them in threats that are not happening, we will lack them for the threats that actually are.
This is especially true for controversial topics, such as AI and its growing impact on our lives. It’s important that the discussion that may shape our future be based on established facts and not on biased narratives.
As an anecdote, many of the reports cross-reference an earlier report by the Israel National Cyber Directorate (INCD) about such deepfake AI scams. I have read the original report (in Hebrew), and it clearly states that the INCD had not yet witnessed any such attacks.
I hope that important publications such as the WSJ and WaPo will either provide more evidence in support of the deepfake AI theory or amend their stories to reflect the uncertainty of this narrative.