Cheap Fakes beat Deep Fakes
Deepfakes are risky tools for information warfare, but they offer exciting new capabilities for offensive security
I’ve always been sceptical of deepfakes. What are they good for? I’ve never understood the excitement over their perceived utility for disinformation in information warfare: information warfare does not need deepfakes, because cheapfakes are more than enough. Finally, someone has found a use for deepfakes as offensive cyber tools, so let’s take a deep dive into deepfakes!
That time I totally met a famous person
The Deepfake Hype Train: Disinformation
Usually the threat of deepfakes is presented as an information warfare challenge, one that will make disinformation much harder to combat. This assessment is on display everywhere. Here is one recent example: Congress grapples with how to regulate deepfakes.
The threat of deepfakes for information operations is minimal. Generally speaking, people do not engage in rhetoric to change their own ideas, but to build coalitions with other people. When presenting data to support their arguments, they look only for examples that appear to discredit counter-arguments, or that seem to support their own. There is no reason why deepfakes would be more effective at this than existing cheapfake techniques, such as manipulated, miscontextualised, or decontextualised audiovisual media.
Deepfakes are a liability in information warfare because they can easily be exposed as false: a lie. One of the fundamental rules of information warfare is that you never lie (except when necessary). Deepfakes are detectable as artificial content, which reveals the lie. This discredits the source of the information and, with it, the rest of their argument. For an information warfare campaign, using deepfakes is a high-risk proposition.
I have always been sceptical of the utility of deepfakes for disinformation. They are simply unnecessary. They provide nothing that any threat actor cannot achieve already with normal media alteration. The history and implications of altered, decontextualised, or miscontextualised media have been studied, and excellent books are available on the subject. Propaganda and Information Warfare in the Twenty-First Century, for example, is highly recommended.
Scepticism of deepfakes as a disinformation super weapon is finally going mainstream. From articles inspired by my public statements to third-party research, scepticism of deepfakes as disinformation tools is gaining publicity.
The real deepfake threats
Individuals are at high risk from deepfake attacks because they are more vulnerable to targeted attacks. Alex Stamos, another early deepfake sceptic, has an excellent Twitter thread on this here.
Exploiting poor business security procedures and processes. Security is more than just technological access controls; it is also the processes to be followed and the human factor. Without all of these in place and working in cohesion, the true security posture is weak.
Companies (and people at high risk) must have security procedures that mitigate insider threats, for example, access control processes that prohibit any single person from arbitrarily moving money. Without such procedures in place, they are vulnerable to exploitation via insider threat.
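The two-person rule described above can be sketched as a simple policy check. This is a minimal illustration, not a real payments API; the function and role names are hypothetical.

```python
# Hypothetical sketch of a two-person rule for money movement:
# no single person, however senior, can authorise a transfer alone.

def transfer_allowed(amount: float, approvers: set, requester: str) -> bool:
    """Allow a transfer only with two distinct approvers, neither of whom
    is the person requesting the transfer."""
    independent = approvers - {requester}
    return len(independent) >= 2

# Usage: a CFO requesting a wire still needs two other sign-offs.
assert transfer_allowed(50_000, {"cfo", "controller", "treasurer"}, requester="cfo")
assert not transfer_allowed(50_000, {"cfo"}, requester="cfo")
```

The design point is that the check is structural, not based on recognising anyone: a deepfaked voice cannot satisfy it, because it never asks "does this sound like the executive?"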
High-risk individuals frequently turn to technological solutions to mitigate threats such as impersonation (e.g. multi-factor authentication). They should also implement procedures to prevent being recruited as unwitting agents (e.g. vetting anyone who approaches them). This subject is too rich to explore in this post.
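As a concrete example of the technological side, here is a from-scratch sketch of a time-based one-time password (TOTP, RFC 6238), the scheme behind most authenticator apps. It is an illustration of the mechanism, not a production implementation.

```python
# Sketch of a TOTP second factor (RFC 6238), built on HOTP (RFC 4226).
# Illustrative only; real deployments should use a vetted library.
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """HMAC-based one-time password for a given counter value."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation per RFC 4226
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, timestamp=None, step: int = 30, digits: int = 6) -> str:
    """Time-based one-time password: HOTP over the current 30-second window."""
    t = int(time.time() if timestamp is None else timestamp)
    return hotp(secret, t // step, digits)

# RFC 6238 test vector: SHA-1, 8 digits, T = 59 seconds.
assert totp(b"12345678901234567890", timestamp=59, digits=8) == "94287082"
```

Because the code is derived from a shared secret and the clock, a deepfaked voice or face gives an impersonator nothing here; that is exactly why such factors complement, rather than replace, the procedural controls above.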
Insider threats come in many forms, here is a partial list:
- Unwitting agents recruited by external malicious threat actors. Email is a frequent vector for recruitment, sometimes as various forms of phishing, other times simply as a communications channel.
- Impersonation, often done via email, such as the many 419 scams against political campaigns and other companies. There are frequent news reports of companies transferring money by paying false invoices, or by redirecting the payment account for real invoices. Famously, HBGary was compromised by an impersonation attack that gave the threat actors a highly privileged account password.
- Malicious agents such as disgruntled employees, or employees recruited by threat actors, and even malicious threat actors gaining access via employment. Examples are too numerous to mention.
- Human error. Humans make mistakes, this is life.
There are many ways and means by which a company is exposed to insider threat attacks, but most of the discussion focuses on deliberately malicious threat actors. Little is said about impersonation or unwitting agents.
Exploiting poor access controls against insider threats
Deepfakes bypass certain types of controls against impersonation, such as confirmation via voice authentication. If the only protection against transferring large sums from company accounts to an arbitrary account is someone “recognising the voice of an executive over a phone call”, that company is vulnerable to insider threats.
This is one attack where deepfakes have great utility: exploiting bad access controls, such as authorisation by voice recognition. It is an attack now being used in the wild.
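The fix can be stated as policy: treat voice (and other spoofable signals) as worth nothing on their own, and require multiple independent strong factors before money moves. A hedged sketch, with entirely hypothetical factor names:

```python
# Illustrative policy check: deepfake-spoofable signals count for nothing
# by themselves. Authorisation needs at least two independent strong
# factors, e.g. a hardware token plus a callback to a pre-registered
# number. All factor names here are hypothetical.
SPOOFABLE = {"voice_recognition", "caller_id", "email_display_name"}

def authorisation_ok(presented_factors: set) -> bool:
    strong = presented_factors - SPOOFABLE
    return len(strong) >= 2

# The in-the-wild attack: a cloned executive voice over the phone.
assert not authorisation_ok({"voice_recognition"})
# Voice plus a single strong factor is still not enough.
assert not authorisation_ok({"voice_recognition", "hardware_token"})
# Two independent strong factors pass.
assert authorisation_ok({"hardware_token", "callback_known_number"})
```

The point is not the code but the shape of the control: a deepfake defeats “does this sound right?” checks, and it defeats nothing else.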
Deepfakes are useful exploits, for some use cases
As tools of information warfare, deepfakes are severely flawed. They are unnecessary for supporting a threat actor’s propaganda attack, and they present a hazard that could jeopardise the threat actor’s channel and messaging.
Individuals and companies are at much greater risk from deepfake-based attacks. This is something that will need to be addressed. From a security point of view, the solution is generally to implement procedures that mitigate the deepfake capability. For individuals facing persecution, I have no answers: Internet persecution is an unsolved problem.