What is social engineering?
Social engineering (SE) is a method of compromising systems by manipulating people rather than exploiting technical flaws: a malicious actor deceives a victim into handing over data or taking an action, typically for financial gain. Over the years, SE has become one of the most prevalent means of compromising systems and one of the largest contributors to global cybercrime.
According to some reports, U.S. banks lost ~$1.6 billion due to “social engineering wire transfers” between 2013 and 2016¹.
Social engineering attacks take many forms: business email compromise, SIM swapping, impersonation over the phone, opening financial accounts using another person’s data, and more. SE attacks are among the most common cyber attacks, and things are about to get a lot worse.
As technology has evolved, real time attacks with better automation have become readily available, making SE a progressively more dangerous threat. The latest such advancement is deepfake.
Wikipedia describes “deepfakes” as:
… synthetic media in which a person in an existing image or video is replaced with someone else’s likeness using artificial neural networks. They often combine and superimpose existing media onto source media using machine learning techniques known as autoencoders and generative adversarial networks
As of this writing, deepfake is not widely used in SE attacks; the technology is not yet accessible or user-friendly enough. As such, we are still relatively safe from broad-scale attacks in which a malicious actor leverages deepfake technology to impersonate someone in real time. At the rate things are going, this won’t be the case for long.
To illustrate how fast deepfake technology has advanced, in 2016 one needed 20 minutes of recordings to synthesize someone’s voice. By 2018, that number was reduced to 5 seconds². Lyrebird is an example of such software, and the industry around speech and video synthesis is evolving extremely quickly.
A computer science professor from the University of Southern California believes that we may develop “perfect” deepfake by the end of 2020³.
There have already been reported cases of financial fraud which used deepfake to execute a SE attack⁴:
Criminals used artificial intelligence-based software to impersonate a chief executive’s voice and demand a fraudulent transfer of €220,000 ($243,000) in March in what cybercrime experts described as an unusual case of artificial intelligence being used in hacking.
If you don’t think that’s bad enough, consider the fact that deepfake is already being used to manipulate voters. Deepfake technology is a serious security threat, and it’s easy to imagine how a given person could leverage it to advance their agenda. The examples outlined here are just the beginning of what’s to come.
Current state of identity verification
The problem with the impersonation element of SE stems from the fact that the methods for verifying individuals’ identities are not robust enough.
Currently, there are many ways services can try to verify your identity:
- a static 4-digit PIN
- personal information in the form of security questions
- details about products you have purchased (e.g. a serial number)
- a 2FA code sent via SMS or email
- a TOTP code (authenticator app tokens)
The problem with using static passwords, personal information, or even details about the products someone uses is that, for a large segment of the population, they are relatively easy to obtain or guess. To make matters worse, they rarely have an expiration date.
Of the options above, TOTP is almost certainly the most robust and secure. Biometrics might seem like an alternative, but using voice or physical attributes as a form of authentication is itself under threat from deepfake technology, which makes them ever easier to replicate.
Like most things in security, there isn’t a silver bullet that addresses every concern, but there are certainly better ways to do identity verification between individuals than the ones we are using now. One approach that works well is using 2FA tokens (TOTP) for human-to-human verification over any communication channel. This already works well for proving a user’s identity to a machine, so why not take what works and apply it to humans?
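To make the idea concrete, here is a minimal sketch of what human-to-human TOTP verification could look like (an illustrative implementation of RFC 6238 using only the Python standard library, not VeriPal’s actual code): two people exchange a shared secret out of band once; later, over any channel, one party reads out the current six-digit code and the other checks it against the same secret.

```python
import base64
import hmac
import struct
import time


def totp(secret_b32, at_time=None, digits=6, step=30):
    """Compute an RFC 6238 TOTP code from a base32-encoded shared secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if at_time is None else at_time) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), "sha1").digest()
    offset = digest[-1] & 0x0F  # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)


def verify(secret_b32, claimed_code, at_time=None, window=1):
    """Check a code read out loud, tolerating +/- `window` 30-second steps
    to absorb clock drift and the time it takes to say the code."""
    now = time.time() if at_time is None else at_time
    return any(
        hmac.compare_digest(totp(secret_b32, now + i * 30), claimed_code)
        for i in range(-window, window + 1)
    )
```

With the RFC 6238 test secret (the ASCII string `12345678901234567890`, base32-encoded), `totp(secret, at_time=59)` yields `"287082"`, the last six digits of the published eight-digit test vector. The drift window is the usual trade-off: wider windows are friendlier to slow phone conversations but extend how long a shoulder-surfed code stays valid.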
VeriPal released an iOS app (Android coming soon) which lets you generate your own 2FA secrets so you can verify individuals’ identities, and is working on solutions to help businesses fight SE attacks such as SIM swapping and prepare for the oncoming wave of sophisticated deepfake SE attacks.
You can learn more about VeriPal and how human-to-human 2FA works here.