Face spoofing trickery raising the bar for cybersecurity in 2020
By 2027, the online identity verification market is expected to grow to $4.4 billion, and economies may gain up to 13% in added economic value by implementing digital identity by 2030. At the same time, the safety of, and legislation around, using artificial intelligence (AI) for digital authentication are still under debate, with serious ramifications for cybersecurity.
Technologies that make life easier also entail inevitable security risks with the potential to exploit sensitive data stored and sent through digital channels.
Cybercrime currently costs $2.9 million every minute on the internet, with businesses paying $25 per internet minute due to security breaches. Digital ID theft through facial spoofing, and document fraud at the time of verification, are persistent threats that compromise sensitive customer data.
AI may not yet have reached the point where companies can rely on it entirely for fraud prevention. Perhaps that is why Shufti Pro harnesses human expertise, or Human Intelligence (HI), to supervise its AI.
Shufti Pro caught 42 spoof attacks in 2019 amidst rising cyber threats to financial institutions and other leading industries.
(Infographic included at the end)
Hackers have found ways to infiltrate critical user databases and conceal both their identity and location. The popular CEO fraud (carried out through email spoofing) and attacks on high-profile companies have cost millions of dollars in losses in recent years. Using IP spoofing and distributed denial of service (DDoS) attacks, fraudsters can send or request data without being easily tracked. Other threats include caller ID spoofing, text message spoofing, GPS spoofing, and man-in-the-middle attacks.
However, digital fraud doesn’t always need to be technically advanced to infiltrate online accounts. Hackers have deployed simple attacks to spoof facial recognition technology and gain unauthorised access to digital accounts. Well-known practices among attackers include uploading screenshots of faces and identity documents, or submitting tampered or photoshopped images, to get into digital financial accounts.
According to a NIST report, facial recognition software has improved 20-fold in its accuracy at matching a photograph against a database. But as it turns out, there are still ways to trick face-based biometric matching algorithms. Facial replacement technology, de-identification systems and deepfakes all threaten the rapidly widening landscape of identity services.
Here are a few threats you should know about
A facial spoofing attack involves tricking facial recognition software into improperly recognising an intruder as an authorised user. These presentation attacks are an open security issue and more susceptible to crime than any other biometric trait due to the ease with which faces can be accessed and reproduced.
By far, this is one of the most critical cyberthreats to digital users worldwide given the pervasive use of facial screening software across all industries. But what makes it so vulnerable to security breaches? The answer is in the technology itself.
Deep learning techniques teach algorithms to recognise repetitive patterns in large databases. The resulting identification process therefore depends largely on the unique data points the algorithm picks up, but those same data points can also be duped into producing the wrong decision.
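To make the weakness concrete, here is a minimal, illustrative sketch (not Shufti Pro's actual pipeline) of how a face matcher typically works: the network reduces a face to a vector of data points (an embedding), and a probe is accepted if it is close enough to the enrolled vector. A printed photo of the same face produces nearly the same data points, which is exactly why appearance checks alone can be duped.

```python
import numpy as np

def cosine_similarity(a, b):
    # Similarity between two face embeddings (the unique "data points"
    # a deep network extracts from a face image).
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def matches(enrolled, probe, threshold=0.8):
    # Accept the probe face if its embedding is close enough to the
    # enrolled one. A spoof photo of the same face yields almost the
    # same embedding, so this check alone cannot stop presentation attacks.
    return cosine_similarity(enrolled, probe) >= threshold
```

The threshold and embedding size here are placeholders; production systems tune them against false-accept and false-reject rates.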
How spoof attackers trick verification software
1. 3D Mask Attack
Using 3D masks, imposters try to mimic facial movements to trick depth sensors as part of advanced face spoofing attempts. These are either life-size or miniature masks in paper or silicone that can be 2D, 2.5D or 3D.
2. Eye-cut Photo Attack
To simulate blinking, the trickster cuts the eye regions out of a printed photo and holds it in front of the camera so that real, blinking eyes show through.
3. Distorted Photo Attack
A printed photo is bent or moved in different directions to simulate facial movements.
4. Print Attack
Pictures (or screenshots of photos) are taken from the internet without users’ consent and printed or displayed on a device to stand in for a real-life face.
5. Video Attack
A looped video of a face is used to simulate life-like facial movements and behaviour on a large screen. It looks more natural and provides data points that are much closer to a real face.
Document fraud has also emerged as a popular means of breaking into personal digital accounts. Trends in cybercrime reveal the use of fake documents at verification (such as internet-downloaded IDs, unmatched MRZ codes, and expired documents). Cyber attackers have crept into the electronic world of finance simply by cloning documents or misleading verification software to bypass its requirements.
List of frauds detected by Shufti Pro
Here’s a list of frauds detected in the form of invalid documents submitted to mislead the software into validating them as genuine proof:
1. Expired documents
Users submit old documents available in hard copy or online. If undetected, this results in unauthorised access.
2. Forged/photoshopped document
Documents are deliberately forged or modified to change names, codes or numbers in favour of other users. Stolen, internet-downloaded or photoshopped documents are used for the same purpose.
3. Document mismatch
Users are required to submit more than one document to verify identity and authorise access. For instance, passport and ID documents must match for complete verification, along with user attributes such as date of birth, MRZ code and country codes.
4. Modified documents
Watermarks, stamps and other overwriting on documents are common signs of fraudulent verification attempts. Concealed documents, or printed text in consent documents, are also used by tricksters to bypass handwriting analysis.
5. Tampered documents
Scratched, torn, folded or punched documents are unacceptable and are instantly flagged as invalid attempts.
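One standard check behind the "unmatched MRZ codes" mentioned above is the ICAO Doc 9303 check digit: each MRZ field (birth date, expiry date, document number) carries a digit computed from the field itself, so a photoshopped value no longer matches. The sketch below is a minimal illustration of that published algorithm, not Shufti Pro's implementation.

```python
def mrz_check_digit(field):
    # ICAO 9303 check digit: characters map to values (0-9 as themselves,
    # A=10..Z=35, filler '<' = 0), weighted cyclically by 7, 3, 1;
    # the check digit is the weighted sum modulo 10.
    weights = (7, 3, 1)
    total = 0
    for i, ch in enumerate(field):
        if ch.isdigit():
            value = int(ch)
        elif ch == "<":
            value = 0
        else:
            value = ord(ch) - ord("A") + 10
        total += value * weights[i % 3]
    return str(total % 10)

def field_is_valid(field, claimed_digit):
    # A forged field (e.g. an altered birth date) fails this comparison.
    return mrz_check_digit(field) == claimed_digit
```

For example, the date field "520727" yields check digit "3"; changing even one character breaks the match.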
AI tools for the best defence
A surge in face spoof attacks has indicated the need for better security protocols online. A secure and safer online experience will be possible if stricter and more efficient identity verification solutions, backed by comprehensive AI and HI technologies, can safeguard sensitive customer information and provide a complete fraud cover.
Synergised Human Intelligence & Artificial Intelligence
While machines are learning to detect sophisticated 3D attacks, trained human experts have been observed to detect data anomalies at a higher success rate. Verification is completed by experts in real time, and proof of every verification is recorded on video as evidence. The entire process is designed to maximise user experience and to be swifter and more accurate than manual customer onboarding.
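One common way to combine AI and HI (a generic pattern, assumed here rather than taken from Shufti Pro's documentation) is confidence-based routing: the model decides automatically only when it is confident, and escalates borderline verifications to a human expert.

```python
def route_verification(ai_confidence, threshold=0.90):
    # Hypothetical AI + HI pipeline: auto-decide only at high confidence,
    # otherwise escalate the verification to a trained human reviewer.
    if ai_confidence >= threshold:
        return "auto-accept"
    if ai_confidence <= 1 - threshold:
        return "auto-reject"
    return "human-review"
```

The 0.90 threshold is a placeholder; in practice it is tuned so that the human queue stays small while catching the anomalies machines miss.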
Augmented Liveness Detection
AI-based software employs texture-based countermeasures to detect and combat facial spoof attacks. Anti-spoofing technology is integrated into the API to check for fraudulent verification attempts. Security features that verify biological identifiers for authenticity include eye/lip movement analysis, prompted motion instructions, and texture/reflection detection applied to data collected by biometric sensors.
Features such as liveness detection, microexpression analysis and 3D depth perception can distinguish a live image from a 2D/3D image or any other digital representation of an authorised user’s face.
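A classic texture-based countermeasure of the kind referenced above is the Local Binary Pattern (LBP): prints and screens leave micro-texture artefacts that shift the distribution of LBP codes relative to live skin. The sketch below (assuming a grayscale NumPy image; real systems feed the histogram to a trained classifier) shows how that texture feature is computed.

```python
import numpy as np

def lbp_codes(gray):
    # 8-neighbour Local Binary Pattern: each interior pixel's neighbours
    # are thresholded against the centre pixel and packed into an 8-bit code.
    h, w = gray.shape
    codes = np.zeros((h - 2, w - 2), dtype=np.uint8)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    center = gray[1:-1, 1:-1]
    for bit, (dy, dx) in enumerate(offsets):
        neigh = gray[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes |= (neigh >= center).astype(np.uint8) << bit
    return codes

def texture_histogram(gray):
    # Normalised 256-bin histogram of LBP codes: the texture feature a
    # spoof-detection classifier would consume to separate live faces
    # from print or screen replays.
    hist, _ = np.histogram(lbp_codes(gray), bins=256, range=(0, 256))
    return hist / hist.sum()
```

On a live capture versus a recaptured print of the same face, these histograms differ measurably, which is what the downstream anti-spoofing classifier exploits.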
Amplifying security measures
The US facial recognition industry is expected to grow to $7 billion by 2024, with the most common uses in financial services, healthcare, marketing and surveillance. The technology is not just being used to increase business opportunities, but also as a security measure to ensure accuracy and efficiency in processes.
The large amounts of digital data collected through authentication portals also serve as a crucial resource for analysis, making predictive analyses and forecasts of future trends easier. With online digital verification clients across a wide range of industries, the responsibility to protect user data is manifold. And AI is spearheading the race for long-term security solutions.