The Current Identity Verification Tech Stack Won’t Survive AI and Real Time Payments

Jelena Hoffart
4 min read · Oct 26, 2023


In the coming years, generative AI will be a native part of every software company and tech-enabled business. It will change not only the types of tasks we ask software to perform, but potentially the way software itself is built. We will see new functionality, interfaces, software stacks, and business models.

The reality is that while AI will drive massive gains in productivity, efficiency, engagement, and personalization, the total picture is likely to be much less rosy. AI will also drive massive “gains” in cyberattacks, scams, fraud, and data breaches.

The rise of generative AI has equipped fraudsters with an extremely potent tool for creating even more sophisticated scams. Even more worrisome, there is seemingly no end to the ways AI can be used to automate attacks, exploit existing anti-fraud technology, and do so at scale. While we can’t predict every cutting-edge evolution in fraud, let’s start with what we do know about how fraud is conducted in an industry we know well: financial services.

According to Verizon’s 2023 Data Breach Investigations Report, 86% of basic web application attacks on financial services companies involve stolen credentials like an email and password. As expected, once a malicious actor has those stolen credentials, they leverage them to gain access to a consumer’s personally identifiable information (PII), like a bank account number or Social Security number.

The overwhelming motive for obtaining these credentials is financial gain. Bad actors either sell the stolen credentials and PII directly on the dark web or take over the bank account themselves, which unlocks a host of opportunities to steal its contents or launder money. For example, the “market” price for an already opened and compromised Robinhood account is $150!

Figure 1: Dark web listing for compromised Robinhood account; Source: https://www.privacyaffairs.com/dark-web-price-index-2023/

Malicious actors obtain our passwords in part through password spray attacks, in which an adversary tries a small set of likely passwords across a wide array of accounts in an organization. Scary, right? But we can take comfort in knowing that this is a relatively manual and time-consuming process for scammers today, right? Wrong.
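To make the pattern concrete, here is a minimal, purely illustrative Python sketch of the spray logic. Everything in it is hypothetical: `attempt_login` is a stub standing in for a request to a target’s authentication endpoint, and the password list and accounts are made up.

```python
import time

# Password spraying inverts brute force: a FEW likely passwords are tried
# against MANY accounts, keeping each account under its lockout threshold.
# All names and data below are hypothetical placeholders.
COMMON_PASSWORDS = ["Fall2023!", "Password1", "Company123!"]
ACCOUNTS = [f"user{i}@example-corp.com" for i in range(500)]

def attempt_login(account: str, password: str) -> bool:
    """Hypothetical stand-in for a single login attempt."""
    return False  # stubbed out in this sketch

def password_spray(accounts, passwords, pause_seconds=1800):
    hits = []
    for password in passwords:      # outer loop: a handful of passwords
        for account in accounts:    # inner loop: a wide array of accounts
            if attempt_login(account, password):
                hits.append((account, password))
        time.sleep(pause_seconds)   # long pauses between rounds evade lockouts
    return hits
```

The long pauses and the need to curate a plausible password list are exactly the “manual and time-consuming” parts, and exactly the parts that generative AI and automation remove.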

One way fraudsters can leverage generative AI is by asking it to generate a list of the most commonly used passwords one could expect at a company in a given city, industry, and so on. This is an exceptionally simple example of how generative AI can be used by malicious actors to conduct fraud at scale. In the same way that we leverage AI to improve our productivity and efficiency, fraudsters will too! Productivity gains are a key objective in fraud: larger-scale scams mean more stolen credentials, which means more money in the fraudster’s pocket.

Deepfakes (AI-generated synthetic media) of celebrities like Tom Cruise and Gwyneth Paltrow are commonplace in meme culture. There is seemingly no end to the artistic creations that can be spun up with deepfake tech. However, we see a newer and more sinister use case for deepfakes than memes or alternative art. For example, one of the main ways financial services companies verify potential users is a process called document verification with selfie liveness detection: the user takes a photo of a passport or driver’s license, which is then matched against a selfie taken by the same user. The goal is to prove that the identity provided is real and that the person opening the account is the same person, which creates a tremendous opportunity for deepfake tech.
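As a rough illustration, the matching step often boils down to comparing face embeddings. The sketch below assumes hypothetical `extract_portrait`, `embed_face`, and `passes_liveness` helpers standing in for a vendor’s computer-vision pipeline, and the 0.8 threshold is an assumed value, not any vendor’s actual cutoff.

```python
import numpy as np

MATCH_THRESHOLD = 0.8  # assumed cosine-similarity cutoff, not a vendor value

def extract_portrait(document_image: np.ndarray) -> np.ndarray:
    """Hypothetical: crop the portrait region out of the ID photo."""
    return document_image  # stubbed: pass the image through

def embed_face(image: np.ndarray) -> np.ndarray:
    """Hypothetical face-embedding model returning a unit-length vector."""
    vec = image.astype(np.float64).ravel()[:128]
    return vec / np.linalg.norm(vec)  # stubbed stand-in for a real embedding

def passes_liveness(selfie: np.ndarray) -> bool:
    """Hypothetical liveness check (blink, depth, texture cues, etc.)."""
    return True  # stubbed: this is the check a deepfake must defeat

def verify_user(document_image: np.ndarray, selfie: np.ndarray) -> bool:
    if not passes_liveness(selfie):
        return False
    doc_vec = embed_face(extract_portrait(document_image))
    selfie_vec = embed_face(selfie)
    similarity = float(np.dot(doc_vec, selfie_vec))  # cosine sim (unit vectors)
    return similarity >= MATCH_THRESHOLD
```

Note what this flow implicitly trusts: that the selfie comes from a live camera pointed at a real human. A generated face that clears the liveness check and matches the document portrait defeats the whole scheme.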

We’ve heard repeatedly from founders battling fraud that, over the last few months, the quality of deepfakes and the volume of these new attacks have reached a tipping point. AI has enabled highly realistic deepfake photos and videos that are already spoofing this existing technology, which is considered the most secure method of identity verification we have today. Worse yet, the U.S. has just launched real-time payments. Not only can fraudsters leverage AI to conduct fraud at scale and innovate their methods at lightspeed, but they can now access our digital accounts and move money out instantaneously and irreversibly. These are just two examples of the many ways AI is empowering fraudsters.

In short, AI has already rendered some of our best-in-class technology useless, and there is no reason to think this will change. Unfortunately, our adversaries are just as quick to adapt to the newest technology and find a new vantage point from which to attack. Every company globally that enables consumers to interact and transact digitally is ill-equipped to battle fraudsters with the solutions on the market today. In eCommerce alone, US companies experienced $41B in fraud losses in 2022, a figure expected to grow to $66B by 2028.

The threat that AI and real-time payments will exacerbate already significant levels of fraud is not just immediate but, in our minds, existential. However, as venture investors, these are exactly the seismic shifts we look to invest alongside, and they will no doubt produce generational businesses.

--


Jelena Hoffart

I write about and invest in all things identity, fraud, security and compliance