Unearthing Critical Security Flaws in Check Processing Systems

Harrison Chase
6 min read · Nov 10, 2020

In today’s smartphone-driven world, most of us have likely become accustomed to depositing checks into our bank accounts from the comfort of our homes. Some studies even suggest that the convenience of online check processing systems is one of the primary reasons behind the growing shift toward mobile banking. But what if we told you these very same systems also make life easier for fraudsters looking to pad their paychecks?

As members of our team here at Robust Intelligence discovered, altering just a few pixels in an image of a check can fool state-of-the-art AI models into recognizing incorrect deposit amounts. In a recent research paper, we successfully tricked real-world check processors using software licensed from industry-leading providers who handle billions of dollars in transactions across major US banks each year. And it’s not just banks that are vulnerable — the same algorithm can be used to attack systems that process images such as passports, receipts, and license plates.

In this blog post, we provide a deep dive into our work on attacking automated check processors and how it fits into our larger mission of defending AI systems against mission-critical failures.

What are Adversarial Images?

From the iPhone’s Face ID feature to the sensors that power self-driving cars, image classification models – AI systems that take in images and identify what they represent – have become nearly ubiquitous. Which makes it all the more concerning that such models are far from foolproof. In research settings, the common practice for “attacking” an image classification system consists of subtly distorting the value of individual pixels in an image before feeding it into the model. Since the resulting noise is virtually imperceptible to the human eye, one would reasonably expect to get back the original prediction. Instead, even industry-leading models tend to misclassify images transformed in this way, outputting predictions that are clearly and, in some cases, comically incorrect.

Example of an adversarial attack. While the two images are visually indistinguishable, the pixel-level noise causes the image on the right to be misclassified as a “gibbon” with high confidence.
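
To make the idea concrete, here is a minimal sketch of the classic gradient-based recipe behind examples like the one above, assuming a PyTorch classifier, a batched input tensor with pixel values in [0, 1], and white-box access to gradients. It is illustrative only and is not the attack we describe later in this post.

```python
# A minimal FGSM-style perturbation sketch (illustrative, not the attack
# discussed later in this post). Assumes `model` is a PyTorch classifier,
# `image` is a batched tensor in the [0, 1] range, and `label` is a tensor
# of class indices.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, label, epsilon=0.01):
    """Nudge every pixel by at most `epsilon` in the direction that increases
    the loss for the true label; the change is imperceptible, but the model's
    prediction can flip entirely."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()
```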

Such inputs, known as adversarial images, raise serious concerns about the safety of AI models deployed in mission-critical settings such as biometric authentication, medical diagnosis, or autonomous vehicles. The real-world repercussions of security vulnerabilities in these systems could range from identity theft to serious injury or even death. It’s not surprising, then, that designing, understanding, and protecting against adversarial attacks has received the lion’s share of attention from the AI research community in recent years.

Challenges in Attacking Check Processors

Of course, even if you’ve heard about adversarial images before, you might be skeptical that such approaches can actually be used to fool real-world systems. Indeed, successfully attacking check processing systems turns out to be far trickier in practice than the above example might suggest, for two domain-specific reasons.

First, the vast majority of research on adversarial attacks has focused on color or grayscale images, where every single pixel can take on a wide range of possible values. This large attack space effectively allows an attacker to “hide” the adversarial noise. Perturbing a single white pixel from #FFFFFF to #FFFFEE, for instance, would not lead to a visible change in the image, but many such subtle perturbations applied together can easily push the image over the model’s decision boundary, triggering a misclassification.
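
For a sense of just how small such a perturbation is, here is the arithmetic behind that example, using only the two hex colors mentioned above:

```python
# A quick back-of-the-envelope check on the #FFFFFF -> #FFFFEE example above.
original = (0xFF, 0xFF, 0xFF)   # pure white
perturbed = (0xFF, 0xFF, 0xEE)  # blue channel nudged down slightly

max_channel_change = max(abs(a - b) for a, b in zip(original, perturbed))
print(max_channel_change / 255)  # ~0.067, i.e. under 7% of one channel's range
```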

In the case of check processing systems, however, images of checks are binarized before being fed into the model that outputs the transaction details. As a result, every pixel in the model input can take on just two possible values: 0 (black) or 1 (white). Existing attack algorithms either can’t operate on this restricted search space or result in images where the adversarial noise is patently visible to humans. Attacking binarized inputs requires additional heuristics to tackle this issue; for instance, one can constrain the algorithm to only modify pixels on the boundary of black and white regions of the image, where they’re less likely to be noticed.
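
A minimal sketch of that boundary heuristic might look like the following, assuming the binarized check arrives as a 2-D NumPy array of 0s and 1s (the function name here is ours, not from the paper):

```python
# A minimal sketch of the boundary-pixel heuristic described above, assuming
# the binarized check is a 2-D NumPy array of 0s (black) and 1s (white).
import numpy as np

def find_boundary_pixels(binary_image):
    """Return the (row, col) coordinates of pixels that have at least one
    4-connected neighbor of the opposite color; flips there blend into the
    edges of existing strokes instead of appearing in blank space."""
    padded = np.pad(binary_image, 1, mode="edge")
    center = padded[1:-1, 1:-1]
    neighbors = [padded[:-2, 1:-1], padded[2:, 1:-1],   # up, down
                 padded[1:-1, :-2], padded[1:-1, 2:]]   # left, right
    differs = np.zeros(center.shape, dtype=bool)
    for neighbor in neighbors:
        differs |= (neighbor != center)
    return np.argwhere(differs)
```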

The second challenge of working with checks is that all digital check cashing systems run two independent models, known as CAR and LAR (Courtesy and Legal Amount Recognition), on every input. These two models, which read the transaction amount written in numerals and words respectively, work to verify each other’s predictions; in order for a check to be successfully processed, CAR and LAR must output identical transaction amounts. Therefore, constructing an adversarial attack from an image of a check requires one to modify the image in a way that fools both models into making the same exact misclassification. While turning the digit 1 into a 7 might be relatively straightforward, turning a check for $1,000 into one for $7,000 would also require a fraudster to perturb the words “one thousand” such that the LAR model now reads the phrase as “seven thousand.”
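
In rough pseudocode, the cross-check behaves like the hypothetical sketch below; the `predict` interface and the assumption that both models return a normalized dollar amount are ours, for illustration only.

```python
# A simplified, hypothetical sketch of the CAR/LAR cross-check described
# above; real systems expose different interfaces.
def process_check(check_image, car_model, lar_model):
    """Accept the check only when the numeric (CAR) and written-out (LAR)
    amounts agree; otherwise route it for manual review."""
    car_amount = car_model.predict(check_image)  # amount written in numerals
    lar_amount = lar_model.predict(check_image)  # amount written in words
    if car_amount == lar_amount:
        return {"status": "accepted", "amount": car_amount}
    return {"status": "manual_review"}
```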

The SCAR Attack Algorithm

At RI, we were able to develop a novel attack algorithm called SCAR that can exploit vulnerabilities in check processing systems despite these domain-specific challenges. Given a binarized picture of a valid check, SCAR is able to simultaneously fool CAR and LAR models into making compatible misclassifications while modifying just a small handful of pixels. For example, flipping the pixels shown in red below caused an industry-standard check processing system to recognize a $701 transaction from a check made out for just $401:

A check for $401 misclassified as $701 with high confidence following a small number of pixel flips (shown in red).
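
At a very high level, a black-box attack in this setting can be framed as a greedy search over candidate pixel flips. The following is a heavily simplified, hypothetical sketch of that framing, not the actual SCAR algorithm: it observes only hard-label predictions and commits the single flip that increases how many of the two models read the target amount.

```python
# A heavily simplified, hypothetical black-box pixel-flipping loop, for
# illustration only (not the SCAR algorithm from the paper). Assumes the
# check image is a 2-D array of 0s and 1s and that both models expose a
# hard-label `predict` method returning a dollar amount.
def agreement_score(img, car_model, lar_model, target_amount):
    """Count how many of the two models currently read the target amount."""
    return (int(car_model.predict(img) == target_amount)
            + int(lar_model.predict(img) == target_amount))

def greedy_flip_attack(image, car_model, lar_model, target_amount,
                       candidate_pixels, max_flips=50):
    """Flip one candidate pixel at a time until CAR and LAR both agree
    on the (fraudulent) target amount, or give up."""
    adv = image.copy()
    current = agreement_score(adv, car_model, lar_model, target_amount)
    for _ in range(max_flips):
        if current == 2:
            return adv  # both models now read the fraudulent amount
        best_pixel, best_score = None, current
        for r, c in candidate_pixels:
            trial = adv.copy()
            trial[r, c] = 1 - trial[r, c]  # flip black <-> white
            score = agreement_score(trial, car_model, lar_model, target_amount)
            if score > best_score:
                best_pixel, best_score = (r, c), score
        if best_pixel is None:
            return None  # no single flip helps; a real attack would backtrack
        adv[best_pixel] = 1 - adv[best_pixel]
        current = best_score
    return adv if current == 2 else None
```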

Crucially, SCAR is able to construct such adversarial attacks given only black-box access to the underlying image classification models. In other words, SCAR assumes no knowledge about the internal workings of the AI system it attacks; the only information it receives from the model is the final prediction for any input image. Such black-box attacks are of much higher interest from a security standpoint, since fraudsters rarely if ever have white-box access to the models they want to attack.

While black-box adversarial attacks are more realistic, they often incur an added computational or financial cost in practice due to the large number of model queries needed to identify the optimal pixels to modify. In a real-world setting, for instance, a company might charge for each request to the model API endpoint and even flag users who repeatedly query similar images. In order to circumvent these issues, SCAR limits the total number of queries it needs per attack using a few different heuristics, such as flipping pixels located close together or at the boundary of letters and words.
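
One way to picture the locality heuristic is sketched below: restrict candidate flips to a small window around pixels that have already been flipped, so each round of queries is spent where changes have proven effective (again, an illustration rather than the paper’s implementation).

```python
# An illustrative sketch (not the paper's implementation) of the locality
# heuristic: limit candidate flips to a small window around pixels that
# have already been flipped.
def nearby_candidates(flipped_pixels, image_shape, radius=2):
    """Return the set of (row, col) positions within `radius` of any
    previously flipped pixel, clipped to the image bounds."""
    rows, cols = image_shape
    candidates = set()
    for r, c in flipped_pixels:
        for dr in range(-radius, radius + 1):
            for dc in range(-radius, radius + 1):
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols:
                    candidates.add((nr, nc))
    return candidates
```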

Why Check Attacks Matter

Adversarial attacks have certainly become top-of-mind for the AI research community in recent years. Unfortunately, they have not received nearly the same level of attention from industry players, who often treat such vulnerabilities as a largely theoretical concern. As our success attacking real-world check processing systems goes to show, this misplaced sense of security can have damaging real-world consequences: in 2018 alone, check fraud accounted for $1.3 billion in losses for banks across the US. As fraudsters continue to become more technically sophisticated, the financial risks associated with substandard AI security practices will only keep growing.

Of course, the ramifications of AI vulnerabilities extend far beyond mobile check processing. Binary image classification systems are also used to recognize license plates, process invoices, and extract information from insurance documents, among other applications. Whether due to an innocuous data error or a SCAR-like attack by a malicious adversary, security failures in such settings could carry a potentially catastrophic legal or human cost. As more and more companies and industries look to capitalize on the promise of automation, our increased dependence on AI will open the door to a wealth of novel threats, attacks, and risks.

At Robust Intelligence, we have the technical capabilities to identify and defend against these vulnerabilities. Our goal is to place the security and robustness of AI models used in high-stakes decision making at the forefront of innovation, rather than on the back-burner. We hope that such research helps spark greater awareness across the industry so that, together, we can build towards a safer and more responsible AI-powered future.
