The Future of Fraud Protection is Intelligence…and a Gentle Touch

A guest post by Lance Ulanoff

Capital One Tech
4 min read · Oct 29, 2018


Modern banks don’t just protect your data; they search the Dark Web for it.

How do you train an AI to have a difficult discussion? You make it listen. In the world of banking and personal finance, few discussions are as fraught as the one where your bank tells you that you may be the victim of fraud.

Prior to the digital age, identity theft and fraud at scale weren’t much of a problem. Now, though, networks of illicit information live on the Dark Web (basically a part of the web hidden from traditional search and indexing methods), much of it collected in big heaping scoops from third-party companies like retailers and online services that have simply done a poor job of keeping our information safe.

According to a 2018 Javelin Strategy &amp; Research study, more than 16 million people were victims of identity fraud in 2017, and those are just the ones who knew about it. Many more remain unaware that someone is masquerading as them or that their personal details are living on the Dark Web.

Banks like Capital One spend almost as much time scanning the Dark Web as the bad guys do, but, as I learned in a recent panel discussion with bank executives who help manage Capital One’s conversational AI, fraud, small business, and credit services, they’re looking for evidence of exposed customer information. Capital One customers and non-customers alike can use its CreditWise app, which is designed to help users monitor and protect their credit.

For many people, the first sign that something is amiss comes from their bank, and sometimes it’s an intelligent assistant delivering the news.

If Capital One’s CreditWise finds customer data on the Dark Web, an automated system has to alert customers that their data is there. Similarly, Capital One’s Eno, an AI-based intelligent assistant the bank launched in 2017, might send a text informing customers that the company’s cloud-based AI detected unusual activity on their credit card, such as duplicate charges or a purchase made out of state or in another country. Putting artificial intelligence on these tasks, and using its ability to quickly identify unusual patterns, cuts the alert and communication time down significantly. But then there’s that communication.
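Capital One didn’t share how its detection models actually work, but to make the idea concrete, here’s a minimal, hypothetical Python sketch of the kind of rule-of-thumb checks an alerting pipeline could run before an assistant like Eno ever sends a text. The field names, thresholds, and rules below are illustrative assumptions, not Capital One’s system.

```python
# Hypothetical sketch: flag the two kinds of "unusual activity" mentioned
# above -- duplicate charges and out-of-state purchases. Illustrative only.
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Transaction:
    card_id: str
    merchant: str
    amount: float
    state: str
    timestamp: datetime

def flag_unusual(txns: list[Transaction], home_state: str) -> list[str]:
    """Return human-readable reasons a batch of transactions looks unusual."""
    reasons = []
    last_seen = {}
    for t in sorted(txns, key=lambda t: t.timestamp):
        key = (t.merchant, t.amount)
        # Duplicate charge: same merchant and amount within a short window.
        if key in last_seen and t.timestamp - last_seen[key] < timedelta(minutes=10):
            reasons.append(f"Possible duplicate charge at {t.merchant} for ${t.amount:.2f}")
        last_seen[key] = t.timestamp
        # Purchase far from the cardholder's usual location.
        if t.state != home_state:
            reasons.append(f"Out-of-state purchase in {t.state} at {t.merchant}")
    return reasons
```

A real system would weigh many more signals (merchant category, spending history, device data) with learned models rather than fixed rules, but the output is the same: a short list of reasons that can be turned into a plain-language alert.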

When I asked Capital One how Eno might handle such an alert, they told me it starts with the facts (the system found this activity) but quickly follows up with what the bank is doing about it. Sometimes it’s a shorter message that simply asks whether a charge was really yours. Provided you gave the bank your cellphone number, an SMS conversation with Eno can clear that up quickly. Either way, bank executives told me they want the interaction to feel as reassuring as a hug, or at least a digital hug, letting the customer know that whatever’s going on, they’ll get through it together.

Eno, however, probably wouldn’t be effective in any of these tasks if consumers didn’t feel comfortable with the interactions. Capital One’s gender-neutral intelligent assistant is casual, direct, and capable of handling natural language, slang, and emojis (you can text it a thumbs-up to let it know everything’s OK with that suspicious-looking charge).
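As a toy illustration of what “handling slang and emojis” can mean in practice, here’s a short, hypothetical Python sketch that maps a free-form reply, including a thumbs-up, onto a confirm/deny decision about a flagged charge. Real assistants use far richer natural-language models; the vocabulary below is purely an assumption for the example.

```python
# Hypothetical sketch: interpret a customer's reply about a suspicious charge.
# The word and emoji lists are illustrative, not Eno's actual model.
CONFIRM = {"yes", "yep", "yeah", "sure", "that was me", "👍", "👌"}
DENY = {"no", "nope", "not mine", "that wasn't me", "👎"}

def classify_reply(text: str) -> str:
    cleaned = text.strip().lower()
    if cleaned in CONFIRM or any(word in CONFIRM for word in cleaned.split()):
        return "confirmed"   # customer recognizes the charge
    if cleaned in DENY or any(word in DENY for word in cleaned.split()):
        return "denied"      # treat the charge as potential fraud
    return "unclear"         # ask a follow-up question or route to a human

print(classify_reply("👍"))    # confirmed
print(classify_reply("nope"))  # denied
```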

To build an effective intelligent assistant, especially one that can grow and change with customer needs, you use Machine Learning, training it with examples that it can then use when deciding on the proper opening communication and response. But that training has to come from somewhere.

Capital One likes to build new customer-facing technology by starting with intelligence it has already gathered from customers. It surveys them to understand how they might react to new technologies like chatbots, and it draws on a lot of real-time customer activity data.

Once they build a prototype, they bring customers into usability testing labs. This helps build what Capital One calls human-centric products. But consumers in a controlled setting wouldn’t provide the best Machine Learning training for Eno.

When I asked Capital One how they trained Eno for these tough conversations, the answer was quite simple. They had hundreds of thousands of hours of web-based customer chat log data (between human customer service reps and customers) that they anonymized and fed into Eno. They followed that up with machine learning, training the intelligent assistant on how banking conversations go and what customers do and don’t expect when talking about their banking activities.
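To picture how those transcripts become training data, here’s an illustrative sketch (assuming scikit-learn, and not Capital One’s actual pipeline) that fits a simple intent classifier on a handful of made-up, already-anonymized utterances. In production the corpus would be vastly larger and the models far more sophisticated, but the shape of the task is the same: labeled examples in, a model that recognizes what a customer is asking about out.

```python
# Illustrative sketch of training an intent classifier from anonymized chat
# utterances. Data and labels here are invented for the example.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

utterances = [
    "I don't recognize this charge on my card",
    "what's my current balance",
    "I think someone stole my identity",
    "can you raise my credit limit",
]
intents = ["dispute_charge", "check_balance", "report_fraud", "credit_limit"]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(utterances, intents)

print(model.predict(["there's a charge here I never made"]))  # -> ['dispute_charge']
```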

For all Eno knows about conversation, the goal here is not to make it seem in any way human. It has, apparently, something more reliable and reassuring: an algorithmic understanding of the human condition when it comes to banking and how a nervous person might react to learning that they’re the victim of identity theft or credit card fraud.

To read more from Lance, follow him on Medium here.

DISCLOSURE STATEMENT: These opinions are those of the author. Unless noted otherwise in this post, Capital One is not affiliated with, nor is it endorsed by, any of the companies mentioned. All trademarks and other intellectual property used or displayed are the ownership of their respective owners. This article is © 2018 Capital One.


Capital One Tech

From our founding, we’ve used tech to change the banking industry. Today, our innovations are making banking better for tens of millions of our customers.