Expanding the Audience of Financial Health Tools Using Machine Learning

Helping Millions of Americans “Brigit” Until Their Next Payday

This blog was written by Brian Konzman SJ, Nicki (Nicola) Kornbluth, Christine Mockus, Brian Nordyke, Neal Tan, and Tianyan Wang as part of the “Analytics in Action” course at Columbia Business School.

An estimated 100 million Americans live paycheck to paycheck; each year, more than 40 million pay an overdraft fee to make ends meet. Unfortunately, most financial institutions don’t relieve that common anxiety. Enter Brigit. With transparent, fair, and simple financial tools, Brigit helps everyday people build a brighter financial future.

Brigit digitizes and humanizes the cash advance industry, a space otherwise dominated by expensive overdraft fees and predatory payday lenders. Customers avoid hundreds of dollars in overdraft fees annually through Brigit’s small, but vital, instant advances. Alternative data allows the company to do this without exposing itself to outsized risk, while stepping away from the barriers FICO scores create.

In just two years, Brigit has already helped more than 1 million people feel more financially secure. After seeing the impact their product has had on customers, Brigit is working to extend such advances to even more people.

That’s where we, a team of data-loving engineers and MBAs from Columbia University, come in.

Looking to introduce even more innovation to their workflow, Brigit partnered with us through Columbia Business School’s Analytics in Action course to find a way to approve more users for Brigit advances. To do so, we needed to build a model that predicted the likelihood of default on each advance more accurately. With more accurate predictions, Brigit could confidently accept additional users without worrying about skyrocketing default rates.

With Brigit’s encouragement, we decided to let their data speak to us and avoid influence from their existing models. We hit the ground running with data cleanup and feature engineering. In addition to imputation, normalization, and creating new features from existing data and user-level history, we also brought in some external data. For example, we used the Plaid API to match the financial institution IDs we were given to institution names, then used web scraping to gather additional data. This allowed us to categorize those institutions by attributes such as size and institution type (e.g., neobanks). Those categorical features turned out to have significant predictive power.
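To give a flavor of what that pipeline looked like, here is a minimal sketch in pandas. The column names, the institution metadata file, and the specific features are illustrative stand-ins, not Brigit’s actual schema:

```python
import pandas as pd
from sklearn.preprocessing import StandardScaler

# Hypothetical advance-level dataset; all column names are illustrative.
advances = pd.read_csv("advances.csv")

# Imputation: fill missing balances with each user's median balance.
advances["avg_balance"] = advances.groupby("user_id")["avg_balance"].transform(
    lambda s: s.fillna(s.median())
)

# Normalization: put numeric features on a common scale.
numeric_cols = ["avg_balance", "monthly_income", "advance_amount"]
advances[numeric_cols] = StandardScaler().fit_transform(advances[numeric_cols])

# User-level history: count of earlier advances this user repaid on time.
advances = advances.sort_values(["user_id", "advance_date"])
advances["prior_repaid"] = (
    advances.groupby("user_id")["repaid_on_time"].cumsum()
    - advances["repaid_on_time"]
)

# External data: institution metadata assembled from the Plaid ID-to-name
# matching plus web scraping (size and type labels are illustrative).
institutions = pd.read_csv("institution_metadata.csv")
advances = advances.merge(institutions, on="institution_id", how="left")

# One-hot encode the categorical institution features.
advances = pd.get_dummies(
    advances, columns=["institution_type", "institution_size"]
)
```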

The data itself revealed many interesting insights about Brigit’s users. For example, it turned out that (within limits) users who had previously had very negative bank balances were actually less likely to default. This seemed strange, but supplementary research uncovered that there are in fact two types of overdraft behavior: users who purposefully overdraft to pay big bills on time, and users who are simply unaware of their bank balance and overdraft on everyday purchases. The former are making prudent use of available cash flows, while the latter might reasonably be regarded as somewhat less responsible borrowers.
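This is the kind of pattern that falls out of a simple cut-and-group exercise. A toy version, with made-up column names and balance cutoffs:

```python
import pandas as pd

# Hypothetical columns: min_balance is a user's lowest observed bank
# balance; defaulted is 1 if the advance was not repaid.
df = advances[["min_balance", "defaulted"]].dropna()

# Bucket users by how negative their balance has ever gone.
df["balance_bucket"] = pd.cut(
    df["min_balance"],
    bins=[-float("inf"), -500, -100, 0, float("inf")],
    labels=["very negative", "negative", "slightly negative", "never negative"],
)

# Default rate per bucket surfaces the counterintuitive relationship.
print(df.groupby("balance_bucket", observed=True)["defaulted"].mean())
```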

Fortunately, we also found that most people pay back their advances! While this is great news for Brigit’s business, it presents a challenge for machine learning models: a highly imbalanced dataset. Because there are far fewer instances of users defaulting, it is harder for a model to “learn” what that behavior looks like. To address this, we rebalanced the dataset using oversampling. Because it was unclear how much the two classes (default and non-default) overlapped, and because synthetic oversampling can overfit when they do, we tested a variety of methods: ADASYN, SMOTE, and simple random oversampling. From there, we also tested a variety of models, including logistic regression, random forests, boosted decision trees, and even an anomaly detection model (often used by credit card companies to detect fraud).
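Here is a rough sketch of that search with scikit-learn and imbalanced-learn. `X` and `y` stand in for the engineered feature matrix and default labels, and the hyperparameters are placeholders:

```python
from imblearn.over_sampling import ADASYN, SMOTE, RandomOverSampler
from imblearn.pipeline import Pipeline
from sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

samplers = {
    "random": RandomOverSampler(random_state=0),
    "smote": SMOTE(random_state=0),
    "adasyn": ADASYN(random_state=0),
}
models = {
    "logistic": LogisticRegression(max_iter=1000),
    "forest": RandomForestClassifier(random_state=0),
    "adaboost": AdaBoostClassifier(random_state=0),
}

# Cross-validate every sampler/model combination on ROC AUC.
for s_name, sampler in samplers.items():
    for m_name, model in models.items():
        pipe = Pipeline([("sampler", sampler), ("model", model)])
        scores = cross_val_score(pipe, X, y, cv=5, scoring="roc_auc")
        print(f"{s_name} + {m_name}: AUC = {scores.mean():.3f}")
```

One detail worth calling out: imbalanced-learn’s pipeline applies the oversampler only inside each training fold, so synthetic examples never leak into the validation data.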

Initially, we optimized our models to predict defaults as accurately as possible, since defaults present a large financial risk to Brigit. The math here is simple: the fewer people who default, the more people Brigit is able to help with the same capital. To calculate meaningful metrics, though, we had to artificially set approval thresholds. For example, if our model predicted a 47% chance of defaulting, we had to decide whether that user would be approved. Since each model’s probability scores behave differently at any single cutoff, this made it very hard to compare results across models.
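In code, applying one such arbitrary cutoff looks like the snippet below, reusing a fitted pipeline from the sketch above with a hypothetical train/test split:

```python
import numpy as np

# Predicted default probabilities for held-out users.
proba = pipe.fit(X_train, y_train).predict_proba(X_test)[:, 1]

# An arbitrary approval threshold: approve anyone whose predicted
# probability of default is below 50%.
threshold = 0.50
approved = proba < threshold

approval_rate = approved.mean()
default_rate = np.asarray(y_test)[approved].mean()  # among approved users
print(f"approved {approval_rate:.0%}, default rate {default_rate:.1%}")
```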

For this reason, we started creating lift curves, which let us visualize the number of defaults across a range of approval thresholds for each model. This made it much easier to compare the tradeoff between acceptance rate and default rate across models.
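One way to build such a curve is to sort users from least to most risky and track the default rate as the acceptance cutoff sweeps across them. Continuing from the variables in the previous sketch:

```python
import matplotlib.pyplot as plt
import numpy as np

# Sort held-out users from least to most risky.
order = np.argsort(proba)
sorted_defaults = np.asarray(y_test)[order]

# At each cutoff, accept the k least-risky users and compute the
# default rate among them.
n = len(sorted_defaults)
acceptance_rate = np.arange(1, n + 1) / n
default_rate = np.cumsum(sorted_defaults) / np.arange(1, n + 1)

plt.plot(acceptance_rate, default_rate, label="model")
plt.xlabel("Acceptance rate")
plt.ylabel("Default rate among accepted users")
plt.legend()
plt.show()
```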

With this method, we found that logistic regression and AdaBoost, each combined with random oversampling, were the strongest models: they allowed us to let in the most users at any given default rate. Since logistic regression is far easier to implement and to explain, we focused on that model moving forward.
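Part of that ease of explanation comes from the coefficients themselves. In a sketch like the one below (`feature_names` is a hypothetical list matching the engineered columns), each coefficient exponentiates into an odds ratio, i.e. the multiplicative change in default odds per unit of a feature:

```python
import numpy as np
from imblearn.over_sampling import RandomOverSampler
from imblearn.pipeline import Pipeline
from sklearn.linear_model import LogisticRegression

# Final pipeline: random oversampling feeding a logistic regression.
final = Pipeline([
    ("sampler", RandomOverSampler(random_state=0)),
    ("model", LogisticRegression(max_iter=1000)),
]).fit(X, y)

# exp(coefficient) is the odds ratio for each feature, which makes
# the model straightforward to explain to non-technical stakeholders.
coefs = final.named_steps["model"].coef_[0]
for name, c in zip(feature_names, coefs):
    print(f"{name}: odds ratio {np.exp(c):.2f}")
```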

After weeks with our heads down in the data, we produced a final model and projected its impact. We estimate that, with the help of our new model, Brigit can reasonably accept 10% more users than it currently does while maintaining the same default rate. This will allow Brigit to expand its reach and help 10% more people break free of financial stress and build a brighter financial future.
