Without Privacy, We Are Creating Systems to Discriminate

Justin Ehrenhofer
7 min read · Mar 21, 2022


The United States has a long history of segregation, and financial access is a large part of that.

In 1933, the US federal government under Franklin D. Roosevelt established the Home Owners’ Loan Corporation (HOLC). Its purpose as a New Deal program was to make it more affordable for Americans to own homes. Shortly after, the Federal Housing Administration was formed to address similar goals.

While the direct intent of these programs was good and many families benefited from the favorable mortgage terms, there was a much darker side. The HOLC issued infamous maps that marked certain neighborhoods as higher or lower mortgage security risk. In practice, this led to widespread marking of minority (mostly Black) communities as “hazardous” investments. This process is referred to as redlining, and it was made illegal by the Fair Housing Act of 1968.

Redlining in Milwaukee, National Archives (Public Domain)

While the incremental impact of these maps on the overall trend of mortgage discrimination is disputed, one thing that is well-understood is that mortgage lending discrimination was pervasive. Black families were significantly less likely to be eligible for subsidized mortgages. US government programs explicitly encouraged discrimination.

Even though redlining is now illegal, the scars from these programs remain today in effectively all US cities. Urban areas are significantly more segregated than they were in 1930. Associated Bank settled with the Department of Housing and Urban Development for approximately $200 million for allegedly denying mortgages to Black and Hispanic applicants from 2008 to 2010, among other settlements.

It is not just housing. Access to an array of financial services on favorable terms, including student loans, credit cards, and insurance, has a similar history of discrimination. These factors are often highly intertwined; the benefits of the GI Bill for cheap mortgages were technically available to Black veterans, but in practice they could not easily use these benefits because of widespread white-only neighborhoods and white-only college programs.

Such blatant racial discrimination is much less visible today than it was in 1930. But that does not mean it is gone. Discrimination still has widespread effects; it is just a lot harder to pin down than lines on a map.

With most public cryptocurrencies out there, we have opened Pandora’s box.

Effectively all cryptocurrencies (notably not Monero) leave obvious traces of transaction histories in a public blockchain record. Think Twitter for your bank account. This transparency includes a full accounting of your balances and transaction counterparties over time, and that information is available to anyone through nodes and block explorers: your neighbor, your bank, US government agencies, and North Korean spies.
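To make that exposure concrete, here is a minimal sketch of how anyone with an HTTP client can pull an address’s history. It assumes the Esplora-style REST API served by blockstream.info and uses a hypothetical placeholder address; any full node or block explorer exposes equivalent data.

```python
# Minimal sketch: pulling a Bitcoin address's public history with plain HTTP.
# Assumes the Esplora-style REST API served by blockstream.info; the address
# below is a hypothetical placeholder, not a real profile.
import requests

EXPLORER = "https://blockstream.info/api"
ADDRESS = "bc1q-example-address"  # hypothetical; substitute any address of interest

# Lifetime activity summary for the address.
summary = requests.get(f"{EXPLORER}/address/{ADDRESS}", timeout=10).json()
stats = summary["chain_stats"]
print("transactions:", stats["tx_count"])
print("total received (sats):", stats["funded_txo_sum"])

# Recent transactions, including every counterparty address they touch.
txs = requests.get(f"{EXPLORER}/address/{ADDRESS}/txs", timeout=10).json()
for tx in txs:
    counterparties = {
        out["scriptpubkey_address"]
        for out in tx["vout"]
        if out.get("scriptpubkey_address") and out["scriptpubkey_address"] != ADDRESS
    }
    print(tx["txid"], "paid to:", counterparties)
```

Nothing here requires special access or tooling; the same few requests scale trivially into the mass surveillance described below.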

Blockchain surveillance companies, originally aimed at thwarting crime such as darknet market purchases and the laundering of their proceeds, ingest this public data and assign risk scores to every address and transaction. More than a dozen companies, including Chainalysis, CipherTrace, and Elliptic, sell manual and automated compliance tools that deliver these proprietary risk scores.

These risk scores are used heavily by regulators, cryptocurrency exchanges, and banks. Some purportedly “decentralized” exchanges use them to block funds tied to sanctioned addresses from interacting with their platforms. Marathon even refused to include certain categories of high-risk transactions in the bitcoin blocks it mined, before backing away from that policy shortly after.

Having a risk score sounds lovely from a “catch the bad guys” perspective. With mass surveillance of a public blockchain, where the only place to “hide” is in an obvious accounting record, it is quite easy to trace funds. While expert criminals can nevertheless manage to hide sufficiently well, most people do not have the luxury of privacy.

This subjects cryptocurrency users to the consequences of whatever risk scores are applied to their transactions by a company that has no requirement to communicate the score or its reasoning to the user. This is not like credit reporting, where at least you can view your profile with certain companies and dispute incorrect information. These are companies making decisions on the risk levels of individuals and businesses using their proprietary weighting of many public and private data sets, with no meaningful accountability to the user or society at large.

Here is an obvious, real illustration of companies making arbitrary distinctions to label some protocols as higher risk than others. Blockchain surveillance companies largely label no-KYC (know your customer) exchanges that do not cooperate with US law enforcement as high risk. They largely label no-KYC decentralized exchange protocols like Uniswap (with no money services business, or MSB, entity to contact) as low risk. In both cases, users can trade one cryptocurrency for another (and sometimes cryptocurrencies for cash) without needing an account and without sharing any personal information. However, using one of these platforms will make a user stick out as a likely criminal, whereas using the other will be marked as totally normal.

A dedicated article could be written about why this is likely the case when the actual money laundering and terrorism financing (ML/TF) risks are similar. Possible reasons include that many DeFi platforms share investors with these surveillance companies and/or are inquiring about paying for their services. These are for-profit companies after all, and they are not legally responsible for how their customers use their risk scores. More practically, as these protocols have exploded in popularity, realistically finding a needle of bad assets in a haystack of liquidity has become harder. I speculate it is all of these factors.

While most companies sell compliance solutions focused on catching possible criminals, anyone can investigate blockchain data for any purpose. It is quite trivial to track sex workers who receive cryptocurrency payments and those who pay for their services. The same goes for gamblers, VPN purchasers, donors to charities, NFT owners, frequent customers of Black or woman-owned businesses, you name it.

An example in HBO’s It’s a Sin really clicks for me personally. A main character, Ritchie, is trying to get a mortgage as a gay man in the UK during the AIDS epidemic in the 1980s. He repeatedly lies about his sexuality as the banker asks him a series of related questions, which if answered honestly would have resulted in his mortgage being denied (this was legal at the time). The bank in this situation did not have a data point to deny the loan on those grounds, but what if there was a public blockchain record that showed Ritchie purchasing beers at a local gay bar or paying for medication or medical tests?

It is somewhat hard to fathom this situation today, when most people use cryptocurrencies solely as an investment on an exchange. Most people do not go around spending cryptocurrencies at their local bar or hospital.

Still, cryptocurrency transaction records are there, to be used by anyone for any purpose, permanently. This data will be used to discriminate against people, intentionally or unintentionally. This more subversive form of discrimination is effectively impossible to prevent.

Suppose someone buys their groceries at a local Black-owned store that often serves low-income customers. Lower-income individuals are more likely to have nontraditional income streams that are harder to document, for example income paid “under the table.”

When this person pays their rent with cryptocurrencies, their landlord uses a payment processor (since no one wants to receive tainted funds). The payment processor notices that the renter repeatedly sends money to an entity that receives a relatively high amount of illegal proceeds from drugs, gambling, etc. The company decides to assign a higher risk score than default for the user’s transaction profile and flag the payment, demanding ID documents for enhanced due diligence that the renter does not have. Since they cannot complete the payment, they cannot pay their rent. And the landlord does not want to accept the payment directly, since that would jeopardize their risk score with others.
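To make the mechanics concrete, here is a deliberately simplified, hypothetical sketch of the kind of counterparty-exposure check a processor might run. The entity names, amounts, and threshold are invented for illustration; real vendors use proprietary data and weightings that neither the renter nor the landlord can inspect.

```python
# Hypothetical illustration only: no vendor publishes its model, and this is
# not how any specific company computes scores. It shows how a naive
# counterparty-exposure heuristic flags the renter from the scenario above.
RISKY_ENTITIES = {"corner_grocery"}  # labeled risky because some of its other
                                     # customers pay with "tainted" coins
EXPOSURE_THRESHOLD = 0.15            # arbitrary cutoff for enhanced due diligence

renter_payments = [
    {"to": "corner_grocery", "amount": 120.0},      # weekly groceries
    {"to": "corner_grocery", "amount": 95.0},
    {"to": "landlord_processor", "amount": 850.0},  # rent
    {"to": "utility_company", "amount": 60.0},
]

def counterparty_exposure(payments):
    """Fraction of total spending sent to entities on the risky list."""
    total = sum(p["amount"] for p in payments)
    risky = sum(p["amount"] for p in payments if p["to"] in RISKY_ENTITIES)
    return risky / total if total else 0.0

exposure = counterparty_exposure(renter_payments)
if exposure > EXPOSURE_THRESHOLD:
    print(f"FLAG: {exposure:.0%} exposure exceeds threshold; demand ID documents")
else:
    print("Payment cleared")
```

The renter is flagged not for anything they did, but because of who else shops where they shop, and nothing in the output tells them that.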

In situations like these, how easy is it to pinpoint that the activity that raised their risk level above the acceptable limit was shopping at an independent Black-owned grocery store instead of Target?

Now repeat this for any other factor that anyone might consider higher risk or morally objectionable, across companies and people in all industries you transact with.

For years, some people in the technology industry have claimed that artificial intelligence removes human biases from decisions. Instead, case after case shows that AI removes accountability and bakes in existing bias, while removing the ability to meaningfully detect or address its impacts.

One’s transaction history is intimate and personal. People pay for medical expenses, pay off student loans, post bail for family members, and make other deeply personal payments. What people spend their money on reveals a lot about their identity, especially as we move away from treating cryptocurrencies as play money and increasingly use them for commerce, donations, and remittances.

Some people may suggest that these problems can be addressed with more regulation. Who cares if these data points exist if you can simply order people not to use them? The problem is, this is much easier said than done. How can you prevent people from looking at a public record? How do you catch all biases? All regulation will do is encourage people to make the source of their data more obscure and more difficult to pin down. And let us not forget the obvious: the US federal government caused unconscionable harm to generations of Black and Latino families through discriminatory financial access programs in the past.

I strongly believe that cryptocurrency protocols should strive to remove these data points entirely. There is simply no defensible reason for a public record of everyone’s transaction history to exist. Nor are we going to get anywhere by trying to regulate, industry by industry, which purposes public transaction records can be used for. The best way to protect people from discrimination (by companies and governments, intentional and unintentional) is to address the source issue: stop storing intimate, traceable transaction data publicly.

