Preventing artificial intelligence-based discrimination in housing practices requires new federal guidance

Published in SciTech Forefront · Jul 21, 2022

Brittany Baur and Maia Gumnit

Executive summary: Artificial intelligence (AI) is being employed throughout the housing market at an increasing rate, directly affecting tenants and prospective homeowners. These AI models learn to make decisions related to tenant screening and home loans from existing data; if the training data reflect discriminatory patterns, the model itself can perpetuate discrimination. This type of discrimination is widely considered a disparate impact violation, but because AI algorithms are complex and often opaque, we recommend that the Department of Housing and Urban Development (HUD) provide AI-specific guidance on how to identify and prevent AI-based discrimination in the housing market.

Artificial Intelligence can perpetuate discrimination in the housing market

Artificial intelligence (AI), an umbrella term for the use of machines to solve specific problems based on existing information, can be incorporated into a wide range of decision-making tasks. Increasingly, it is being employed in domains that directly affect people's lives, such as criminal justice, healthcare, and housing. Housing providers, in particular, use AI in mortgage lending, home loans, property marketing, and tenant screening. While AI is often touted as more objective and efficient than human decision-making, it can also unintentionally perpetuate harmful discriminatory practices.

There are numerous documented examples of AI-based discrimination. Notably, Berkeley researchers found that lenders using AI make loan-pricing decisions that discriminate against borrowers of color. Because AI models learn to predict outcomes from existing datasets, they will perpetuate discrimination if trained on biased data. In particular, proxy variables, which are not themselves protected characteristics but are correlated with them (e.g., zip code and credit score), can encode discriminatory patterns.
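
To make the proxy-variable mechanism concrete, the sketch below trains a simple model on synthetic lending data in which the protected characteristic is never given to the model, but a correlated ZIP-code proxy is. The data, feature names, and model choice are illustrative assumptions for this post, not a description of any real lender's system.

```python
# Illustrative sketch (synthetic data): a model trained without any protected
# attribute can still reproduce historical bias through a correlated proxy.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Protected group membership (never provided to the model).
group = rng.integers(0, 2, n)

# Proxy feature: a ZIP-code segment that is highly correlated with group
# membership, reflecting (assumed) historical residential segregation.
zip_segment = np.where(rng.random(n) < 0.9, group, 1 - group)

# Income is distributed the same way for both groups in this toy example.
income = rng.normal(50, 10, n)

# Historical approvals were biased: at the same income, group 1 applicants
# were approved less often than group 0 applicants.
approve_logit = (income - 50) / 10 + 1.0 * (group == 0)
approved = (rng.random(n) < 1 / (1 + np.exp(-approve_logit))).astype(int)

# Train on income and the proxy only; the protected attribute is excluded.
X = np.column_stack([income, zip_segment])
pred = LogisticRegression().fit(X, approved).predict(X)

for g in (0, 1):
    print(f"group {g}: predicted approval rate = {pred[group == g].mean():.2f}")
# The gap in predicted approval rates mirrors the bias in the training labels,
# even though group membership was never an input.
```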

Current disparate impact guidance is insufficient for preventing AI-based discrimination

Discrimination can be either intentional (disparate treatment) or unintentional (disparate impact). A policy or AI algorithm may appear neutral on its surface, but it constitutes a disparate impact violation if it disproportionately harms a protected class. The Fair Housing Act of 1968 bans both disparate treatment and disparate impact, and in 2013 HUD standardized regulations for disparate impact claims (the "2013 rule"). In a landmark 2015 case, the Supreme Court ruled that disparate impact claims are indeed cognizable under the Fair Housing Act, but it also held that the plaintiff must prove that the defendant's policies cause the discriminatory effect.
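
As a rough illustration of what a disproportionate effect on a protected class can look like in the data, the short calculation below compares denial rates across two groups under a facially neutral screening criterion. The numbers and the criterion are hypothetical, and a rate comparison like this is only one way analysts surface a potential disparity; it is not the legal test itself.

```python
# Hypothetical outcomes of a facially neutral tenant-screening criterion
# (e.g., "deny any applicant with a prior eviction filing on record").
denied = {"group_a": 120, "group_b": 340}
applied = {"group_a": 1000, "group_b": 1000}

rates = {g: denied[g] / applied[g] for g in denied}
ratio = min(rates.values()) / max(rates.values())

for g, r in rates.items():
    print(f"{g}: denial rate {r:.0%}")
print(f"denial-rate ratio (lower/higher): {ratio:.2f}")
# A rule that denies 12% of one group but 34% of another is the kind of
# disparity a disparate impact analysis would scrutinize, even though the
# rule never mentions a protected characteristic.
```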

Although the 2013 rule did not explicitly address AI-based discrimination, AI would most likely fall under disparate impact analysis. In 2020, HUD amended the 2013 disparate impact standard to provide more specific guidance on determining disparate impact, including AI-related provisions (the "2020 rule"). The 2020 rule specified that the plaintiff must show both that the practice (or AI) is the direct cause of adverse effects on a protected class and that the practice serves no valid purpose. Furthermore, housing providers could avoid liability by relying on third-party vendors rather than developing and assessing the predictive model themselves. In effect, the 2020 rule places a high burden of proof on the plaintiff and gives housing providers avenues to avoid liability, creating two major roadblocks for proving disparate impact in AI-based discrimination cases.

The Biden administration has placed a moratorium on the 2020 rule and is currently enforcing the 2013 rule. However, the 2013 rule contains no language on AI-based discrimination, making it difficult to hold housing providers liable when they deploy discriminatory AI algorithms. Compared to a conventional written policy, it is harder to understand the steps by which an AI algorithm arrives at its conclusions ("black box" AI), especially for the overwhelming majority of plaintiffs who lack AI expertise. AI can also uncover complicated relationships between seemingly unrelated variables by leveraging massive, complex datasets, and until recently, non-traditional data such as social media activity or educational attainment were not used in housing decisions. Therefore, in addition to the guidance provided by the 2013 disparate impact rule, HUD should consider AI-specific guidance for disparate impact claims.

What can be done?

Although many experts agree that AI-based discrimination would most likely fall under disparate impact analysis, continuing to enforce the 2013 rule unchanged would still leave ambiguity in AI cases. Conversely, if the moratorium is lifted and the 2020 rule is reinstated, loopholes would remain that make it difficult to hold housing providers liable in AI discrimination cases. We recommend that the 2013 rule be formally reinstated, with all of the following AI-based policy considerations implemented:

  1. Add AI-specific language for defining disparate impact claims. This would help clarify how to determine a difference in outcomes for groups affected by AI, and how those standards apply across different contexts.
  2. Require monitoring, testing, and searching for less discriminatory alternatives (LDAs). Guidance could explain how to monitor and test the inputs and outputs of an AI algorithm, as well as the model itself; for example, it should specify that inputs must be representative of the population the model affects. The guidance should also outline how to search for LDAs, in cases where the same legitimate outcome could be achieved by other means (a minimal sketch of what such an audit might look like follows this list).
  3. Clearly define guidance aimed at third-party vendors. Housing providers often rely on third-party vendors to develop AI, which potentially shields the provider from liability under the 2020 rule. HUD could make clear that regulations also apply to third-party vendors, that providers are not shielded if the AI is found to be discriminatory, and that providers must be able to explain decisions made by the AI algorithm.
  4. Release mortgage lending data for ethically aligned research and provide representative practice datasets. HUD could require the release of data to encourage public research, expand mortgage lending databases, and maintain its own testing datasets that are regularly updated to account for population shifts.
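
To make recommendation 2 concrete, here is a minimal sketch of the kind of routine audit such guidance might describe: checking that model inputs are representative, comparing denial rates across groups for the deployed model, and testing whether a candidate less discriminatory alternative narrows the gap. The data, group labels, and figures are hypothetical placeholders, not an established HUD procedure.

```python
# Sketch of a disparate impact audit a housing provider (or vendor) could run:
#   1. input check: is the training data representative of the population?
#   2. output check: how far apart are denial rates across groups?
#   3. LDA check: does an alternative model narrow the gap?
# All data and group labels below are hypothetical placeholders.
import numpy as np
import pandas as pd

def representativeness_gap(training_share: pd.Series, population_share: pd.Series) -> pd.Series:
    """Absolute gap between each group's share of the training data and of the population."""
    return (training_share - population_share).abs()

def denial_rate_gap(df: pd.DataFrame, group_col: str, denied_col: str) -> float:
    """Largest difference in denial rates between any two groups."""
    rates = df.groupby(group_col)[denied_col].mean()
    return float(rates.max() - rates.min())

# --- hypothetical audit inputs ---
population_share = pd.Series({"group_a": 0.5, "group_b": 0.5})
training_share = pd.Series({"group_a": 0.7, "group_b": 0.3})

audit = pd.DataFrame({
    "group": ["group_a"] * 500 + ["group_b"] * 500,
    "denied_current": np.r_[np.zeros(440), np.ones(60), np.zeros(330), np.ones(170)],
    "denied_lda": np.r_[np.zeros(430), np.ones(70), np.zeros(410), np.ones(90)],
})

print("input representativeness gap:")
print(representativeness_gap(training_share, population_share))
print("current model denial-rate gap:", denial_rate_gap(audit, "group", "denied_current"))
print("candidate LDA denial-rate gap:", denial_rate_gap(audit, "group", "denied_lda"))
# If the alternative model serves the same legitimate purpose (e.g., similar
# default or eviction outcomes) with a smaller gap, guidance could require it.
```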

Additional Information

For more information about the potential negative consequences of the 2020 rule, see this article by Professor Valerie Schneider. For more ideas about potential policy prescriptions, see this comment by the NCRC and this Brookings Institution brief. For more information about LDAs in disparate impact claims, see this Justice Department manual.


Brittany Baur is a Senior Data Scientist at University of Michigan Medicine, with a Ph.D. in Computational Sciences and an M.S. in Bioinformatics from Marquette University.