How the FTC Advances Equity in AI by Protecting Consumers

As British mathematician Clive Humby proclaimed in 2006, “Data is the new oil.” Today, more than ever, we are beginning to comprehend just what Humby meant. Companies are cashing in on data collected from users to fuel services based on artificial intelligence (AI) technology.

As AI revolutionizes the modern economy, the government is often seen as slow to react, leaving consumers vulnerable to powerful new technology that may replicate and entrench discriminatory patterns found in real-world data.

Initial regulation of AI by the U.S. government has come from perhaps an unexpected place: The Federal Trade Commission. In April 2021, in what was called a “shot across the bow” by University of Washington School of Law professor Ryan Calo, the Commission published an official blog post entitled “Aiming for Truth, Fairness, and Equity in Your Company’s Use of AI.”

The Commission shocked observers by formally recognizing the link between equity and Section 5 of the FTC Act. The FTC’s post stated that “The FTC Act prohibits unfair or deceptive practices. That would include the sale or use of — for example — racially biased algorithms.”

The post explained that, in addition to the FTC Act, the Commission can bring enforcement actions against the growing mass of companies using AI under the Fair Credit Reporting Act (FCRA), enacted in 1970, and the Equal Credit Opportunity Act (ECOA), enacted in 1974. Both of these laws address automated decision-making, and financial services companies have been applying them to machine-based credit underwriting models for decades, according to an earlier April 2020 FTC Guidance blog, “Using Artificial Intelligence and Algorithms.” The FCRA, for example, regulates data used to make decisions about consumers — such as whether they get a job, get credit, get insurance, or can rent an apartment.

The FTC’s Turn Toward “Algorithmic Injustice”

In recent years, FTC commissioners have sounded alarms about “the dangers of flawed algorithms.” In extensive remarks delivered to the UCLA School of Law in January 2020, Commissioner Rebecca Kelly Slaughter outlined how to regulate AI to “promote justice and expand opportunity” while preventing “the entrenchment of algorithmic decision-making tools that produce the same biased outcomes — or worse — that we are striving to reduce.” But the FTC’s regulatory powers are limited to enforcing existing laws, and absent federal legislation on AI and equity, it can only do so much under statutes passed 50 to 100 years ago. In addition, the FTC is a law enforcement agency, not a law-making one, and thus can act only on complaints or discovered violations, and with limited resources.

The blog post, however, laid out several principles that the Commission says follow from applying existing law to new AI fact patterns:

First, companies should “Start with the right foundation” and understand the datasets that are fueling the AI. They should ask themselves: Is there missing information? Does this dataset represent a diverse group of individuals? Are there any gaps that can be identified? (A rough sketch of what such a dataset review might look like in code appears after this list of principles.)

Second, “Watch out for discriminatory outcomes.” Do the results perpetuate protected class inequity? Has the algorithm been adequately tested? (See the second sketch below for one simple way to probe for such disparities.)

Third, companies should “Embrace transparency and independence” by publishing results, opening data and source code to outside inspection, and subjecting their work to independent audits.

Other recommendations include “Don’t exaggerate what your algorithm can do or whether it can deliver fair or unbiased results,” “Tell the truth about how you use data,” and “Do more good than harm.”

In its final piece of advice, the Commission leaves no uncertainty as to its intentions: “Hold yourself accountable — or be ready for the FTC to do it for you.”
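To make the first principle more concrete, below is a minimal sketch, in Python with pandas, of what an initial dataset review might look like. It is purely illustrative rather than an FTC-prescribed procedure, and the file name, column name, and benchmark shares are assumptions invented for the example.

```python
import pandas as pd

# Hypothetical audit of a training dataset before it is used to fit a model.
# The file name, column name, and benchmark shares are illustrative assumptions.
df = pd.read_csv("applicants.csv")

# 1. Is there missing information?
missing_share = df.isna().mean().sort_values(ascending=False)
print("Share of missing values per column:")
print(missing_share)

# 2. Does the dataset represent a diverse group of individuals?
# Compare group shares in the data against an external benchmark,
# e.g., census figures for the population the model will be applied to.
benchmark = {"group_a": 0.60, "group_b": 0.19, "group_c": 0.13, "group_d": 0.08}
observed = df["demographic_group"].value_counts(normalize=True)

for group, expected in benchmark.items():
    actual = observed.get(group, 0.0)
    gap = actual - expected
    print(f"{group}: {actual:.2%} in data vs {expected:.2%} benchmark (gap {gap:+.2%})")
```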
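For the second principle, one rough way to probe for discriminatory outcomes is to compare a model’s selection rates across demographic groups, for instance against the four-fifths rule of thumb used in employment contexts. Again, this is a hedged sketch rather than a method the FTC endorses, and the input file and column names are assumptions.

```python
import pandas as pd

# Hypothetical check of model outcomes for disparate impact across groups.
# A file "decisions.csv" with columns "group" and "approved" (0/1) is assumed.
results = pd.read_csv("decisions.csv")

# Selection (approval) rate for each demographic group.
rates = results.groupby("group")["approved"].mean()
reference_rate = rates.max()

for group, rate in rates.items():
    ratio = rate / reference_rate if reference_rate > 0 else float("nan")
    # Four-fifths rule of thumb: a ratio below 0.8 is commonly treated as a
    # red flag warranting closer review, not as conclusive proof of bias.
    flag = "REVIEW" if ratio < 0.8 else "ok"
    print(f"{group}: approval rate {rate:.2%}, ratio vs. highest {ratio:.2f} [{flag}]")
```

A ratio below 0.8 in a sketch like this is a signal for closer review, not a legal conclusion; whether any disparity actually violates the FTC Act, the ECOA, or the FCRA depends on the full factual and legal context.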

More Than Just Words

But the Commission’s blog post has been more than just saber-rattling; it has been accompanied by action.

In Flo Health Inc., the fertility-tracking app Flo Health collected user data about sexual activity, menstrual cycles, mood, and PMS symptoms so that users could “take full control of [their] health.” The app’s customers, however, weren’t the only ones interested in taking control of personal health data. Contrary to its own privacy policy, Flo Health disclosed users’ “App Events” with the word “pregnancy” in the title to third parties including Facebook and Google. In June 2021, the FTC finalized a settlement that requires Flo Health to notify affected users about the disclosure of their health information and to instruct any third party that received users’ health information to destroy that data.

In a similar case, Everalbum, Inc., the photo-sharing app Everalbum promised its users that if they deactivated their accounts, it would promptly delete their photos. In fact, it went back on its word and kept users’ photos to use as training data for its facial recognition system, particularly for the purpose of better identifying the faces of Asian males. According to the terms of the FTC’s settlement with the firm, reached in May 2021, Everalbum must delete not only the photos of users who deactivated their accounts, but also the algorithms it developed using those photos.

On December 10, 2021, the Commission published an advance notice of proposed rulemaking titled “Trade Regulation in Commercial Surveillance.” The purpose of the rulemaking is to “curb lax security practices, limit privacy abuses, and ensure that algorithmic decision-making does not result in unlawful discrimination.”

Protecting the interests of consumers who might otherwise be left behind is a touchstone of Chairwoman Lina Khan’s agenda. The Commission listed racial equity as a top concern in its Draft Strategic Plan for Fiscal Years 2022–2026. Specifically, one of its four objectives in protecting the public from unfair practices is to “Advance racial equity, and all forms of equity, and support underserved and marginalized communities through the FTC’s consumer protection mission.” The FTC further promises to “bring enforcement actions to stop unfair and deceptive practices, including violations of ECOA that disproportionately affect historically underserved and marginalized communities.” How it does so will continue to have broad implications for the many industries that increasingly rely on AI, and for the public.

This blog post is part of SCU’s 2022 AI, Equity, and Law Speaker and Blog Series, curated by Professor Colleen V. Chien and jointly sponsored by SCU’s High Tech Law Institute and Markkula Center for Applied Ethics.
