Is Congress Capable of Legislating Unbiased A.I.?
Perhaps the most telling exchange of Mark Zuckerberg’s 2018 testimony before Congress took place between Zuckerberg and Senator Orrin Hatch.
Hatch: You said back then that Facebook would always be free. Is that still your objective?
Zuckerberg: Senator, yes. There will always be a version of Facebook that is free.
Hatch: Well, if so, how do you sustain a business model in which users don’t pay for your service?
Zuckerberg: Senator, we run ads.
Facebook did not pioneer the business model of offering a free service in exchange for user data to sell ads. Google did, and that model was nearly 20 years old when Senator Hatch asked his question. The exchange calls into question how Congress can be expected to oversee the safe use of technology when its members don't understand how it works. The average age of a member of the House of Representatives is 58, and the average Senator is 62. Their ages harken back to a time when women were human computers, the US hadn't yet landed on the moon, and the first video game console had yet to be invented.
Can Congress Effectively Oversee Technology?
Humans have the potential to use the tools of the Fourth Industrial Revolution for better or for worse, and governments around the world are rushing to put safeguards in place to protect against malicious use of advanced technologies. Consider the following three points in sequence:
- In the last three years, we’ve generated 90 percent of the world’s total data supply.
- These vast accumulations of data fuel advanced technologies such as artificial intelligence.
- AI has the ability to perform tasks independent of human intelligence.
It’s no wonder public policy leaders want to rein in technology’s potential for harm, most urgently by ensuring unbiased AI programming and equitable AI application.
At this point, it’s not a question of whether Congress should be responsible for legislating AI; it’s a question of whether it is capable of doing so at all.
The Technology Sector and Congress Meet Again
The legacy of Congress overseeing technology dates back to 1972, when the Office of Technology Assessment (OTA) was created to provide Congressional members and committees with authoritative assessments of emerging technologies. The OTA assessments gave legislators the insights they needed to shape laws that would ensure technologies were used for good. Although funding for the OTA stopped in 1995, legislation to oversee technology did not.
A year to the day after Zuckerberg’s testimony, on April 10, 2019, Senator Ron Wyden, Senator Cory Booker, and Representative Yvette D. Clarke introduced the Algorithmic Accountability Act. The act came in reaction to revelations of bias in AI, particularly against women and people of color, and would require companies to determine whether their algorithms are discriminatory.
Wyden, Booker, and Clarke weren’t the first policymakers to introduce AI legislation, however. Most government action in this area has happened at the state and local levels. In 2017, the New York City Council became the first government body to pass an algorithmic transparency measure.
That legislation was a first-of-its-kind acknowledgment of AI’s error- and bias-prone potential, along with its threat to accuracy and fairness.
Other cities — including San Francisco, Oakland, and Somerville, Massachusetts — followed suit by banning city departments from using AI-powered facial recognition technology. (To be sure, these cities are in the minority, and AI has already infiltrated the lives of people across the nation.) At the state level, California, Michigan, and Massachusetts are already considering bills to oversee AI. And globally, the EU, OECD, G20, World Economic Forum, and Beijing have all launched programs to guide the ethical development of AI.
While these actions take us closer to where we need to be, they fall woefully short of the protections and accountability needed to ensure that people, not computers, win in the Age of AI.
Moreover, for these actions to be effective (that is, equitable), we need more women and people from diverse backgrounds in the AI field.
Congressional Success in Overseeing AI? Not Yet
In crafting the Algorithmic Accountability Act, Congress demonstrated an understanding that AI isn’t inherently biased. Rather, it’s the data sets and the programmers behind them that often, unintentionally, embed bias into the algorithms. Since AI learns patterns from data, bias compounds the longer it goes unchecked, as the sketch below illustrates.
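To make that mechanism concrete, here is a minimal, hypothetical sketch (the data set, feature names, and numbers are all invented for illustration) of how a model can reproduce historical hiring bias even when gender is never an explicit input:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Invented historical hiring data. "experience" is the legitimate
# signal; "networking" is a proxy feature that, in this hypothetical
# history, correlates with being male.
gender = rng.integers(0, 2, n)                     # 0 = female, 1 = male
experience = rng.normal(5.0, 2.0, n)
networking = (rng.random(n) < np.where(gender == 1, 0.7, 0.3)).astype(float)

# Past hiring decisions were themselves tilted toward men.
hired = (experience + 2.0 * gender + rng.normal(0.0, 1.0, n) > 6.0).astype(int)

# Train WITHOUT the gender column. Bias still leaks in via the proxy.
X = np.column_stack([experience, networking])
model = LogisticRegression().fit(X, hired)

for g, label in [(0, "female"), (1, "male")]:
    rate = model.predict(X[gender == g]).mean()
    print(f"predicted hire rate ({label}): {rate:.2f}")
```

The model never sees gender, yet it recommends men at a higher rate because the proxy feature encodes the bias baked into its training labels. Despite capturing this understanding, the Algorithmic Accountability Act doesn’t go far enough to provide meaningful protections, for the following reasons.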
1. Weak Enforcement
The bill does not prohibit algorithmic bias. It relies on the Federal Trade Commission (F.T.C.) to carry out this task through new enforcement powers, such as requiring companies to “reasonably address in a timely manner the results of the impact assessments…” Yet the F.T.C. has proven itself inept at enforcing settlements with policy violators in the past. Expecting it to equitably enforce new policy is an exercise in futility.
2. No Opportunity for Public Comments
Unlike most impact assessment processes in the US, this bill does not provide the public with opportunities for comment, meaning there’s no channel for gathering public input. With today’s 28-point gender gap in AI, technology organizations already lack the diverse membership needed to assess the social impact of technology. We don’t need more recruiting software that favors men (Amazon) or image recognition algorithms that label black people as gorillas (Google). We need diverse, public input.
3. Lack of Transparency
Lastly, the bill fails to mandate transparency, thus keeping the results of impact assessments secret. How are we supposed to learn from the results and improve technology if we, the public, don’t have access to them?
A More Equitable Way Forward
On the same day (April 10th) as Zuckerberg’s 2018 Congressional hearing, the US recognized Equal Pay Day, which signifies how far into the next year women, in aggregate, must work to earn what men earned in the previous year. Let’s use this calendar coincidence to cue us in a more equitable direction as we consider what’s next for AI oversight. Here are four recommended steps to get us started on our Industry 4.0 journey to equity for all.
1. Gender Mainstreaming in AI
Just as we need to embed gender mainstreaming into fiscal policy, we should also embed it into AI oversight. This means disaggregating AI’s inputs and outputs by gender so we can understand the effects AI has on all members of society and take preventive measures to correct for inequity. Without gender mainstreaming, we lose sight of the potentially negative impacts AI has on different segments of the population.
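As an illustration, disaggregated evaluation can be as simple as the following sketch (the data frame and its values are hypothetical; what matters is the pattern of reporting every metric per group rather than in aggregate):

```python
import pandas as pd

# Hypothetical model decisions joined with demographic data; in a real
# gender-mainstreamed review these rows would come from the live system.
results = pd.DataFrame({
    "gender":     ["F", "F", "F", "F", "M", "M", "M", "M"],
    "prediction": [0, 1, 0, 0, 1, 1, 0, 1],   # the model's decision
    "outcome":    [0, 1, 1, 0, 1, 0, 0, 1],   # what actually happened
})

def group_report(df: pd.DataFrame) -> dict:
    """Approval rate, accuracy, and false-positive rate for one group."""
    negatives = df[df["outcome"] == 0]
    return {
        "approval_rate": df["prediction"].mean(),
        "accuracy": (df["prediction"] == df["outcome"]).mean(),
        "false_positive_rate": negatives["prediction"].mean(),
    }

# Report every headline metric per gender instead of one aggregate
# number that can hide disparate impact between groups.
for gender, group in results.groupby("gender"):
    print(gender, group_report(group))
```

A single aggregate accuracy figure could look healthy even while, as here, one group faces a much higher false-positive rate than another.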
2. Audit AI, Don’t Try to Explain It
We must also push for AI audits instead of AI explanations. Trying to explain AI presents formidable challenges. As Oren Etzioni, CEO of the Allen Institute for Artificial Intelligence, puts it,
“How do we explain conclusions derived from a weighted, nonlinear combination of thousands of inputs, each contributing a microscopic percentage point toward the overall judgment?”
Neutral third-party AI audits provide better oversight of bias than controlled explanations from an algorithm’s creator.
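To sketch what this could look like in practice: an auditor doesn’t need the model’s weights or an explanation of its internals, only its decisions on a test population. The following is a minimal, hypothetical example (the function, its threshold, and the names model, X_audit, and gender are assumptions for illustration, not a standard API) measuring one common fairness criterion, demographic parity:

```python
import numpy as np

def audit_demographic_parity(predict, X, group, threshold=0.1):
    """Black-box audit: compare positive-decision rates across groups.

    `predict` is the system under audit. The auditor calls it with no
    knowledge of its internals -- exactly the position a neutral third
    party is in.
    """
    decisions = np.asarray(predict(X))
    rates = {g: decisions[group == g].mean() for g in np.unique(group)}
    gap = max(rates.values()) - min(rates.values())
    return rates, gap, gap <= threshold

# Hypothetical usage against some opaque model and audit data set:
#   rates, gap, passed = audit_demographic_parity(model.predict, X_audit, gender)
#   print(rates, f"gap={gap:.2f}", "PASS" if passed else "FAIL")
```

The point is that such a check works on any model, however complex, which is exactly why audits scale where explanations do not.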
3. Reinstate the Office of Technology Assessment
Reinstating the OTA would give Congress a dedicated body with deep expertise in advanced technologies to recommend legislative action. It would also ensure that reports are conducted through gender and racial lenses. This would help prevent AI from becoming a mysterious black box of software that assesses a 41-year-old white man and an 18-year-old black woman who commit similar crimes as “low risk” and “high risk,” respectively.
4. Close the AI Gender Gap Once and for All
Finally, we must close the 28-point gender gap in AI. It is not enough to evaluate algorithms after they’re created. We must close the gender equity gap in AI now to prevent and reduce the risk of algorithmic bias from the start.
AI is already being used to make life-changing decisions, such as determining prison sentences, evaluating loan applicants, and predicting patient health outcomes.
We must call on public and private organizations to improve representation and transparency in the development of AI systems.
At this moment in history, we can choose whether or not we program our biases into the future world. The question is, will we?