Machine Learning Bias Is Not a Partisan Issue
On Tuesday, a video of Alexandria Ocasio-Cortez discussing bias in machine learning made its way around the Twitterverse. Specifically, she said that “[Algorithms] always have these racial inequities that get translated, because algorithms are still made by human beings.”
Ocasio-Cortez is one of the first politicians to highlight, on the national stage, the importance of fairness and bias in the development of machine learning algorithms. The challenge of AI governance continues to stump industry and academia alike. With no long-term solutions, a technology that would historically be regulated by the government is instead regulated by peer pressure. For those within the machine learning community, Ocasio-Cortez’s statements may have seemed like an opportunity to engage with lawmakers and shape legislation on an issue that has, and will continue to have, significant societal impacts.
Except her statements weren’t framed as a legislative goal for a bipartisan Congress.
The trending video in which Ocasio-Cortez discusses machine learning bias was shared by Ryan Saavedra, a reporter at the Daily Wire. Saavedra’s past work has been critical of Ocasio-Cortez, with ledes including “Ocasio-Cortez Uses Violent Sexual Term To Describe Her Far-Left Agenda,” “Ocasio-Cortez Claims ‘GOP’ Thinks Her Dancing Was ‘Scandalous,’ Gets Roasted For Lying,” and, around the same time as his tweet, “AOC SNAPS: World Could End In 12 Years, Algorithms Are Racist, Hyper-Success Is Bad.”
This critical perspective carried over into his tweet, which framed the effort to combat machine learning bias as socialist, in a tone that could be described as derisive. Concerns about framing machine learning bias as a “liberal” issue surfaced soon after.
Let’s be clear: Combating bias and ensuring fairness in machine learning is not socialist. It’s not Democratic. It’s not Republican. It is political, because AI governance will likely require legislative action on the part of the federal government and/or governmental agencies.
In other words, it’s not a partisan issue.
We have been down this road before. Climate change used to be a bipartisan issue until the mid-1990s, when industry lobbying succeeded in creating two sides on an issue where there is only one. Scientists on the Environmental Protection Agency’s science advisory board were replaced with industry advisors who downplayed links between air pollution and public health. Under a Republican president and a Republican-led Congress, we’ve seen extensive rollbacks in environmental policy, including the U.S. withdrawal from the Paris Climate Agreement, which have handicapped our abilities to limit the effects of climate change on public health and the environment.
If biased machine learning systems continue to be deployed on the public, the consequences could be similarly catastrophic for at-risk populations. The evidence is already here: biased predictions of recidivism rates, image classifiers incorrectly labeling Black women as gorillas, potentially malicious uses of deepfakes, and the re-identification of individuals from anonymized health data. If biased machine learning is further integrated into our daily lives, marginalized communities will pay the price.
Some may argue that it is only one tweet, shared by one person. Except that’s not always how social media works. 24/7 hyper-connectivity has contributed to generational burnout and a deeply concerning growth in the partisan divide. It is not easy to predict whether an idea on social media will fizzle out or become one of the defining issues of an era.
Right now, we need all sides — developers, ethics experts, theorists, scientists, doctors, historians, Republicans, Democrats, and everyone in between from all geographic and socio-economic groups — to come to the table and begin making decisions on AI governance. These decisions will be difficult, and will reveal the biases that make up every one of us. Adding partisanship into the equation will only ensure that more people suffer in the process.