Algorithms Are Racist and Sexist: The Real Threats of AI, Part II
In Part I, we read about the fears overtaking the tech world, but maybe we should be focusing on other things right now.
AI has the potential to bring about the end of the human race (eventually, someday, maybe). I'm here to tell you that day is nowhere close, but, as I hinted in Part I, there are still many issues that should concern you.
We should be focusing on a different threat of AI: the medium-term risks posed by current technologies. We hear about these far less, perhaps because the headlines aren't as flashy, but they raise real ethical dilemmas that we should address before we hypothesize about losing control of superintelligent programs.
Take, for example, disinformation. ChatGPT is already good enough to write persuasive, factual-sounding responses. If prompted to do so, it could assist in disseminating propaganda and conspiracy theories. Even without malicious intent, language models are prone to hallucination, that is, outputting confident answers that are not rooted in real-world facts. Relying on technology that makes confident mistakes is dangerous, especially when it is integrated into everyday tools like web search.
AI has also demonstrated harmful gender and racial biases. Hiring algorithms in use today are known to discriminate in favor of men. Similarly, there have been cases where financial institutions' creditworthiness models offered women far lower lines of credit than comparable men.
Additionally, predictive algorithms used in the criminal justice system are heavily skewed against people of color, and self-driving cars are less likely to recognize dark-skinned pedestrians as people to avoid. There are countless examples of ethical issues like these. Real, current examples, not just speculation.
The problem is that bias exists in the real-world data these technologies are trained on. The algorithms then amplify that bias by learning that attributes such as race and gender, or proxies for them, are strong predictive factors. The more we deploy algorithms like this without deeply examining the implications, the more short- and medium-term harm AI causes.
With these cases, though, we still have control. People have the power to examine the datasets used to train models, assess the ethical impact of deploying them, and de-bias their programs. In doing so, we begin to understand how much influence over our lives we grant AI and decide what restrictions we want to place on it.
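The kind of examination described above can start very simply: measure whether a model's decisions differ across a protected attribute. Here is a minimal sketch in Python; the hiring decisions, group labels, and the "80% rule" threshold are hypothetical illustrations, not data from any real system:

```python
# Toy fairness audit: compare a model's positive-outcome rate across
# a protected attribute. All data below is invented for illustration.

def selection_rates(records, group_key, outcome_key):
    """Return the positive-outcome rate for each group."""
    totals, positives = {}, {}
    for r in records:
        g = r[group_key]
        totals[g] = totals.get(g, 0) + 1
        positives[g] = positives.get(g, 0) + r[outcome_key]
    return {g: positives[g] / totals[g] for g in totals}

# Hypothetical hiring-model decisions (1 = advanced to interview).
decisions = [
    {"gender": "M", "advanced": 1}, {"gender": "M", "advanced": 1},
    {"gender": "M", "advanced": 1}, {"gender": "M", "advanced": 0},
    {"gender": "F", "advanced": 1}, {"gender": "F", "advanced": 0},
    {"gender": "F", "advanced": 0}, {"gender": "F", "advanced": 0},
]

rates = selection_rates(decisions, "gender", "advanced")
print(rates)  # {'M': 0.75, 'F': 0.25}

# Disparate-impact ratio: lowest group rate divided by highest.
# The common "80% rule" flags ratios below 0.8 for investigation.
ratio = min(rates.values()) / max(rates.values())
print(ratio < 0.8)  # True -> worth investigating
```

A check like this does not prove discrimination on its own, but it is the sort of cheap, routine measurement that lets humans catch skewed outcomes before a model is trusted with real decisions.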
We have much more autonomy than we think over this technology, but action needs to be taken to mitigate the threats to humanity that do currently exist. If we wait too long, we truly might lose control, or at the very least allow biased algorithms to control our thoughts and actions.
Realizing this and making a change right now is far more effective than preparing for the end of the world. It will also leave us better prepared if we ever do reach the frightening levels of advancement that some deem inevitable.
There are real threats of AI, but most of them are already here, not in some apocalyptic future. Our job right now is to become more conscious of the world we live in, before the AI does, at least.