Should Artificial Intelligence become a neutral technology?

Pratheeksha Nair
Jul 17, 2018


William Taylor defines AI as “a programming style where programs operate on data according to rules in order to accomplish goals” [1]. This definition, dating from 1989, does not capture the research that AI scientists have been carrying out more recently. Today, researchers talk about creating machines that behave like humans; recreating human intelligence may well be their end goal. Taylor argues in his book [1] that machines should ideally be designed to take over menial tasks so that humans can focus on more important work, such as designing solutions. He gives the example of computer-aided design (CAD) systems, which act as electronic drafting boards and substitute for T-squares, pencils and drawing boards. These systems, however, do not “aid” the design process at all: designing any solution or model requires a thorough understanding of the problem, a sense of the constraints at hand and a good deal of imagination, none of which such CAD systems possess. This example comes from a primitive era of computing; contemporary research has progressed to computers with genuine design capabilities.

One may say that the goal of AI as a technology must be to create value for society, and that this value can serve as a measure of its success. Often the motivation behind automation and mechanisation is not only to reduce human effort but also to reduce the biases that arise from human conduct. Human factors such as emotion, ego or fatigue can hinder the seamless delivery of services that machines promise. Consider, for example, a hiring officer who has to decide between two candidates, a man and a woman, for an influential position; on learning that the woman has a small child at home, he assumes that she would prefer a part-time schedule and denies her the position. The roots of such unfounded assumptions lie in the social factors that shaped his life. At a time when cutting-edge research is focused on building machines that emulate human intelligence, it is important to ask whether such bigotry, sometimes well intentioned, is also reflected in them.

Studies conducted at Princeton University raise concerns about how AI as a technology may be used, intentionally or unintentionally, to perpetuate the biases that characterise human institutions. An example is the automatic association of “doctor” with men and “nurse” with women, or the association of words like “marriage” and “parenting” with women and “salary” and “profession” with men by natural language processing tools. It is worth asking, however, whether it is necessary to eliminate these complex biases and stereotypes at all. Some scholars believe that such biases need to be treated as part of the language [2].

This study mainly tries to analyse two philosophies. The first claims that AI needs to be an impartial and completely transparent technology: its decision-making must depend entirely on logic and must not reflect any of the biases and preferences exhibited by humans. The second philosophy argues that AI may be expected to behave much like humans and hence to demonstrate some of the biases inherent in human language. The implications of these biases, and the societal dynamics involved in negotiating with them, are discussed.

Domain of study

The lack of a precise and universally accepted definition of AI has allowed research in this field to grow vastly and steadily over the past century [3]. Some of the flourishing research domains of AI include expert systems, robotics, fuzzy logic and natural language processing [4]. An amalgamation of their research outputs has led to AI applications in transportation, healthcare, entertainment, public safety, education and recommendation systems, to name a few. This study explores how the application of AI in prediction systems can lead to bias, keeping in mind the two philosophies mentioned above.

In the scope of this study, a prediction system is defined as a generalised AI engine that analyses patterns and trends in previously executed processes and uses its “intelligence” to predict the future outcomes of similar processes. One example is the system used by the U.S. government in which prisoner details are fed as input and the likelihood of recidivism is produced as output [3].
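To make the idea concrete, here is a minimal sketch of such a prediction system, not the actual software referred to above: a model is fitted on the recorded outcomes of past cases and then scores a new case. The features, numbers and choice of library (scikit-learn) are illustrative assumptions only.

```python
# A minimal, hypothetical sketch of a prediction system: learn from
# historical outcomes, then score a new case. All data here is invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Historical records: each row is a past case described by two made-up
# attributes (e.g. age, number of prior incidents) and the observed outcome.
X_history = np.array([[23, 4], [45, 0], [31, 2], [52, 1], [19, 6], [38, 3]])
y_history = np.array([1, 0, 1, 0, 1, 0])  # 1 = outcome occurred, 0 = it did not

model = LogisticRegression()
model.fit(X_history, y_history)

# "Prediction": extrapolate the historical pattern to a new, unseen case.
new_case = np.array([[27, 3]])
print(model.predict_proba(new_case)[0, 1])  # estimated likelihood of the outcome
```

Whatever patterns (and biases) the historical records contain are exactly what such an engine learns to reproduce.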

Bias in machines and humans

In AI, bias has a mathematical definition as “prior information”, which is considered a prerequisite for intelligence [5]. However, when this prior information is obtained from sources that themselves hold biases (inherent or otherwise), the results are sometimes problematic. Caliskan et al. call such biases “prejudices” and claim that dealing with them requires sound knowledge of society [2]. It is important to understand what sort of biases are being dealt with here. Broadly, they are of three types: inherent, intentional and unpredictable [6]. Each has been illustrated with examples by Tyagarajan in his blog post on “biased bots” [6].

Consider any AI-based assistant like Apple’s Siri, Google Now, Microsoft’s Cortana or Amazon’s Alexa. All of them are women, or rather, use a woman’s voice. Perhaps this is because their developers felt that a woman suits the role of an assistant better than a man, thereby perpetuating the mindset of a once (or perhaps still) patriarchal society [6]. Such biases are unintentional and are hence called inherent: they are deeply embedded in people’s minds without people having to think about them explicitly. This is an example of sexism; inherent bias can even lead to the underrepresentation of entire cultural and racial groups in AI [6].

Image from sciencemag.org

Experiments show that when a company like Facebook filters and controls the sort of content appearing on users’ home pages, their emotions can be manipulated to some extent [6]. For example, Facebook content and ads favouring one political party right before an election can leave a strong impression on the young adults viewing them. There is also a danger of propagating racial stereotypes simply because they reflect the views of some sections of Facebook users. Such biases may be introduced intentionally, either because they align with the preferences of some users or because they sell products better.

An important element of any AI system’s intelligence is learning from the world’s data: almost all AI algorithms are trained on vast amounts of data collected from Internet users around the globe. The Princeton study adapted the Implicit Association Test (IAT) introduced by Greenwald et al. (1998) [7] to demonstrate that AI systems incorporate human biases. In their adaptation, words are represented as numeric vectors learned from how words co-occur in text, and the closer two word vectors are, the stronger the association between the words. Their study showed that the word “programmer” was closer to “man” than to “woman”. Likewise, European American names were closer to pleasant words like “love”, “peace” and “freedom”, whereas African American names were closer to unpleasant words like “filth”, “ugly” and “evil”. Such biases are completely unpredictable.
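The association measure can be sketched in a few lines of code. This is only an illustration of the idea, not the actual test from [2]: the three toy vectors are invented, whereas real embeddings would be learned from large text corpora.

```python
# Toy illustration: words as vectors, cosine similarity as association strength.
import numpy as np

def cosine(u, v):
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

# Invented three-dimensional "embeddings"; real ones have hundreds of dimensions.
embeddings = {
    "programmer": np.array([0.9, 0.1, 0.3]),
    "man":        np.array([0.8, 0.2, 0.4]),
    "woman":      np.array([0.1, 0.9, 0.4]),
}

# A biased embedding places "programmer" measurably nearer "man" than "woman".
print(cosine(embeddings["programmer"], embeddings["man"]))    # higher similarity
print(cosine(embeddings["programmer"], embeddings["woman"]))  # lower similarity
```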

Each of the examples mentioned above is, at bottom, a prediction system. The Facebook feed generator uses an underlying prediction algorithm that scores candidate posts based on the posts a user previously liked and displays the highest-scoring content. AI assistants likewise run prediction algorithms to anticipate user queries and responses and prepare results accordingly. The disturbing fact is that these complex algorithms are trusted to have been designed in a pure, bias-free manner; if the outcomes they predict are not taken with a grain of salt, the results could be disastrous.
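The ranking logic just described can be caricatured as follows. This is a deliberately simplistic sketch with invented topics and counts, not Facebook’s algorithm: posts on topics the user already likes score higher and therefore dominate the feed.

```python
# Toy feed ranker: score candidate posts by how often the user liked the topic
# before, then show the highest-scoring posts first. All values are invented.
liked_topics = {"politics": 5, "sports": 1, "technology": 3}  # past likes per topic

candidate_posts = [
    {"id": 1, "topic": "politics"},
    {"id": 2, "topic": "cooking"},
    {"id": 3, "topic": "technology"},
]

def predicted_score(post):
    # Topics with more past likes get higher scores, so already-favoured
    # content is shown again; this is how a feed narrows over time.
    return liked_topics.get(post["topic"], 0)

feed = sorted(candidate_posts, key=predicted_score, reverse=True)
print([post["id"] for post in feed])  # [1, 3, 2]
```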

Implications of a biased AI

Studies show that the semantics of words used in everyday language reflect regularities in culture, and those regularities may themselves be prejudiced [2]. The processing of these words forms the basis of text-based prediction systems. As the examples above establish, some biases are rooted in language itself, and such prejudices are difficult to address [2]. Joanna Bryson, a computer scientist at the University of Bath, warns that unlike humans, AI algorithms are not equipped to consciously counteract learned biases, since they are not driven by morality, and hence have a greater potential to reinforce them [8].

Different groups, whether defined by demography or geography, may use different word choices and dialects, even on platforms like Facebook. In the biased Facebook prediction systems discussed above, one implication is that the language used by minorities can end up excluded from the training data [12]. Predictive policing systems used by the U.S. government analyse large sets of historical crime data and predict the top crime hotspots, with the aim of reducing the effort spent on crime prevention. In the U.S., this translates into more policing in poorer, non-white neighbourhoods while rich, white neighbourhoods are surveilled less [11]. As a result, more arrests are made in the heavily policed areas, and that same data is fed back into the prediction system for training, creating a self-reinforcing feedback loop. A study at Carnegie Mellon University showed that the prediction system used by Google displayed ads for high-paying jobs more often to men than to women. As a result, more men apply for such jobs and end up getting them, which further adds to data already showing that more men hold these jobs, so the ads are shown to men even more often. This could have been intentional or simply an unintended outcome of the algorithm involved [11]; either way, it keeps highly capable women from applying for jobs whose ads they never see. Even the prisoner recidivism prediction software mentioned earlier can deny parole to prisoners because the algorithm is biased against their racial profile.
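The policing feedback loop can be made tangible with a back-of-the-envelope simulation. Every number below is invented and the allocation rule is a deliberately crude assumption: both neighbourhoods are given the same true offence rate, yet because patrols follow the skewed historical record and recorded arrests follow patrols, the disparity in the data grows on its own.

```python
# Toy simulation of the predictive-policing feedback loop. All numbers invented.
recorded = {"neighbourhood_A": 60, "neighbourhood_B": 40}  # skewed historical arrests
true_offence_rate = 0.3                                    # identical in both areas

for year in range(1, 6):
    hotspot = max(recorded, key=recorded.get)              # the system's "prediction"
    for area in recorded:
        patrols = 80 if area == hotspot else 20            # patrols follow the prediction
        recorded[area] += patrols * true_offence_rate      # arrests follow patrols, not crime
    share_A = recorded["neighbourhood_A"] / sum(recorded.values())
    print(f"year {year}: share of recorded arrests in A = {share_A:.2f}")
```

Even though both areas offend at the same rate in this sketch, neighbourhood A’s share of recorded arrests climbs every year, because the data being learned from records where the police looked, not where crime happened.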

Crawford says that AI reflects the values of its creators, and hence that inclusivity is important [11]. In its absence, machine intelligence will mirror a narrow and biased perception of society. She says it is important to address the implications these biases already have for the less powerful sections of society. Tyagarajan suggests that AI needs to be made more transparent, and that this can be done by including diversity across races, ethnicities, sexes and cultures [13]. By contrast, the Princeton study concludes that eliminating bias is equivalent to eliminating information and cannot be done without ample thought and analysis [2]. An AI system stripped of all biases would have only an incomplete understanding of the world, compromising meaning and accuracy, and would also have trouble adapting to changing societal notions of fairness [2]. On an interesting note, Crawford mentions in her piece “Artificial Intelligence’s White Guy Problem” that the current debate on the threats posed by a super-intelligence involves mostly white male debaters, whereas for marginalised social groups the threat is already here [11].

An unbiased AI

There is existing work on improving the fairness of machine learning algorithms so as to avoid the kinds of biases discussed here [9][10]. In December 2015, Elon Musk and his group started the open-source AI research company OpenAI with the mission of distributing AI as evenly as possible [13]. Perhaps this was a conscious attempt at reducing the biases AI holds by bringing in more training data. However, like any open-source technology, it would still exclude social groups that do not have access to the open-source community, and more often than not these are the very underrepresented communities against whom the biases exist in the first place.
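One concrete criterion from this line of fairness work is the disparate impact measure formalised in [10], which builds on the legal “80% rule”: the rate of favourable outcomes for a protected group should be at least four-fifths of the rate for the other group. The sketch below applies it to made-up counts in the spirit of the job-ads example; the numbers and function name are illustrative only.

```python
# Disparate impact ratio: selection rate of the protected group divided by the
# selection rate of the other group. Values below 0.8 are flagged by the
# "80% rule". The counts here are hypothetical.
def disparate_impact_ratio(favourable_a, total_a, favourable_b, total_b):
    return (favourable_a / total_a) / (favourable_b / total_b)

ratio = disparate_impact_ratio(favourable_a=30, total_a=100,   # e.g. women shown the ad
                               favourable_b=60, total_b=100)   # e.g. men shown the ad
print(f"disparate impact ratio = {ratio:.2f}")                 # 0.50 < 0.8, so flagged
```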

Luke Muehlhauser, the executive director of the Machine Intelligence Research Institute, thinks there is no way to be sure whether current values are the best ones for humanity in the long run [13]. In that case, what good will feeding human biases into these artificial systems bring? The Princeton study mentions that the main advantage of AI systems is that whatever errors or biases are present can be made explicit and hence subjected to correction [2]. It also shows that the biases lie not in particular applications of AI but in the basic representation of knowledge itself. Moreover, who is to correct these prejudices? Society’s understanding of ethics and prejudice changes constantly, and relying on the discretion of one or a few social groups to correct prejudices may not be the most transparent solution.

Image from 3plusinternational.com

As of now, there is no AI that is unbiased, primarily because, as mentioned before, unbiased training data does not exist. Although efforts at creating one are underway, there is still debate over whether there should be one at all. The AI team at Sage Group adds an interesting perspective: AI is giving the world an opportunity to identify and correct its biases, both intentional and unintentional [14]. Their vice president says that an unbiased AI would let technology shift society in the direction of equity and fairness. According to Will Knight [15], AI systems need to be designed to fit social norms, just as society has its sets of acceptable behaviour. An unbiased AI, whether achievable in practice or not, implies fairer prediction systems and equal opportunities for all races, sexes and cultures. Although that is what the big picture looks like, it is important to remember that no amount of transparency can prevent some social groups from being at a disadvantage, whether through lack of access to technology or through poorer education. It also depends on institutions choosing to employ such AI technologies in the first place, which brings us back to the idea that technology is not neutral [16].

Conclusion

In this study, it was not the “neutrality” of AI as a technology that was discussed, but the implications for society of its being biased or unbiased. In artificial intelligence there are two main schools of thought [15]. One concerns machines based on rules and logic, which makes them transparent to examination; the other concerns human-like machines that learn through observation and experience. These fit rather neatly into the two philosophies mentioned at the beginning: an AI that can be explained mathematically can be treated to remove prejudices, while one that is more biologically inspired will imbibe the biases inherent in humans. Whether AI must exhibit biases or not is a philosophical debate and outside the scope of this study. However, as the examples of prediction systems show, there are implications for society either way. On a concluding note, allowing a machine, biased or not, to make decisions for humans has ethical implications that need to be studied and understood in detail before such responsibilities are assigned to it.

References

[1] Taylor, W. A. (1989). What every engineer should know about artificial intelligence. Cambridge Mass.: MIT Press.

[2] Caliskan, Aylin, Joanna J. Bryson, and Arvind Narayanan. “Semantics derived automatically from language corpora contain human-like biases.” Science 356.6334 (2017): 183–186. http://opus.bath.ac.uk/55288/

[3] “One Hundred Year Study on Artificial Intelligence (AI100),” Stanford University, accessed December 1, 2017, https://ai100.stanford.edu.

[4] Point, T. (2017, August 15). Artificial Intelligence Research Areas. Retrieved December 04, 2017, from https://www.tutorialspoint.com

[5] Bishop, C. M. (2006). Pattern Recognition and Machine Learning. Springer, London.

[6] Biased bots: Artificial Intelligence will mirror human prejudices. (2016, August 31). Retrieved December 04, 2017, from https://factordaily.com

[7] Greenwald, A. G., McGhee, D. E., and Schwartz, J. L. (1998). Measuring individual differences in implicit cognition: the implicit association test. Journal of personality and social psychology, 74(6):1464.

[8] Devlin, H. (2017, April 13). AI programs exhibit racial and gender biases, research reveals. Retrieved December 05, 2017, from https://www.theguardian.com

[9] Dwork, C., Hardt, M., Pitassi, T., Reingold, O., and Zemel, R. (2012). Fairness through awareness. In Proceedings of the 3rd Innovations in Theoretical Computer Science Conference, pages 214–226. ACM.

[10] Feldman, M., Friedler, S. A., Moeller, J., Scheidegger, C., and Venkatasubramanian, S. (2015). Certifying and removing disparate impact. In Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 259–268. ACM.

[11] Crawford, K. (2016). Artificial intelligence’s white guy problem. The New York Times.

[12] Bolukbasi, T., Chang, K.-W., Zou, J., Saligrama, V., and Kalai, A. (2016). Man is to computer programmer as woman is to homemaker? debiasing word embeddings. arXiv preprint arXiv:1607.06520.

[13] Biased bots: How do we teach them ‘good values’, and is that even possible? (2016, September 30). Retrieved December 05, 2017, from https://factordaily.com

[14] M. (2017, November 06). Why AI provides a fresh opportunity to neutralize bias. Retrieved December 05, 2017, from http://mashable.com

[15] Knight, W. (2017, May 12). There’s a big problem with AI: even its creators can’t explain how it works. Retrieved December 05, 2017, from https://www.technologyreview.com

[16] Bowers, Chet A. The cultural dimensions of educational computing: Understanding the non-neutrality of technology. Teachers College Press, 1988.
