Reading Reflection 8

Bxteng
RAISE Seminar SP21
May 29, 2021

In this paper, the authors argue that no member of the public is “consciously adopting AI” because it is not a product, but rather a technology embedded into more and more of our everyday lives. On almost every major website, every action we take is run through an AI model for fraud detection. Our text-to-speech software improves over time because machine learning models tailor it to us. Many countries now use facial and fingerprint biometrics backed by neural networks to identify individuals in public. I agree with the paper that, whether we like it or not, the world uses AI and we are subjected to its use.

The authors also make a clear distinction between expert trust and public trust in AI. I would like to compare this to expert versus public trust in food safety. In the food science industry, there may be many professionals who genuinely believe a new food product made with certain chemicals is completely safe for human consumption, even if the public does not believe so. Indeed, scientists and experts generally hold trustworthy opinions when those opinions fall within their domain of work. However, there have been too many times in history when the science was “wrong” or the scientists were mistaken, and the repercussions of those mistakes were unforgivable. Given that the consequences of deploying AI can be just as drastic as allowing certain food products onto the market, I wholeheartedly agree with the paper that more regulations need to be imposed on AI systems to convince other computer scientists and the public that they are safe to use.

The paper quotes that “when one is trusting an abstract system (such as an institution), one is trusting in the system’s structures, i.e. ‘the rules and resources that govern its working and its continuous reproduction in the form of regular social practices.’” This is a wonderful breakdown of the definition of institutional trust, one I did not know I would appreciate, and I find it very accurate. For example, when I say I completely trust the hospital near my house in Singapore, it is because I believe in the hospital’s safe practices that I have witnessed (signification), I believe in the doctors’ good decision making, from which my neighbors and family have benefited (legitimation), and I know that the hospital is subject to strict Singapore government regulations that ensure the safety of all who pass through (domination). It is important to understand what exactly makes up trust, and I am glad the paper had me think about this: trust is an underlying feeling we either have or do not have towards something, yet it is extremely powerful because it dictates all our behavior related to that thing.

Lastly, I want to note that the authors argue the public will not be able to interpret AI documentation well enough to make good decisions about whether to employ it. I can imagine a future where children in schools learn computer science and algorithms alongside their math classes, gaining an understanding of the basics of machine learning and the implications of using certain algorithms. If this were the case, we would still need regulations governing AI, but the public would no longer be ignorant about why certain models were accepted and could decide for themselves whether they approved of those models.
