Artificial Intelligence

Teresa Nanjala Lubano
ND Notes
Published Jun 22, 2023


Its Power and Risks in Design Practice

Image: Pexels/ Peter Knight

Introduction

Intelligence requires mastering six key traits: reasoning, representing and understanding knowledge, planning, learning, communicating effectively, and integrating these skills (Russell & Norvig, 2020). As the name implies, artificial intelligence (AI) is intelligence exhibited by an autonomous machine or a digital application rather than by a human being. By definition, the term AI describes processes that require human cognitive intelligence being carried out by machines as part of a product or production system (Szczepański, 2019; as cited in Blake et al., 2021).

In recent years, AI tools have been applied to a growing variety of disciplines, including the engineering domains (Blake et al., 2021). Platforms that harness the power of design, human infrastructure and AI, such as MidJourney and OpenAI®’s DALL-E 2 (both generative art models), have gained popularity and are stirring debate within creative communities and beyond. Other domain applications include content recommendation, chatbots, image recognition, machine translation, fraud detection, medical diagnosis and autonomous vehicles (Benjamins et al., 2019). But in the wake of these developments, applied AI has also brought serious risks. Through desktop research, this paper attempts to shine a light on different aspects of AI in general, cite examples of AI in design practice and the engineering domains, and highlight some of the ethical concerns AI raises. The author concludes by proposing a ‘Responsible AI by Design’ paradigm as a path forward for future applied AI.

Aspects of AI

AI can be classified as analytical, human-inspired or humanized based on the type of intelligence it exhibits (Table 1 shows the classifications). It can also be organised in stages. Stage 1, Artificial Narrow Intelligence (ANI), covers weak AI such as speech recognition, facial recognition and the route optimizer in Google Maps (Blake, 2021); these systems are normally programmed to carry out a single task. Stage 2, Artificial General Intelligence (AGI), has a higher capability than ANI and is able to reason, plan and solve problems in a broader context (Kaplan & Haenlein, 2019). It has a bigger, broader capacity to mimic human understanding and so remains similar to human intelligence (Ghanchi, 2020). Projected to be fully attained by 2040–2050, examples of AGI-oriented applications in design include conversational chatbots for marketing and customer service and smart assistants such as Zuri by Safaricom® and Alexa by Amazon®. Finally, Stage 3, Artificial Super Intelligence (ASI), also called High-Level Machine Intelligence (HLMI), refers to systems designed to have a level of intelligence much higher than human cognitive performance in almost all domains (Bostrom, 2014). It is plausible that in a few years to come, ASIs will surpass human cognitive intelligence and even be ‘tutored’ to match, if not surpass, human emotional and social intelligence and artistic creativity. Already, generative art AI models are posing a threat to artistic creativity, a coveted human quality linked to artistic and creative aptitude.

Ray Kurzweil, a pioneer in AI, predicted that advancements in AI, genetics and robotics will eventually result in the rise of ASIs (Blake, 2021). It is this level of AI that is of gravest concern: tech experts such as Elon Musk have warned against its evolution, with others predicting that the advancement of AI could spell the end of the human race (What If, 2019).

Table 1. AI classification by the type of intelligence exhibited. (Kaplan & Haenlein, 2019; as cited in Blake et al., 2021)
Images: Examples of AGIs — generative art by DALL-E and MidJourney web banner, Safaricom’s virtual assistant, Zuri and Amazon’s virtual assistant, Alexa. (2023)

Concerns in AI

Although AI will make life much easier, increase efficiencies and create new employment categories, experts are calling for its ethical, security and privacy concerns to be addressed. Some of the ethical issues being raised relate to the undesired consequences of AI. Benjamins et al. (2019) mention, broadly:

That there are unfair biases leading to discrimination (bias, discrimination, predictive parity) (O’Neill, 2016); the lack of explanations for the results of AI systems, i.e. the interpretability of algorithmic conclusions (explainability, the black-box problem) (Samek, Wiegand & Müller, 2017; Guidotti et al., 2018; Pedreschi et al., 2018); transparency of the data used (Gross-Brown, 2015); impact on jobs (Manyika & Sneader, 2018); liability questions (Kingston, 2016); and malicious use of the technology (Pistono & Yampolskiy, 2016).

Closer to home, all Kenyan users of smartphones already interact with AI on a daily basis. Some are unaware that mobile lending platforms rely on machine learning algorithms that analyse the transactions made by users and borrowers to determine credit demand and their risk profiles (Sunday, 2019). The inherent concerns are, one, who has access to this sensitive information and, two, what benefits the app providers enjoy while they hold it. This is a classic example of the security risks posed by fintech AI apps embedded in mobile phones. In line with this, in a recent article, ‘A gendered perspective on use of artificial intelligence in the African fintech ecosystem’, Ahmed (2021) suggests that persistent systemic barriers impact women’s financial inclusion (Barajas et al., 2020) and raises questions about how AI can exacerbate or alleviate current dynamics in inequitable and unfair ecosystems (socio-cultural norms, access to economic opportunities, and the gendered division of labour).
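The kind of transaction-based credit scoring described above can be sketched in a few lines of code. The sketch below is purely illustrative: the feature names, weights and logistic form are invented for this example, and real lending platforms use proprietary, far more elaborate models trained on actual repayment data.

```python
import math

def extract_features(transactions):
    """Summarize a borrower's history of (amount, was_repaid) records."""
    if not transactions:
        return {"repayment_rate": 0.0, "avg_amount": 0.0}
    repaid = sum(1 for _, ok in transactions if ok)
    total = sum(amount for amount, _ in transactions)
    return {
        "repayment_rate": repaid / len(transactions),
        "avg_amount": total / len(transactions),
    }

# Invented weights standing in for a trained model's coefficients.
WEIGHTS = {"repayment_rate": 3.0, "avg_amount": 0.001}
BIAS = -1.5

def default_risk(features):
    """Logistic score in (0, 1); higher means higher estimated default risk."""
    z = BIAS + sum(WEIGHTS[k] * v for k, v in features.items())
    return 1.0 / (1.0 + math.exp(z))  # stronger repayment signal lowers risk

history = [(500, True), (1200, True), (800, False)]
print(round(default_risk(extract_features(history)), 3))
```

The ethical questions in the text map directly onto this sketch: the transaction history is the sensitive data whose custody is at issue, and the choice of features and weights is where bias can silently enter.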

On 21st September 2022, visual media company Getty Images® issued a statement informing its creators that it would cease to accept all submissions created using AI generative models and that prior submissions utilizing such models would be removed (Getty Images, 2022). The company raised concerns over AI-generated art with respect to the copyright of the outputs of these models.

Credits: Email sent to author from Getty Images

These concerns about AI, and specifically the infringement of copyright and intellectual property, have recently gained momentum, sparking debates about AI taking over creative work and depriving creators of an income, for example from the purchase of their authentic creations or from gainful employment. The community is divided on this issue. On one hand, proponents of these new AI tools suggest they can be used as a reference for inspiration, strategy and content development, for building cheap, easy-to-use creative tools, and for creating job opportunities for creatives. On the other hand, critics cite serious social, ethical, security and privacy risks.

OpenAI®’s chatbot ChatGPT and Aristo (both artificial intelligence text generators, or AITGs) have been particularly concerning tools for educational institutions, with lecturers and teachers alleging that students are using the technology to cheat on assignments and exams. However, some professionals and academicians believe that technologies like ChatGPT should be accepted in education and used to supplement rather than replace learning (y Cano et al., 2023).

Several scholars (Gloor & Sotala, 2017; Barrett & Baum, 2016; Bostrom, 2014) have also discussed the catastrophic risks of a “technological singularity” in AI applications; among the possible consequences of creating superintelligence, they include existential risk, often understood mainly as the risk of human extinction.

An ethical AI exemplar

Telefónica, a Spanish multinational telecommunications company, is nonetheless taking responsibility by championing ethical AI. The company published its “Principles of AI” (Telefónica, 2018), advocating fair, transparent and explainable AI, human-centric AI, and privacy and security by design (Benjamins et al., 2019). Importantly, these Principles extend to the company’s work with its partners and third-party suppliers.

Conclusion

As evidenced, AI’s limitless potential cannot be overstated. With the speed of technological development, its engineered smartness will only keep accelerating growth and transforming industries. However, given the myriad unaddressed concerns about AGI and the catastrophic risks posed by artificial super intelligence (ASI), which is expected to be fully realised by 2080 (Blake et al., 2021), responsible AI is necessary. Borrowing a leaf from Telefónica, businesses, institutions and governments (and by extension their third parties) should consider and deploy prudent, collaborative AI policies and regulatory frameworks to mitigate the risks AI poses. This is needed not only to safeguard fairness, data protection, job opportunities, gender parity, security, privacy and ethical user well-being, but also to preserve our species from potential extinction.

Notes:

This paper was first written on the 30th of October 2022, by Teresa Lubano for her Masters of Art in Design coursework, Contemporary Design Issues, at the University of Nairobi, Department of Art and Design.

Edits were last made on June 23, 2023.

References

Ahmed, S. (2021). A Gender perspective on the use of Artificial Intelligence in the African FinTech Ecosystem: Case studies from South Africa, Kenya, Nigeria, and Ghana (Paper). International Telecommunications Society (ITS) 23rd Biennial Conference — Digital societies and industrial transformations: Policies, markets, and technologies in a post-Covid world, June 21–23, 2021

Barrett, A., Baum, S. (2016 May). A model of pathways to artificial superintelligence catastrophe for risk and decision analysis. Journal of Experimental and Theoretical Artificial Intelligence, 29 (7). https://doi.org/10.1080/0952813X.2016.1186228

Benjamins, R., Barbado, A., Daniel, S. (2019 September). Responsible AI by Design in Practice (paper). Proceedings of the Human-Centered AI: Trustworthiness of AI Models & Data (HAI) track at AAAI. Fall Symposium, DC, November 7–9, 2019. Retrieved on October 20, 2022 from https://arxiv.org/pdf/1909.12838.pdf

Blake, R. W., Mathew, R., George, A., Papakostas, N. (2021). Impact of Artificial Intelligence on Engineering: Past, Present and Future. 54th CIRP Conference on Manufacturing Systems. Procedia CIRP 104 (2021), 1728–1733. Retrieved from https://www.sciencedirect.com/

Bostrom N. Superintelligence: Paths, dangers, strategies. Oxford: Oxford University Press; 2014.

Getty Images. (2022, September 21). AI generated content (Newsletter). Retrieved (online version) from https://app.engage.gettyimages.com/e/es?s=1591793372&e=11409527&elqTrackId=178baed6532740e59ffea78497faafe5&elq=db54c3ee9fdd4840bb11b74565c6e5c7&elqaid=51367&elqat=1&elqcst=272&elqcsid=25098

Ghanchi, J. (2020 October 9). What is artificial superintelligence? How is it different from artificial general intelligence? Retrieved from https://itchronicles.com/artificial-intelligence/

Kaplan A, Haenlein M. (2019). Siri, Siri, in my hand: Who’s the fairest in the land? On the interpretations, illustrations, and implications of artificial intelligence. Bus Horiz 2019;62: pp. 15–25.

Russell S, Norvig P. Artificial Intelligence: A modern approach. 4th ed. Prentice Hall; 2020.

Szczepański M. (2019). Briefing EPRS | Economic impacts of artificial intelligence (AI).

What If. (2019, September 19). What if we created Superintelligence? YouTube. Retrieved from https://www.youtube.com/watch?v=qq5Mn2N0bQI&t=2s

Gloor, L., Sotala, K. (2017). Superintelligence as a cause or cure for risks of astronomical suffering. International Journal for Computing and Informatics. Informatica 41, pp. 389–400

Sunday, F. (2019, April 3). Artificial intelligence can sort out most of Africa’s problems (article). Standard Media. https://www.standardmedia.co.ke/business/article/2001322174/artificial-intelligence-can-sort-out-most-of-africa-s-problems

y Cano, Y.M., Venuti, F., Martinez, R.H. (2023, February 1). ChatGPT and AI Text Generators: Should Academia Adapt or Resist? Harvard Business Publishing. https://hbsp.harvard.edu/inspiring-minds/chatgpt-and-ai-text-generators-should-academia-adapt-or-resist


Founder, Creative Director Nanjala Design & Shop Nanjala™ My interests lie at the intersection of design, nature, tech & sustainability. teresa.lubano@gmail.com