ARTIFICIAL INTELLIGENCE IN HIGHER EDUCATION series
PART 4: MORAL DESKILLING
By Emanuel Țundrea, Ph.D. in Software Engineering, Emanuel University of Oradea, 19th October, 2020
Originally published in the proceedings of the International Technology, Education and Development Conference at https://bit.ly/3fsD73D
Machine automation generates phenomenal efficiency. Every industry today achieves greater productivity with far less labor than at any time in human history. Two centuries ago, over 60% of human labor in the EU was devoted to producing food; today, agriculture accounts for only about 4.2% of total employment in the EU. This efficiency has a downside, however: in a crisis (take the sudden lockdown of entire cities during the Covid-19 pandemic), people are thrown into situations they are not prepared for, without the resources and skills to survive. AI is an amplifier and an accelerant of this dynamic, especially for higher-ed institutions.
Moral deskilling signals a threat that follows from the previous issue of assigning moral agency to a machine (see the discussion in this post): the more stakeholders in higher education allow machines to make decisions on their behalf, the less practiced they become at moral reasoning themselves. As AI machines are trained to make decisions, there is a risk that the community loses its moral acuity: its ability to think through what the decisions should be when new challenges emerge.
A comment made by C.S. Lewis back in 1943 in The Abolition of Man seems more relevant than ever: “what we call Man’s power over Nature turns out to be a power exercised by some men over other men with Nature as its instrument”. He goes on to underline that the power of science carries one of the highest risks of being used against our fellows. This post is not about evil done through commission, but about evil done through omission. This happens when we abdicate our calling to control the machines and their operating algorithms, when we are no longer part of the checks and balances on intelligent machine decisions, or when we become dependent on them.
Here is an interesting example: London South Bank University is an institution that accepts students from increasingly diverse backgrounds. One of its research teams created an AI that predicts the likelihood of a student dropping out of their program. The data is gathered via “the virtual learning environment (VLE): the idea is that the university can tell not just how often they log onto the VLE, but where on campus they do it, what they look at and do while logged on, and can also track which other students they engage with. There is an additional element of text analysis to understand students’ individual psycholinguistic profiles and further analysis of their data gathered from what they make publicly available via their activity on online social networks such as Facebook.”
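The kind of early-warning system described above can be pictured as a simple probabilistic classifier over engagement signals. Below is a minimal sketch in Python; the feature names, weights, and the choice of a logistic model are illustrative assumptions, not details of the actual LSBU system.

```python
# Illustrative sketch of a dropout-risk predictor of the kind described
# above. Feature names, weights, and the risk threshold are hypothetical
# assumptions for illustration, not details of the LSBU system.
import math

# Hypothetical weights a trained model might assign to VLE engagement
# features (negative weight = more engagement lowers predicted risk).
WEIGHTS = {
    "logins_per_week": -0.6,
    "avg_session_minutes": -0.02,
    "forum_posts": -0.3,
    "peers_interacted_with": -0.15,
}
BIAS = 3.0  # baseline log-odds of dropping out

def dropout_risk(features: dict) -> float:
    """Return the predicted probability (0..1) that a student drops out."""
    z = BIAS + sum(WEIGHTS[k] * features.get(k, 0.0) for k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))  # logistic function

engaged = {"logins_per_week": 5, "avg_session_minutes": 40,
           "forum_posts": 3, "peers_interacted_with": 6}
disengaged = {"logins_per_week": 0, "avg_session_minutes": 5,
              "forum_posts": 0, "peers_interacted_with": 0}

print(round(dropout_risk(engaged), 3))
print(round(dropout_risk(disengaged), 3))
```

Even this toy version makes the ethical questions below concrete: someone must choose the features, set the threshold at which a student is flagged, and decide who sees the score.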
This raises several ethical issues about data privacy addressed in this post (what data is gathered, who defines the analytics, and who gets to see it? How comprehensive and intrusive should data collection be?), but it also brings new ethical challenges: should a tutor tell a student that an intelligent system applied to his data predicts he is at high risk of dropping out? How will the AI algorithm evolve if human oversight diminishes or disappears altogether? Should the final decision about admission, grading, awarding a scholarship, or giving a student a second chance be made by a human as the highest authority? If these decisions are outsourced to a machine, will this generate a moral deskilling of the recruitment offices?
Therefore, this post argues that artificially intelligent agents should never be fully autonomous. To avoid the risk of moral deskilling, university boards should not only instruct the AI agent but also supervise it and retain the ability to intervene in every decision.
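The human-ownership principle argued for here can be made concrete in software: the AI agent produces only recommendations, and the data model makes it impossible to record a final decision without a named human reviewer. The sketch below is a hypothetical illustration of that pattern, not an existing system; all names are assumptions.

```python
# Minimal sketch of a human-in-the-loop policy: the AI agent may
# recommend, but never finalize, a decision. All names here are
# hypothetical illustrations of the pattern, not a real system.
from dataclasses import dataclass

@dataclass
class Recommendation:
    student_id: str
    action: str          # e.g. "offer_scholarship", "flag_at_risk"
    confidence: float    # model confidence in [0, 1]

@dataclass
class Decision:
    recommendation: Recommendation
    approved: bool
    reviewer: str        # every final decision records a human owner

def finalize(rec: Recommendation, reviewer: str, approved: bool) -> Decision:
    """The only path to a final decision runs through a named human."""
    return Decision(recommendation=rec, approved=approved, reviewer=reviewer)

# The machine recommends; the tutor reviews and, here, overrides it.
rec = Recommendation("S1042", "flag_at_risk", 0.91)
decision = finalize(rec, reviewer="tutor_jones", approved=False)
print(decision.reviewer, decision.approved)
```

The design choice matters: because `Decision` cannot be constructed without a `reviewer`, the override and its human owner are part of the record, which is exactly the checks-and-balances role this post argues we must not abdicate.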
Moral deskilling is also a risk because delegating responsibility to AI agents is comfortable for decision makers. As the quality of AI agents increases, we may reach a point where people feel comfortable using them extensively, trust them more, feel less guilty about not questioning them, and perhaps even believe that this is the best way we can serve the students.
Technically, an airplane can be operated by its complex autopilot system alone. What will happen if, over time, pilots lose the skill of landing because they have relied only on AI systems to fly the airplane? Similarly, we need to intentionally resist the temptation to delegate decision making to AI machines, letting them do only the tedious part of our work. After all, we have the high calling to equip and train young people, not objects, and they deserve our full consecration.
Food for thought:
- Have you experienced deskilling in your own handiwork since you adopted more technology in your home?
- How do you feel about the author’s statement that “delegating responsibility to the AI agents is comfortable for us”?
- What do we learn from the example of airline pilots who intentionally elect not to use the autopilot during take-off or landing?
- Do you agree that your work in higher-ed is a high calling and that students need you as the highest authority on every decision concerning their wellbeing and character formation?
- What actions will you take to ensure that the AI-equipped technologies in your university or organization are shaped with human-ownership principles at their core, so that we avoid moral deskilling?