ARTIFICIAL INTELLIGENCE IN HIGHER EDUCATION series
Part 3: Moral Agency
By Emanuel Țundrea, Ph.D. in Software Engineering, Emanuel University of Oradea, September 14, 2020
Initially published in the proceedings of the International Technology, Education and Development Conference at https://bit.ly/3fsD73D
The previous post, on Data Privacy, drew attention to the opportunity to use AI for student profiling, whether to guide students along their educational path or to assist them in learning complex subjects. This post raises two questions: how should an AI assistant be equipped with the capability to make decisions, and how do we hold its creators accountable for those decisions?
This topic is of great interest in many other domains: sentencing in the courts, where algorithms estimate the likelihood that somebody will reoffend based on their profile; caring for the elderly; developing self-driving vehicles; or even fully autonomous weapons that could seek and destroy targets without any human intervention. These issues are closer to us than they seem. What if you were in an unavoidable car accident and your car's computer prioritized your safety over that of a small child in the street? See the discussion about the ethics of Mercedes' future autonomous cars.
Coming back to higher education: should a tutor tell a student that an intelligent system could analyze their academic performance data to decide whether they are accepted into a program, or whether they receive a scholarship? Should the final decision on grading an essay or a project be outsourced to a machine? Wherever there is a decision to be made, or even a recommendation to be given, moral responsibility is involved. The question, therefore, is whether educators should assign moral agency to a robot, so that the decision is no longer theirs but the machine's. The basis on which an assistant robot makes its decisions must be transparent and carefully defined.
If an AI tutor is equipped with the capability to make decisions, then it goes beyond merely assisting the student or the professor: it becomes an influencer. The potential for algorithmic bias in machine learning must therefore be subject to ethical guidelines, and any design that involves AI has to be shaped with ethics in mind.
One possible way to define ethical AI tools is proposed by the MIT Media Lab through its "Moral Machine" project:
“providing a platform for 1) building a crowd-sourced picture of human opinion on how machines should make decisions when faced with moral dilemmas, and 2) crowd-sourcing assembly and discussion of potential scenarios of moral consequence.”
Such a strategy means calling on the whole "school community" — everyone interested in the welfare and success of education (administrators, teachers, staff members, students, parents, local business leaders, elected officials, …) — to offer their perspective on the moral decisions made by machine intelligence assistants. This helps shape not just curriculum recommendations, but the whole scope of AI assistantship.
I would suggest that an even more promising strategy is suggested by a recent study from the University of Notre Dame's sociology department, which shows that faith-based communities have a deeper commitment to acting ethically, as well as to caring for the environment, participating in politics, and addressing injustice in the workplace (including education). Since we now live in a multicultural environment, we should invite everyone to engage in making policies that secure the great benefits of these new technologies while avoiding the grave risks of delegating moral agency to a machine.
Food for thought:
- If you were to define morals and ethics for an AI-powered tutor, how would you do it? What would be the benchmark?
- Do you regularly challenge yourself to go outside your comfort zone by speaking with others who hold different belief and moral systems?
- Would you be willing to serve and engage in policymaking to ensure that the built-in decision-making processes of our AI assistants are transparent and ethical?