ARTIFICIAL INTELLIGENCE IN HIGHER EDUCATION series
Part 2: DATA PRIVACY CONCERNS
By Emanuel Țundrea, Ph.D. in Software Engineering, Emanuel University of Oradea, 13 August 2020
Initially published in the proceedings of the International Technology, Education and Development Conference at https://bit.ly/3fsD73D
The 1950s American bank robber Willie Sutton was once asked why he robbed banks. His answer: “That’s where the money is.” Today, your data is worth a lot, and threat follows value.
Cybersecurity specialists have long told us that it is “unsafe to turn on our computer”. To paraphrase them, we can now assert that it is unsafe to browse the Internet while ignoring our privacy on it.
How high are the stakes?
When media outlets like the NY Times or The Guardian are alarmed at realizing how much of their readers’ data is harvested, we raise an eyebrow. Admittedly, we are a bit desensitized to their warnings, since journalists tend toward alarmist writing to grow their readership. However, when governments push the panic button, we realize that data privacy is serious business.
After years of discussions over the new challenges that globalization and rapid technological development pose to the protection of personal data, the European Union implemented the GDPR only two years ago (25 May 2018). Yet just last month the EU Court of Justice invalidated the adequacy decision underpinning the EU-US Privacy Shield, and negotiations have restarted for a new agreement.
Moreover, as Artificial Intelligence comes into play, the scale of data collection and profiling suddenly increases significantly.
AI ethics in Higher Education
This post is not meant to scare the reader; its goal is to address data privacy concerns and to look at what we can do with AI, specifically in education, both now and in the likely future.
AI ethics has been on the radar of many committees and professional bodies, especially in countries whose history is deeply influenced by the Judeo-Christian worldview. They have been thinking about these topics for several years and have produced standards, certification programs, and recommendations: IEEE², the European Parliament³, and IFES⁴. They all underline the desire to unite and resource AI architects, technologists, educators, and policymakers with ethical thinking that holds human and societal well-being as its core value. Data privacy is at the heart of this effort.
As the opening paragraphs stressed, loss of privacy is a major challenge today, and it most often arises under what seems like a good cause. Privacy is a person’s right and authority to seclude themselves in order to limit the influence others can have on their behavior.
In higher education, profiling students may benefit them by guiding their specialization and speeding up their career path, but it can also hurt them by steering their formation manipulatively in directions that do not have the student’s interest as a primary objective. Student profiling may lead to more customized service, but it may also erode what has traditionally been recognized as a prerequisite for the exercise of human rights, such as freedom of expression and freedom of choice.
“What AI brings to the table is the ability to gather, analyze, and combine vast quantities of data from different sources, thus increasing the information-gathering capabilities of social actors that use this technology. The potential impact of AI on privacy is immense, which is why it is imperative to raise awareness about these issues.”⁵
All organizations committed to ethical standards stress the importance of data protection and data privacy.
These privacy concerns are aggravated by AI’s capability to scale and automate everything at high speed, and by the fact that young people in particular (the majority of students) are neither aware of nor interested in how much data their devices give away for free.⁶
In this new context, people become vulnerable to data exploitation through the identification, tracking, and monitoring of their entire lives, not just their education path.
“This means that even if your personal data is anonymized once it becomes a part of a large data set, an AI engine can de-anonymize this data based on inferences from other devices. AI can utilize sophisticated machine learning algorithms to infer or predict sensitive information from non-sensitive forms of data. For instance, someone’s keyboard typing patterns can be utilized to deduce their emotional states such as nervousness, confidence, sadness, and anxiety. Even more alarming, a person’s political views, ethnic identity, sexual orientation, and even overall health can also be determined from data such as activity logs, location data, and similar metrics.”⁵
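The linkage mechanism behind such de-anonymization can be illustrated with a toy sketch in Python. All data, field names, and the `reidentify` helper below are hypothetical; real attacks use far richer auxiliary data and statistical inference, but the core idea is the same: “anonymized” records that retain quasi-identifiers can be joined against a public dataset.

```python
# Toy sketch of de-anonymization by linkage (all data is hypothetical).
# The "anonymized" dataset drops names but keeps quasi-identifiers
# (ZIP code, birth year, gender); a public dataset maps those same
# quasi-identifiers back to identities.

anonymized = [
    {"zip": "410087", "birth_year": 1999, "gender": "F", "grade_avg": 9.2},
    {"zip": "410087", "birth_year": 2000, "gender": "M", "grade_avg": 7.8},
]

public = [
    {"name": "Ana Pop", "zip": "410087", "birth_year": 1999, "gender": "F"},
    {"name": "Dan Ionescu", "zip": "410087", "birth_year": 2000, "gender": "M"},
]

QUASI_IDS = ("zip", "birth_year", "gender")

def reidentify(anon_rows, public_rows):
    """Link anonymized rows to names via shared quasi-identifiers."""
    matches = []
    for anon in anon_rows:
        key = tuple(anon[k] for k in QUASI_IDS)
        candidates = [p["name"] for p in public_rows
                      if tuple(p[k] for k in QUASI_IDS) == key]
        if len(candidates) == 1:  # a unique match means re-identification
            matches.append((candidates[0], anon["grade_avg"]))
    return matches

print(reidentify(anonymized, public))
```

Here each quasi-identifier combination is unique, so both “anonymous” grade records are tied back to named individuals; in practice, only a handful of such attributes is often enough to single out most of a population.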
Therefore, higher-education institutions must lead the way against this dangerous trend, provide a safe and trusted environment for their stakeholders, and educate and influence all social actors toward an ethical use of AI in data science.
Food for thought:
- as an end-user whose contributions make up a great part of what keeps the Internet going, are you giving at least a little attention to how, and with whom, you share your data?
- as a higher-education institution decision-maker, did you make sure you have clear policies and enough checks and balances in place for the way your community data is used?
- are you proactive in educating your young people on how to avoid being vulnerable online?
- are you intentional about showcasing your AI algorithms’ transparency to your faculty and students?
“It takes a lifetime to build a good reputation, but you can lose it in a minute.” (Will Rogers)
¹ Jonathan A. Obar, Anne Oeldorf-Hirsch, “The Biggest Lie on the Internet: Ignoring the Privacy Policies and Terms of Service Policies of Social Networking Services,” Information, Communication & Society, pp. 1–20, 18 Aug 2018.
² IEEE Standards Association, Ethically Aligned Design — A Vision for Prioritizing Human Well-being with Autonomous and Intelligent Systems, Creative Commons, 2019.
³ European Parliament, “EU Guidelines on Ethics in Artificial Intelligence: Context and Implementation,” 2019.
⁴ IFES Graduate Impact, “Artificial Intelligence — Ethics & Challenges,” Oxford, UK, 2018.
⁵ M. Deane, “AI and the Future of Privacy,” 5 September 2018.
⁶ D. Curran, “Are you ready? Here is all the data Facebook and Google have on you,” The Guardian, 30 March 2018.