Source: Gerd Leonhard/Flickr Creative Commons

Artificial Intelligence and education: moving beyond the hype

Jelmer Evers
6 min read · May 23, 2018

We are living in an age of disruption, and one of the drivers of that change is a development that many people call Artificial Intelligence. It is a very broad heading onto which many people seem to pin their hopes and fears. AI has been portrayed as an existential threat to humanity, or as just an Excel sheet on steroids. Some, like Ray Kurzweil, point towards the Singularity, the moment we reach real Artificial Intelligence. Just as tidal forces become infinite at a gravitational singularity, this Singularity would be an event horizon: a point of no return, a place and time beyond which we cannot see. A Big Bang of a new kind of intelligence.

The truth seems to be more nuanced. As we have argued in our book ‘Teaching in the Fourth Industrial Revolution: Standing at the Precipice’ (Doucet and Evers 2018), AI does and will have a major impact on how we teach, live, work and learn, but it will not be the end of the world. It will therefore have an impact on education from the students’ point of view, as learners, as well as from the teachers’ point of view, as professionals, with both interacting with these new systems. That holds even if the major impact is only felt as a ‘jobs will disappear’ hype and scare that allows corporations and the 1% to disrupt our societies even further.

What is AI?

So what is Artificial Intelligence? According to Wikipedia, AI is: “The theory and development of computer systems able to perform tasks normally requiring human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages.”

At the core of Artificial Intelligence are algorithms and data. In that sense AI is already here and has been here for a while. “Learning algorithms work on the basis that strategies, algorithms, and inferences that worked well in the past are likely to continue working well in the future.” (Wikipedia)
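To make that idea concrete, here is a minimal sketch in Python (written for this post, not taken from any of the sources): a one-nearest-neighbour learner that simply assumes whatever held for the most similar past case will hold for the new one. The study/sleep numbers are invented purely for illustration.

```python
# A tiny 1-nearest-neighbour "learner": predict the future by copying
# the outcome of the most similar past case. The data is invented.

past_cases = [
    # (hours studied, hours slept) -> exam result
    ((1.0, 4.0), "fail"),
    ((2.0, 5.0), "fail"),
    ((6.0, 7.0), "pass"),
    ((8.0, 8.0), "pass"),
]

def distance(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def predict(new_case):
    # Assume what held for the closest past case will hold again.
    closest = min(past_cases, key=lambda case: distance(case[0], new_case))
    return closest[1]

print(predict((7.0, 6.5)))  # -> "pass", because it resembles the past passes
```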

According to AI expert Toby Walsh, we can identify three general types of AI: weak AI, strong AI and Artificial General Intelligence:

· Weak AI: one possible end point for AI is to build a machine that equals or exceeds our capabilities at a particular task requiring intelligence. This is sometimes called weak AI.

· Beyond this is strong AI. One of the louder and more eloquent critics of AI, the philosopher John Searle, came up with this concept. It is the idea that thinking machines will eventually be minds, or at least that they will have all the hallmarks of minds, such as consciousness. Other human traits that might be relevant to strong AI are self-awareness, sentience, emotion and morality.

· A slightly less extreme end point than strong AI is Artificial General Intelligence, or AGI. This is the goal of building machines with the ability to work on any problem that humans can, at or above the level of humans. (Walsh 2018)

Walsh again: ‘we do not need strong AI to get almost all the benefits of thinking machines. We just need machines that perform as well as humans. They don’t actually have to have minds. Indeed, if they do not have minds, we avoid a number of ethical problems — such as whether they have rights, or whether we are allowed to switch them off.’

Idiot Savant

Challenges with artificial intelligence lie in domains such as knowledge, reasoning, problem solving, perception, learning, planning and even the ability to manipulate objects. AI and robots have become very good at very specific, separate tasks, such as playing Go and chess, but these are narrow, well-defined domain problems. As Vivek Wadhwa, who calls AI Excel sheets on steroids, says of the most advanced AI, deep learning programmes: “before the networks battled, they received a lot of coaching. And, more important, their problems and outcomes were well defined.” Even machine learning, with its neural networks, will not move beyond that in the near future. At the moment AI is an idiot savant: it does something very specific a lot better than humans do.
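As a hedged illustration of that narrowness (a toy example written for this post, not anything Wadhwa or Walsh describe): the “classifier” below does one well-defined thing, scoring messages against a fixed word list, and it is meaningless outside that domain.

```python
# An "idiot savant" in miniature: competent at one narrow, well-defined task
# (spotting spam-like words), with no notion of anything else. The word list
# and messages are invented.

spam_words = {"winner", "prize", "free", "click"}

def spam_score(message):
    words = message.lower().split()
    return sum(word in spam_words for word in words) / max(len(words), 1)

# Inside its narrow domain it does its job...
print(spam_score("click here to claim your free prize"))    # high score
print(spam_score("see you at the staff meeting tomorrow"))  # low score

# ...but asked about anything else - an essay, a chess position, a photo -
# it still just counts "spam words", because that is all it can do.
print(spam_score("a thoughtful essay on the french revolution"))
```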

Black Box

What these neural networks do is called deep learning: the machines start learning for themselves, hence machine learning. The problem is that the original human creators don’t really know how the programme did what it did. This raises many ethical issues. How do we know the conclusion is good, or the right thing to do? How can a defendant, student or loan applicant object to a decision reached this way? There is no appeal against such a verdict, because the decision itself is an enigma. We have to trust that the algorithm is just and doing the right thing.
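Here is a hedged sketch of why such a verdict is so hard to appeal: the weights below merely stand in for parameters a trained network might have learned (the numbers are invented), and the resulting score carries no human-readable reason.

```python
# A black-box "decision" in miniature: a weighted sum squashed to a score
# between 0 and 1. The weights are invented stand-ins for learned parameters;
# none of them maps to a rule a human could explain or contest.

import math

weights = [0.73, -1.42, 2.05, -0.38, 1.19]  # "learned" parameters

def decide(features):
    z = sum(w * x for w, x in zip(weights, features))
    return 1 / (1 + math.exp(-z))  # probability-like score

score = decide([0.9, 1.0, 0.1, 0.7, 0.1])
print("approved" if score > 0.5 else "rejected", round(score, 2))

# If the applicant asks *why* they were rejected, the only honest answer is
# "because of these numbers" - there is no human-readable rule to appeal to.
```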

And this is not speculation about the future; it is happening right now. In Weapons of Math Destruction (WMD), Cathy O’Neil records many such cases. The value-added models used to evaluate teachers in Washington, D.C. worked like this; although they were not based on machine learning, the same arguments applied: “Verdicts from WMDs land like dictates from the algorithmic gods. The model itself is a black box, its contents a fiercely guarded corporate secret.” In Europe, the General Data Protection Regulation (GDPR) has started to regulate this: someone affected by an algorithm needs to be able to appeal a decision, and it can’t be based on a black box.

Bias

A related issue is the bias inherent in algorithms and data, which can also be summarized as garbage in, garbage out. O’Neil states: “models, despite their reputation for impartiality, reflect goals and ideology. (…) Our own values and desires influence our choices, from the data we choose to collect to the questions we ask. Models are opinions embedded in mathematics.” Human bias can be introduced into algorithms through the weighting of certain variables, which involves values and human judgement. The data itself can be faulty as well: the sample can be too small, certain groups can be over-represented, or the data can be collected selectively.
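A minimal sketch of that garbage-in/garbage-out mechanism (the hiring “data” below is invented and deliberately skewed): a model that simply imitates past frequencies turns the bias in its sample into policy.

```python
# Garbage in, garbage out: a "model" that learns by imitating historical
# frequencies reproduces whatever skew the training sample contains.
# The data below is invented and deliberately biased.

from collections import Counter

training_data = [
    ("group_a", "hired"), ("group_a", "hired"), ("group_a", "hired"),
    ("group_a", "rejected"),
    ("group_b", "rejected"), ("group_b", "rejected"), ("group_b", "rejected"),
    ("group_b", "hired"),
]

def hire_rate(group):
    outcomes = Counter(label for g, label in training_data if g == group)
    return outcomes["hired"] / sum(outcomes.values())

def predict(group):
    # The historical skew becomes the model's rule.
    return "hire" if hire_rate(group) > 0.5 else "reject"

print(predict("group_a"))  # hire
print(predict("group_b"))  # reject: the bias in the data is now the decision
```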

A notorious example of bias was Microsoft’s experiment with the chatbot Tay, based on machine learning, which after one day started spouting racist and misogynistic language on Twitter. Cathy O’Neil writes: “Another common feature of WMDs. They tend to punish the poor. This is, in part, because they are engineered to evaluate large numbers of people. They specialize in bulk, and they’re cheap. That’s part of their appeal.” These real-life examples pose serious challenges for using AI in a fair and equitable way. In a world that is increasingly unequal, AI seems poised to increase inequality.

Going forward

AI is extremely good at specific tasks, but we are still a long way from Artificial General Intelligence (AGI), if we ever reach it, and that certainly goes for strong AI. The fact remains, though, that AI does specific things really well and that progress towards AGI will be made. Toby Walsh again: “Despite all these arguments against the technological singularity, I strongly believe we will arrive at thinking machines with human and even superhuman levels of intelligence on certain tasks. I see no fundamental reason why we cannot one day build machines that match and eventually exceed our intelligence. However, I am very doubtful that the route to superhuman levels of intelligence will be an easy one.”

It will probably take a very long time. AI is not going to replace teachers, and where it does in this day and age, it will be a very poor proxy for a real education. Going forward we need to be aware of the inherent limitations of what AI is and of the very human challenges of using algorithms and big data. They are human inventions, embedded in political, economic and social contexts, and they come with their own biases and ideologies. AI can definitely augment our profession and help us become better teachers, but as teachers and students we need to be aware of the context in which this change is playing out. We need to understand it and use it where it will benefit us all.

Doucet, A. et al., 2018. Teaching in the Fourth Industrial Revolution, Routledge.

O’Neil, C., 2016. Weapons of Math Destruction, Broadway Books.

Walsh, T., 2018. Machines That Think, Prometheus Books.

