Photo by Frank Cone from Pexels

Adaptive learning, AI in teaching and explainable AI

Peter Thomas
9 min read · Sep 13, 2021

Is AI the new technology frontier for learning and teaching?

Written while founding director of HaileyburyX.

Education, or at least the education that happens in K-12 schools, has historically been slow to adopt new technologies.

But perhaps that’s changing.

As reported in the latest ACER Australian report based on the OECD TALIS survey, the use of technology in the classroom increased by an average of 16 percentage points in the five years to 2018.

And the pandemic has changed the adoption curve: the success of many schools in pivoting from classroom to Zoom meeting has been impressive.

But, pandemic or not, the reality is that adopting new technologies in teaching faces hurdles: technology can affect lesson flow and be disruptive; training, and preparing lessons that include new technologies, can be time-consuming; teachers have widely varying levels of experience and confidence; and teaching situations are so many and varied that establishing common guidelines and a community of practice is difficult. Effective support is often patchy: research by Growing Up in Australia found that 30% of teachers felt technical support for educational technology was inadequate, as was training on integrating technology into classroom instruction.

Set against this picture, it’s not surprising that some of the latest technological innovations have not significantly impacted the day-to-day business of learning and teaching.

Even though more than 80 per cent of teachers in the ACER research report agreed or strongly agreed that schools are open to innovation (developing new ideas for teaching and learning), schools are not primarily innovation environments.

They aren’t like businesses that are fighting the vertical climb in their markets against fast-moving competitors. The pressure to innovate doesn’t exist in the same way. And, the ‘fail early, move fast and break things’ approach isn’t going to work when schools’ primary mission is to guide students through a critical period in their lives.

Playing around with learning using new and emerging technology could be massively counterproductive.

Yet, one might observe, some of the most exciting innovations are precisely the ones that may prove the most beneficial for students.

And that’s true, especially around tools with some form of intelligence built-in.

To turn to a topic much discussed in educational circles: social media apps like TikTok are based on algorithms that surface content into users' feeds. These algorithms are highly effective at presenting things we want to see, things we like to see, or more of what we have already seen. The ability to intelligently surface content is incredibly useful.

There has been some panic about this, sometimes well justified, especially in light of reports that TikTok's algorithms may reinforce racial biases through 'collaborative filtering' that shows us only what we and others have seen before. If the most popular creators on TikTok are White, for example, creators of colour with smaller followings are less likely to surface.

But some of this panic is misjudged and can lead to harmful generalisations. Just because TikTok uses algorithms in this way doesn’t mean that the whole algorithmic endeavour is evil, or that AI is bad, or that all machine learning, and the datasets it uses, are biased.

Some of the most exciting innovations — which we’ve written about before in our story Doscendo Discimus, a piece of speculative fiction about the near-future of AI and teaching — are around just this: algorithms, data, AI and machine learning.

If using Zoom, or an LMS, or VR or AR in teaching seems innovative — and there are many innovative uses of those technologies we see every day by talented and forward-looking teachers — what comes next as the educational AI flywheel spins up has the potential to be extraordinary.

It will be extraordinary for students. Less so, perhaps, for teachers, who have tended to view AI as an existential threat. But, just as they pivoted from classroom to Zoom, there's no doubt that teachers will be the prime movers in using AI technologies in teaching.

Doing that is a matter of being aware and involved.

We are waking up to the idea that we should regulate AI, just as privacy in the digital world is starting to be regulated. Whether that regulation turns out to be overly restrictive, just right or too lax depends on who is involved in informing and creating it.

The same applies to AI in teaching: awareness and participation are essential to reaping the benefits.

AI in learning and teaching comes in many flavours, but perhaps the most interesting is adaptive learning.

Adaptive learning aims to deliver learning experiences that meet the unique needs of an individual student, whether through adaptive pathways (different students move through the material in different ways), adaptive feedback (feedback tuned to what the student does and needs) or adaptive content (content that changes according to each student's needs).

There are many companies now in this market, including Smart Sparrow, Knewton, Cognii, Century and others. Each takes a different approach to building adaptive learning technology. But they all aim to support, automate, replicate, and sometimes replace the kind of adaptivity that happens all day, every day in the classroom.

Smart Sparrow’s adaptive learning uses variable feedback to direct learners onto different paths to reinforce concepts.

What can be adapted?

As one of these companies, Smart Sparrow, puts it, there are essentially three types of adaptivity.

The first is content adaptivity, which gives intelligent feedback based on students' responses to a question. For example, if a student responds to a question like "What does the term ESG stand for when talking about sustainability?" with only two of the three terms (say, Environmental and Governance), the approach would be to provide hints, review materials or other scaffolding. The sequence of material doesn't change.

The second is sequence adaptivity: changing the sequence in which materials are presented to students. AI technology continuously collects and analyses student data to automatically change what a student sees next. The aim isn't necessarily to prescribe a pathway but to provide alternatives. For example, if a student doesn't complete a task, the sequence of material might be changed to include review questions or other materials.

The last is assessment adaptivity. Here, the complexity of assessment questions can be changed depending on the student’s response to a previous question. Questions can become more complex if the answers are accurate; less complex if the student struggles to get the correct answer.
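
To make these three kinds of adaptivity concrete, here is a minimal sketch in Python. It is purely illustrative: the function names, rules and thresholds are assumptions, not how Smart Sparrow or any other vendor actually implements adaptivity.

```python
# A sketch of the three types of adaptivity. All rules here are
# illustrative assumptions; real platforms use far richer models.

def content_feedback(answer: set[str]) -> str | None:
    """Content adaptivity: hint at what's missing without changing sequence."""
    expected = {"Environmental", "Social", "Governance"}
    missing = expected - answer
    if missing:
        return f"Hint: ESG has three parts. You left out: {', '.join(sorted(missing))}"
    return None  # answer complete, no scaffolding needed

def next_items(task_completed: bool, planned_path: list[str]) -> list[str]:
    """Sequence adaptivity: splice review material ahead of the planned path."""
    if not task_completed:
        return ["review_questions", "worked_example"] + planned_path
    return planned_path

def next_difficulty(current_level: int, answered_correctly: bool) -> int:
    """Assessment adaptivity: step difficulty up or down, clamped to 1-5."""
    step = 1 if answered_correctly else -1
    return max(1, min(5, current_level + step))
```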

Teachers, of course, do this all the time: providing intelligent feedback, constantly monitoring progress, scaffolding and extending. But AI offers the opportunity to do this more accurately, more quickly, at scale and at a hugely reduced cost.

How is it adapted?

Algorithmic adaptivity is where data and algorithms drive what the student sees. It's based on what the algorithm, operating with both individual data and aggregate data from other students, decides is most appropriate. The algorithm determines what a student knows and predicts what should be shown next to keep them on an optimal learning path. Many massive open online courses (MOOCs), intelligent tutoring systems (ITSs), educational games and learning management systems (LMSs) use this kind of approach.

To delve under the hood, one common (and older) approach to adaptivity uses a technique called Bayesian Knowledge Tracing (BKT), typically used for modelling skill development.

BKT estimates four probabilities: that the student already knew the skill (prior knowledge), that the student will learn it on the next practice opportunity (learning), that the student will answer incorrectly despite knowing it (slip), and that the student will answer correctly despite not knowing it (guess).
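
In code, the heart of BKT is a single Bayesian update applied after each observed answer. Here is a minimal sketch; the parameter values are illustrative defaults, where real systems fit them per skill from data:

```python
def bkt_update(p_know: float, correct: bool,
               p_learn: float = 0.1,   # P(T): learning on the next opportunity
               p_slip: float = 0.1,    # P(S): wrong answer despite knowing
               p_guess: float = 0.2) -> float:  # P(G): right despite not knowing
    """Return the updated P(student knows the skill) after one answer."""
    if correct:
        evidence = p_know * (1 - p_slip) + (1 - p_know) * p_guess
        posterior = p_know * (1 - p_slip) / evidence
    else:
        evidence = p_know * p_slip + (1 - p_know) * (1 - p_guess)
        posterior = p_know * p_slip / evidence
    # Then account for the chance the student learns before the next question.
    return posterior + (1 - posterior) * p_learn

# Starting from a prior of 0.3, three correct answers in a row:
p = 0.3
for observed in (True, True, True):
    p = bkt_update(p, observed)  # p climbs towards 1.0
```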

BKT assumes that what students are learning are discrete skills, and that each skill is either known or not yet learned, which makes the approach simple but limited.

So, new approaches to the knowledge tracing problem — what a student knows and what they are learning — are emerging. These Deep Knowledge Tracing (DKT) approaches are being used to build more complex knowledge tracing models such as Dynamic Key-Value Memory Networks (DKVMNs) and Sequential Key-Value Memory Networks (SKVMNs).

These aim to capture more of the nuances of learning. Education, social science, psychology, neuroscience and cognitive science all attempt to understand and model how people learn: motivation, reward, social identity, mood and a whole host of contextual factors are involved. As these DKT models evolve, the promise is that they can capture these more complex dimensions of how students learn. Using large-scale datasets, such as those collected through MOOCs, they are already showing promise in predicting student performance.
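
For a flavour of what the deep approaches look like, here is a minimal Deep Knowledge Tracing model in PyTorch, along the lines of the original DKT work (Piech et al., 2015). It is a sketch, not the DKVMN or SKVMN architectures, and the encoding and sizes are assumptions:

```python
import torch
import torch.nn as nn

class DKT(nn.Module):
    """Minimal Deep Knowledge Tracing: an RNN over a student's answer history."""

    def __init__(self, n_skills: int, hidden_size: int = 64):
        super().__init__()
        # Each interaction is one-hot over (skill, correct/incorrect) pairs.
        self.rnn = nn.LSTM(2 * n_skills, hidden_size, batch_first=True)
        # Predict, for every skill, P(correct) on the next question.
        self.out = nn.Linear(hidden_size, n_skills)

    def forward(self, interactions: torch.Tensor) -> torch.Tensor:
        # interactions: (batch, time, 2 * n_skills)
        hidden, _ = self.rnn(interactions)
        return torch.sigmoid(self.out(hidden))  # (batch, time, n_skills)

# A student's evolving knowledge state is read off the per-skill predictions
# at each time step, much like the knowledge-state plots in the SKVMN paper.
model = DKT(n_skills=50)
history = torch.zeros(1, 10, 100)  # one student, ten interactions
predictions = model(history)
```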

An illustration of how a student’s knowledge states are evolving for a sequence of 50 exercises using DKVMN and SKVMN. From the paper ‘Knowledge Tracing with Sequential Key-Value Memory Networks.’ https://dl.acm.org/doi/10.1145/3331184.3331195

The alarmist vision of AI-enabled teacher robots in the classroom isn’t close to reality.

Most companies operating in this space have realised that the most significant thing they can do is to involve educators in the products they design. Century, one of the companies in the adaptive learning space, helped set up the Institute for Ethical Artificial Intelligence in Education. In partnership with Microsoft, Pearson and others, the Institute aims to "enable all learners to benefit optimally from AI in education, whilst also being protected against the known risks this technology presents."

The Institute acknowledges that educators are vital to ensuring that learners benefit optimally from AI whilst being protected against its risks. The framework it proposes aims to prevent learners from being exposed to unethically designed AI resources, and ensures educators ask questions about AI: what the technology is meant to achieve in terms of learner outcomes, how it promotes equity and avoids reinforcing discrimination, and how privacy is protected.

The use of AI in education, whether by teachers or for teachers, is one of the new technology frontiers. Yet a 2020 McKinsey report takes the position that using AI in teaching is just too hard. They say:

“integrating effective software that links to student-learning goals within the curriculum — and training teachers on how to adapt to it — is difficult. This underscores why we believe that technology in the classroom is not going to save much direct instructional time.”

If you take this view, AI takes the role of a teacher's back office (administration, preparation, evaluation) or intelligently automates various core administrative functions at a school level.

AI has much more potential than just automation — but there are challenges.

We saw the problems of collaborative filtering in algorithmically driven social apps and the potential bias it can create. The constantly and quickly changing nature of AI techniques, which we saw in the discussion of adaptive learning, plus the inability of current AI-based systems to explain their reasoning transparently, raises trust issues. The technology can appear inherently not human-centred, and because it is fast-moving and complex, it can seem untrustworthy. The natural impulse is to resist it.

Some of these trust issues are being addressed in what is now known as 'Explainable AI', or XAI.

XAI aims to open up the black box of algorithmic decision-making without sacrificing accuracy. An example from healthcare would be an AI system (specifically, a machine learning system that makes predictions about disease) that not only predicts a patient's likelihood of developing heart disease but can also answer the question, "Why did you predict this person is likely to develop heart disease?"
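
As a toy illustration of the idea (not any particular XAI toolkit), a linear model can explain its own predictions by reporting each feature's contribution: its learned weight times the feature's value. The features, weights and patient below are all invented for illustration:

```python
import numpy as np

feature_names = ["age", "cholesterol", "blood_pressure", "smoker"]
weights = np.array([0.04, 0.8, 0.5, 1.1])  # assumed learned coefficients
intercept = -6.0

def predict_and_explain(x: np.ndarray) -> None:
    logit = intercept + weights @ x
    risk = 1 / (1 + np.exp(-logit))
    print(f"Predicted risk of heart disease: {risk:.0%}")
    # Rank features by how strongly each pushed the prediction up or down.
    contributions = weights * x
    for name, c in sorted(zip(feature_names, contributions),
                          key=lambda pair: -abs(pair[1])):
        print(f"  {name}: {c:+.2f}")

predict_and_explain(np.array([63, 1.9, 1.4, 1.0]))  # a hypothetical patient
```

Modern XAI techniques such as SHAP generalise this idea of additive per-feature contributions to non-linear models.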

With explainability comes transparency, and with transparency comes greater trust.

So to return to the top of this story, are we likely to see innovations in AI, and innovations like adaptive learning, play a bigger part in learning and teaching in schools?

The answer is yes.

One piece of evidence is the speed at which technology is being adopted in schools, together with teachers' increasing openness to innovation.

Another is the emergence of professional development courses on AI aimed at teachers, such as the Coursera AI Education for Teachers course offered by Macquarie University and IBM. These courses are likely to give teachers insights into AI and how it works.

And a third is the approach technology companies are taking to the core issues of trust, privacy, educator involvement and human-centred design.

As a 2020 OECD working paper Trustworthy artificial intelligence (AI) in education: Promises and challenges says:

“There is no doubt that AI will become pervasive in education.”

--

Peter Thomas

Inaugural director of FORWARD at RMIT University | Strategic advisor, QV Systems | Global Education Strategist, Conversation Design Institute | CEO, THEORICA.