The potential of using AI in L&D, with Christopher Brinton

Robin Petterd
Learning while Working
15 min read · Feb 10, 2019


This interview with Christopher Brinton from Zoomi is a great way to start this series on artificial intelligence and learning and development. The work that Zoomi is doing now is leading the application of AI in L&D. Zoomi works with your existing platforms and learning data and can apply over 250 different AI and machine learning techniques to do things such as analyse learning content and make predictions and recommendations.

This interview covers a lot of the key ways that AI technologies can be used to bring deep, powerful insights to your learning data and how this can then be used to drive personalisation on a whole new level.

Download the ‘How artificial intelligence is changing the way L&D is working’ eBook

To go along with the podcast series on artificial intelligence and L&D, we have released an eBook with full transcripts of the interviews. The eBook also gives a brief explanation of what AI is and an overview of how it is being used in L&D.

In this eBook you will learn:

  • Some of the jargon behind the technologies, e.g. what data scientists mean when they talk about ‘training a model’.
  • How AI is being used in L&D today to gain insights and automate learning.
  • Why you should be starting to look at using chatbots in your learning programs.
  • How you can get started with recommendation engines.

Download the ebook

Subscribe using your favourite podcast player or RSS

Subscribe on iTunes | Subscribe on Google Podcasts | Subscribe on Android | Subscribe on Stitcher | Subscribe on Spotify | RSS


Robin: Let’s start with the big question. What do you think is the potential of AI technology in L&D?

Chris: I think that AI has the potential to revolutionise L&D. Now, I know that’s a really overused word in a lot of different cases. What I mean by revolutionise, in this case, is to serve as a tool that L&D practitioners can use to make their content more effective, better targeted, and better delivered to end users. It involves the analysis of the content itself. So with AI technology we can diagnose potential issues that exist in the content itself.

Using AI to predict real and ideal behaviour from learners

Chris: A second layer involves making predictions: actually predicting in real-time what learners are doing and experiencing, versus what they could be doing to better optimise their experiences. It goes all the way to the prescriptive side of AI, which is making these real-time recommendations. It’s not just that content could be delivered better, but how do we better deliver the content to learners? That can be actioned either by the L&D practitioner themselves, or by the AI system.

But to make a long story short, I would say that AI has the potential to revolutionise L&D by providing a whole set of tools that can make the delivery of content much more effective for each individual learner.

Robin: It’s that sense of understanding the nature of the content — modelling and understanding the topics that might be inside it — then the prediction part, and then the recommendation part. The prediction part is just brilliant.

Essentially, in education there’s been a lot of work using predictive machine learning to see which learners might be at risk. In another podcast I talked with Mike Sharkey, who was leading work in that area.

What do you think that sort of prediction is for? Is it about giving feedback to the learners, or to the team leader, about whether or not someone’s at risk, or whether or not they’re deviating from a path?

The four pillars of predictive learning and AI

Chris: That’s a great question. Before I answer it, let me clarify something. It does involve getting these AI systems to build content models, as you said — diagnosing and understanding the topics that exist in the content. But it’s also about the learners themselves, and modelling the learners. So you need content models, you need learner models, and you also need social models for how people act. You need assessment models as well.

It’s the combination of these four pillars that will give you the ability to make these predictions. In terms of who the predictions would be intended for, I think there’s a target audience, but it can be any of the entities that you mentioned.

With what Zoomi is doing today, the main person to whom we are usually providing the predictions would be the owner of the course on the L&D side. So whether it’s a trainer, a supervisor, or an administrator of the course, we’re providing predictions about particular learners who may not meet the success criteria — but earlier than would normally be possible.

Breaking down the predictive process and learning

With L&D today, a lot of the time you have to wait until the very end of the course before you get scores, based on assessment outputs. Working with different SCORM packages, a lot of the time they’ll have a single score, usually reported at the end: a certain percentage for each learner. So each learner is reduced to a single number, say 90% or 85%.

But it’s really hard to tell what that translates back to, and also more particularly, it’s hard to drill down and make really personalised recommendations, right?

So when I’m talking about making predictions, I’m saying, ‘Can we predict the eventual outcome, using the learner’s behaviour in combination with these content models, and how early can we do that? Then, when we provide it to the instructor, can we also provide a set of recommendations as to what they may want to do as a result of that?’
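As a rough illustration of the early-prediction idea Chris describes — a hypothetical sketch, not Zoomi’s actual system; the feature names, weights and bias here are invented — a simple model might combine behavioural signals captured partway through a course into a probability of passing the final assessment:

```python
import math

# Hypothetical behavioural features captured partway through a course:
# fraction of content completed, average playback speed, number of rewinds,
# and quiz accuracy so far. The weights are invented for illustration;
# a real system would learn them from historical learner data.
WEIGHTS = {"progress": 2.0, "playback_speed": -0.5, "rewinds": -0.3, "quiz_accuracy": 3.0}
BIAS = -2.0

def predict_pass_probability(features: dict) -> float:
    """Logistic model: squash a weighted sum of features into a 0-1 probability."""
    z = BIAS + sum(WEIGHTS[name] * value for name, value in features.items())
    return 1.0 / (1.0 + math.exp(-z))

# A learner racing through content with many rewinds and weak quiz results...
at_risk = predict_pass_probability(
    {"progress": 0.4, "playback_speed": 1.8, "rewinds": 6, "quiz_accuracy": 0.5}
)
# ...versus one progressing steadily with strong quiz results.
on_track = predict_pass_probability(
    {"progress": 0.9, "playback_speed": 1.0, "rewinds": 1, "quiz_accuracy": 0.9}
)
```

The point of the sketch is the timing: because the inputs are behaviours rather than a final assessment score, the probability can be computed well before the course ends, early enough for an instructor to act on it.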

We also have a lot of other things we do, around the personalisation of learning and so forth. But speaking about AI for learning more generally, you could also feed the prediction back to the learner, where you could say, ‘You didn’t gain the knowledge out of this module that you should have. So we would encourage you to go visit this other module if you so choose.’

The predictions in that sense could be provided to the learner, and they could really be provided to any different entity. It’s just a question of what exact metric you’re looking at.

Robin: So this is also really exciting, because essentially you’re building this system that can help guide the learner, as well as the instructor and instructional designers in regards to which way they need to be going.

Working with models and algorithms to track and predict learning

Before we dive more into personalisation, and the recommendation aspect, I wouldn’t mind hearing you explain, in your language, what a model is. In the eBook that goes along with this, I’ll explain what a model is, but I would like to hear your thoughts on how you explain that to L&D people.

Chris: How to explain a model, that’s a really interesting question. It’s one that I don’t necessarily think about every day. What would be the best way to explain it?

Well, a model should boil down to a set of dimensions that you are tracking for a learner — or, more generally, for whatever object you’re trying to model; it obviously doesn’t have to be a learner. These dimensions are what, in common machine learning language, we would call the features of the model, right?

These dimensions could be human-specified. Or they could be machine-driven and extracted using different techniques that exist in AI today. They could be latent, or they could be hand-crafted, human-specified features.

It’s having this dimension set and maybe there are 10 or 15 dimensions of your model. If you think about vectors, 10 or 15 entries that evolve over time, right? So as a person — or whatever you’re trying to model — as more data is generated, another component of that model is that it needs to update over time. Each portion of the model needs to actually change. It needs to have a dynamic component.

Robin: So essentially, a model has these sets of features. Then data is collected based on that model, and the machine learning system starts to understand the nature and patterns in that model. This is the way I think about it.

Chris: Yes, you’re going to have an algorithm that will then update the model based on the incoming data.

I guess if I could just also clarify: depending on who you ask, you could get a very different answer as to what counts as a model. But I think any answer could be boiled down to what I just said: a set of features, with a time-varying aspect and data-driven updates based upon an algorithm.
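That definition — a named set of feature dimensions plus an algorithm that updates them as data arrives — can be sketched in a few lines. This is a minimal illustration, not Zoomi’s implementation; the feature names are invented, and a simple exponential moving average stands in for whatever update algorithm a real system would use:

```python
from dataclasses import dataclass, field

@dataclass
class LearnerModel:
    # The "set of dimensions" tracked for one learner (a small feature vector).
    features: dict = field(default_factory=lambda: {
        "avg_playback_speed": 1.0,
        "rewind_rate": 0.0,
        "quiz_accuracy": 0.0,
    })
    alpha: float = 0.3  # how strongly each new observation pulls the model

    def update(self, observation: dict) -> None:
        """The dynamic component: data-driven updates via an algorithm
        (here, an exponential moving average per dimension)."""
        for name, value in observation.items():
            old = self.features[name]
            self.features[name] = (1 - self.alpha) * old + self.alpha * value

model = LearnerModel()
# Each new batch of behavioural data revises every dimension of the model.
model.update({"avg_playback_speed": 1.5, "rewind_rate": 0.2, "quiz_accuracy": 0.8})
model.update({"avg_playback_speed": 1.4, "rewind_rate": 0.1, "quiz_accuracy": 0.9})
```

The design choice worth noticing is exactly the one Chris emphasises: the model is not a static snapshot but a vector that evolves every time new learner data comes in.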

Robin: In our discussion just before the podcast, you were talking about a probability of whether or not a person is likely to succeed, based on the output of a model.

Identifying behaviour patterns as a metric of learning performance

Chris: Right. That ties back into this specific AI model for learning. When we talk about models of learners specifically, in Zoomi’s case we look at behavioural dimensions. We’re looking at the behaviours that they exhibit as they’re going through content.

It involves how fast they’re playing through content, which segments they’re skipping back to. How material is being chunked up in their own mind and delivered to them. How they’re responding to it, and what their learning strategy is. These are other dimensions of that model. So the model gets very large, and high-dimensional, really fast.
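The behavioural dimensions Chris lists start life as a raw event stream. As a hypothetical sketch (the event shapes here are invented, not any real LMS or player format), turning such a stream into two of those dimensions — average playback speed and which segments a learner skips back to — might look like this:

```python
# Invented event stream of the kind a video player or LMS might emit.
events = [
    {"type": "play", "segment": 1, "speed": 1.0},
    {"type": "play", "segment": 2, "speed": 1.5},
    {"type": "seek_back", "segment": 1},   # learner jumps back to segment 1
    {"type": "play", "segment": 1, "speed": 1.0},
    {"type": "play", "segment": 3, "speed": 2.0},
]

# Dimension 1: how fast the learner plays through content, on average.
plays = [e for e in events if e["type"] == "play"]
avg_speed = sum(e["speed"] for e in plays) / len(plays)

# Dimension 2: which segments they skipped back to (possible trouble spots).
rewatched = sorted({e["segment"] for e in events if e["type"] == "seek_back"})

behaviour_features = {
    "avg_playback_speed": avg_speed,
    "rewatched_segments": rewatched,
}
```

Each extra signal (chunking, response patterns, learning strategy) adds more dimensions of this kind, which is why, as Chris says, the model gets high-dimensional very fast.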

There are some components of the model which need to be interpretable, meaning that a human could actually interpret them, play them back and understand them. That’s the kind of stuff that we provide to the instructional designers, to the L&D practitioners, through our systems. Some of them are not as interpretable, but they’re highly predictive.

That’s not to say that the interpretable features aren’t also highly predictive, which they are, but the non-interpretable ones can be even more highly predictive, and then we can use those to actually drive these predictions of what content to show next, for learners who are struggling.

Robin: Which is a really powerful thing, to be able to know how to help people. The remediation problem is sometimes challenging because — and I say this as an instructional designer and manager, someone who is helping people — it takes a long time to build up the knowledge to say, ‘If this person is struggling with this, they need B or C.’ So essentially, your models are helping guide people towards what’s worked in the past with others who’ve had the same problem.

Why does learning need to be personalised?

Robin: Personalisation in learning — is it really that important? Because part of what we’re talking about is this notion that personalised interventions or personalised recommendations can be delivered to learners. I’m asking a contrarian question for a change: is it actually important?

Chris: No, it’s a valid question to ask. Because it is, it is very important. Let me answer that first by talking about the process of actually learning and teaching, which we all know as we go through school: grammar school, middle school, high school, college, right? When you’re in class and you’re interacting with the instructor, or the professor, what is the job of the teacher? What is the job of whoever’s actually instructing the class? It’s to impart the knowledge onto the students. But in order to do that, they’re not delivering the material entirely the same, every time they stand up in front of a class.

They’re changing little bits and pieces of it depending upon what they see happening in the class as a whole. If they see the class is struggling — a lot of really furrowed eyebrows, so to speak, a lot of confusion, people not knowing how to answer the questions — they may step back and look at where they can explain it better. If, on the other hand, people look bored, they might speed up the presentation, right?

Now, in this case you have a delivery dilemma; you have to do what’s generally best for the class as a whole. You can imagine drilling down to individual learners if, say, they come to see you after class or in office hours. You’re going to start to provide more one-on-one, remediated instruction. Say there’s a student, Bob, who comes to me all the time because he just doesn’t really get what’s being presented in class. It could be that the way I teach in class doesn’t tend to resonate well with him. So then I explain it in a different way when Bob comes to see me after class.

Or maybe Alice comes in, saying that she understands all this material and wants some more challenging stuff to work on. So maybe I find some more challenging material that helps keep her engaged. All of this is being done by a human, right? So personalised learning is something that’s actually been happening ever since education was born. It’s the whole way that we teach, interact with one another and try to share information and knowledge.

eLearning is often inflexible, and AI can overcome this

Chris: But that all gets thrown away when we go to eLearning. Because eLearning, in and of itself, is all about a one-size-fits-all delivery. The machine itself, if it’s a non-AI-based machine, is going to deliver the same content to every learner the same way, each time. It doesn’t learn, it doesn’t get better, it doesn’t learn how to diagnose individuals. So, AI for learning is just as important as having teachers who really know how to tailor content to each of their students.

Robin: Beautiful explanation Chris. I’ve heard people talk about adaptive learning being like that teacher who is guided by watching what people are doing.

Chris: It’s also not supposed to replace the teacher either, that’s the thing. There are a lot of people who say, ‘Well then, what do you need a teacher for?’ We’re not trying to replace teachers. We’re trying to give them additional material.

Imagine if that delivery of the content could be done in a way that was individualised to each learner. So, they would watch a lecture before class and then the eLearning system would personalise their class time by giving recommendations to the instructor, or the teacher, and say, ‘With Bob you should go over this material; with Alice you should go over this material. With whoever else you should give them this more advanced content.’

It’s a huge gain. It makes the instructors actually able to do their job better, because at the end of the day their job is to really provide that one-on-one instruction, right?

Robin: Yes. The other interesting thing we’ve ended up talking about is the education situation. In the back of my mind, I’m wondering where it’s applicable to use this type of system — whether it’s something that’s usable in, say, a safety compliance module that’s part of an induction. Quite often, at Sprout Labs, we do programs that are competency based; they take people 18 months to become competent.

So there’s a high level of investment in the organisation if people don’t become competent. It becomes more critical to help people through. I’m wondering where you think are the powerful spots, and in what type of courses do you see these types of approaches working?

Chris: That’s a great question because there are different kinds of courses and different kinds of content. Some content is much more highly regulated than others. Especially with Zoomi today, our primary market is corporate training. We’re working with a lot of different corporations: Fortune 500 and Fortune 100 companies, and a lot of different sectors.

When we deliver on a course that’s highly regulated — say an ethics and compliance course, where the content is already presented in a highly regulated way and has to be presented that way because of legal issues — the deployment may look a little different. If we’re not given permission, we won’t go back and change any of the content. We will instead make recommendations to the L&D practitioners and the content owners: this is content that you may want to consider removing, because we predict from our content models and our learner models together that removing it or modifying it would have a huge impact on your outcomes.

So we’ve run into situations like that, where we can’t actually make the modifications ourselves because of legal issues. But there are a lot of other cases where it’s absolutely permissible, and it ends up saving people a lot of time. From the combination of our content models and our learner models, we’ll figure out where the learner is struggling — exactly what they were struggling on — and then serve up remediated content for them. This content is created by our system autonomously, in real time, and presented back to them.

We see gains in both cases. We’ve seen really tremendous improvements. Some of the metrics I can point to are available on our website. It’s www.zoomiinc.com. We’ve seen increases in engagement with our technology of more than 45%, increases in knowledge transfer of more than 50%, decreases in time to proficiency of more than 40%, and improved productivity and quality of service of more than 60%. We’ve really tied it back to outcomes in the workplace as well.

And we’ve seen these numbers in both cases. We provided these recommendations in the ethics compliance case, had them make the changes themselves, and then rerun the course. We’ve also seen those in cases where the system’s just running entirely on its own, making all these decisions, and those decisions are being trusted.

Robin: Cool. Those are some really interesting things, and nice examples of using the technology in highly regulated situations. I was thinking earlier: when you talk about the prediction systems, you’re actually talking about taking the data, doing the analytics, and getting machine learning to build the understanding and surface the insights automatically — rather than relying on a person doing the data crunching and trying to understand it. So the first layer of what you’re talking about is automating insights.

You are automating the insights, and then there’s the customisation and personalisation side of it. It’s interesting because I just realised that some of what you’re talking about is what people expect a learning analytics system to do, and that it often doesn’t do.

Chris: That’s one of the key differences between just AI as a field in general, and AI for learning: the importance of interpretability, which is getting down to actually automating insights. Making these things interpretable to humans, so they can action things themselves if they want.

In other fields of AI it’s not necessarily as important. There are certain fields where interpretability is very important, but in a lot of cases it’s not. It’s not as important that the models be interpretable; it’s just about maximising the predictive capability of the models.

Robin: So Christopher, you’ve talked a couple of times about your platform, Zoomi. What is that platform, and how can people find out more about it? Can you give a summary of it, against the background of everything we’ve been talking about?

Chris: Absolutely. Zoomi — our tagline is ‘AI for learning’. We provide deep, personalised learning, we predict learning and performance, we optimise learning content and we link learning to business outcomes. The personalised learning that we do is real-time, fine-grained, fully automated, and also autonomous — driving home the point that you could have a fully personalised solution without having to do any additional legwork on your own.

You don’t have to create any additional content: we create all of this from your existing content. We repurpose it, parcel it out and create new versions of it to present through different learning styles and learning strategies. You can learn more about Zoomi by going to zoomi.ai, or www.zoomiinc.com.

We have a couple of different offerings today. There’s a crawl-walk-run approach, where you go all the way from the diagnostic analytics that we offer, through predictive analytics, to prescriptive analytics. We offer content topic analysis. We offer predictive learning analytics. We do analysis of your social learning network. We also do what we call ‘no-touch individualisation’, or NTI. ‘No-touch’ stands for the fact that you, as the course owner, don’t have to do any additional work to get this fully personalised solution.

Robin: The other thing that’s really unique is that it works alongside your existing learning management systems.

Chris: It does.

Robin: Not as a separate platform. From the point of view of someone slightly non-technical, I don’t know how you do that, Christopher. It’s really powerful.

Chris: I’m glad that you brought that up, because I forgot, and I’d probably get in trouble for not having remembered to bring that up. The fact is that it works alongside any of your existing learning management systems or your course management systems that you may have — whatever you’re working with today in your organisation. We’re actually going to integrate with that. We’re going to draw out the data that our analytics engines need, and provide a personalised experience in a way that is not going to cause your learners to see any difference in the experience.

There are two different schools of thought about whether you want people to see that personalisation occurring or not. There are active, healthy debates going on as to whether that’s constructive to have them know that the content that they’re seeing right now is personalised, or that it’s just original course material. We would do it either way as well, depending upon what the organisation wants.

Download the ‘How artificial intelligence is changing the way L&D is working’ eBook

To go along with the podcast series on artificial intelligence and L&D, we have released an eBook with full transcripts of the interviews. The eBook also gives a brief explanation of what AI is and an overview of how it is being used in L&D.

Download the ebook
