Artificial Intelligence in Higher Education

Bryan Fendley
Mar 8, 2018

--

The following lecture was sponsored by the Western Cooperative for Educational Technologies, a division of the Western Interstate Commission for Higher Education, and attended remotely by more than 250 educators from colleges and universities across the country.

Resources, full video, and presentation slides provided at the end of this post.

The lecture, “The Promise and Peril of Artificial Intelligence for Teaching and Learning,” addressed the benefits and challenges higher education will encounter as advances in predictive technology become common business practice.

Begin transcript:

This is Bryan Fendley, and I’m excited to be here to talk to you today about artificial intelligence in higher education. Now, you may hear me today saying AI. That’s the acronym for artificial intelligence, and if you say artificial intelligence enough times, you start looking for an acronym. And before we start, I wanted to talk about the word: disruptive. I know in educational technology we have used the word disruptive many times to describe a multitude of different things. Educators have called mobile computing disruptive. Educators called the internet disruptive. We labeled MOOCs disruptive. I think artificial intelligence is also a kind of disruptive technology. If you’ll forgive me for using disruptive for yet another technology, I think you will see why AI should be on our radar for teaching and learning.

I expect you’ll realize as we go along, it may be one of the most disruptive of all technologies we’ve seen so far. We should start today with a definition to determine: what is artificial intelligence? The U.S. federal government has done research on artificial intelligence, and they came up with a statement that says: it’s a transformative technology that holds tremendous potential. That may be one of the most disarming statements that has ever been made. Because artificial intelligence is coming at us fast, and as a society, we don’t understand how pervasive it will become across all industries, including education.

Most in higher education don’t understand the tsunami that’s about to hit us. We are only now beginning to prepare ourselves. AI could change everything we’re doing on campus. Gartner calls it: one of three mega-trends shaping digital business in the next ten years.

When we think about higher education, we rarely think of ourselves as a digital business. But, we are a digital business. Everything we do is becoming digital. We’re relying increasingly on data, even in higher education. Whether that’s with learning analytics, with being able to move data around for credentials, whatever it may be, we are a group that’s being affected by this mega-trend of artificial intelligence. So with that in mind, I think we have to prepare ourselves. That’s most likely why you have come here today. You’re among the pioneers.

If you’ve been on the internet, read any blogs, done any Google searches on artificial intelligence, or if you’ve been on Twitter, you will see that there’s plenty of information out there about the subject. One of the big challenges is just knowing: where do you learn about artificial intelligence? How do you figure out what’s just sensationalism, what’s real? What you really want to know is how does it affect you in your day-to-day job? That’s what I’m hoping to clarify today. I intend for us to get a handle on this, and start the conversation about artificial intelligence. We have so much information coming at us, how do we know what’s important?

What do we mean by artificial intelligence? The hard part about defining artificial intelligence is even the experts are having a hard time agreeing what artificial intelligence is and what it is not. If you boil it down, there seems to be agreement that artificial intelligence is something that can solve complex problems. AI makes us feel like there’s a human being thinking through that process, and there are a lot of ways technology makes it possible.

One of the better ways to get a handle on artificial intelligence is to think about the history of artificial intelligence. It’s not a long history lesson, because artificial intelligence itself has not been around all that long. If we look at the history of artificial intelligence, one of the first people you see come up over and over is Alan Turing.

He was born in England in the early 1900s; he was a professor of mathematics, studied cryptology, and he’s noted for: the Turing Test. What’s notable about the Turing Test, which he proposed around 1950, was the proposition: can a computer think? Can a machine think? That was it: can a machine think? And Turing was an academic, so there was a lot of debate over his question. Things like, what does a machine mean, what does think mean? So he changed his proposition to: can a computer fool somebody into thinking it is a human being? And that’s basically what the Turing Test is. If you think about it, we deal with some of that already in our day-to-day lives. If you’ve dealt with a chat bot anywhere, maybe through a hotel or on a website, you see the little thing pop up where somebody asks you if you need help, or if they can help you. You’re dealing with a chat bot. You probably feel like you’re talking to somebody, and most of us are getting comfortable with that. A machine is fooling us into believing it’s human. It’s passing the Turing Test.

It may feel as though you’re working with a human being, but there’s a machine present that’s conversing with you. So Alan Turing wasn’t that far off, and when this happened, it was not all that long ago. If we look at the timeline for artificial intelligence, we see between 1950 and 1980, artificial intelligence was pretty much hand-programmed knowledge. You took a domain expert, and they programmed in the answers that the machine would need to know to say what was expected of it. That’s kind of where artificial intelligence was at that point in time. But there was a big shift around 1980 to 2010, and that’s when machine learning came along.

Machine learning, as we’ll learn later, has a lot to do with artificial intelligence and where we are today. Then in 2010, the thing that’s pushed us even further with artificial intelligence is called: deep learning. Deep learning is mysterious, and that term gets thrown around a little. It’s kind of like Blockchain; it gets thrown around but many people aren’t sure what it means.

We need to think about defining artificial intelligence a little further. One way we can do that is by looking at some different examples of artificial intelligence. Because as you can see, the clues to defining artificial intelligence can be found in a lot of places, including reviewing the timeline. What are good examples of AI?

Some things we’re familiar with already as AI breakthroughs are things like self-driving cars. I think pretty much everybody has heard of self-driving cars at this point. But self-driving cars are not brand new; Mercedes had an experimental self-driving car as far back as the 1980s. That type of AI has been around for a while. It’s just now that acceptability of it is becoming mainstream. A good working model of a self-driving car has not been around long and was not a part of our collective consciousness until recently.

Another AI breakthrough, and this goes with the machine learning mentioned previously, is when we switched to statistics to drive artificial intelligence. That goes back to the early days. Remember the domain expert who programmed in the examples? Statistics allow the machine to take data and make decisions on its own based on an analysis.

We also need to consider natural language processing. And this one I find very interesting. Natural language processing has been around a long time, but we didn’t have the processing power to do it quickly until recently. Even though most of us are familiar with natural language processing and we use it on our phones, and at home maybe with Alexa or Google Home, it still has somewhat of that novelty effect for us. I don’t know if you’ve noticed that it still feels like a novelty? It feels like a toy, right? But there’s a lot going on with natural language processing, and it’s very key to making artificial intelligence work with us, as human beings. Language processing will make AI feel like a natural part of our lives. Talking to a machine is a big part of the Turing Test theory. It makes it feel like we are talking to a person not a computer. The machine starts to seem intelligent.

One last thing on this list is GPUs. That stands for: graphics processing units. With a computer, if you want to perform mathematical calculations, a CPU has to do that in sequence; it processes one calculation, then the next. You can overcome that with multiple CPUs, but what was discovered is that with a graphics processing unit, you could run mathematical operations in parallel. Things started getting faster and cheaper for AI, and companies like NVIDIA began to dominate the realm of artificial intelligence.
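
The sequential-versus-parallel distinction can be sketched in a few lines of Python (an illustration only; real GPU code would use CUDA or a library that targets the GPU):

```python
# Illustration: a dot product as a CPU-style sequential loop versus a
# "batched" form whose independent products a GPU could run in parallel.
def dot_sequential(a, b):
    total = 0.0
    for i in range(len(a)):       # one multiply-add at a time
        total += a[i] * b[i]
    return total

def dot_batched(a, b):
    # Every product is independent of the others, so hardware with many
    # cores (a GPU) can compute them all simultaneously, then reduce.
    products = [x * y for x, y in zip(a, b)]  # the parallelizable step
    return sum(products)                      # the reduction step

a = [1.0, 2.0, 3.0]
b = [4.0, 5.0, 6.0]
assert dot_sequential(a, b) == dot_batched(a, b) == 32.0
```

Both forms compute the same answer; the point is that the batched form exposes the independence a GPU exploits.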

This type of processing was much faster and much more economical, which was a huge breakthrough for artificial intelligence because it needed that processing power to crunch data faster. The economics helped it scale in a much better way. Also, if we think about categories of artificial intelligence, they can be lumped into roughly three categories, or styles.

The first category is systems that think like humans. That’s where we use cognitive architectures like neural networks and techniques we see used in data mining. The idea behind neural networks in data mining was to design something that works like the human mind, or how a thought would take place. We also sometimes see artificial intelligence systems that seem to act like human beings. And that goes back to the Turing Test, with natural language processing and different kinds of reasoning with branching logic.

There is also the ability for a machine to learn from you and the things you’re interested in. For example, I have a Nest thermostat at my house that is supposed to learn what temperatures my family likes at certain times. It learns what our habits are, going and coming from the house. We see artificial intelligence in a lot of our consumer products. You probably have some form of artificial intelligence working for you every day.

What about systems that think rationally? That’s where some of our biggest fears come from. As we move more into an AI-centric world, particularly in higher education, are those machines thinking rationally and making the right decisions? Are they making the same decisions a reasonable human being would make, in a similar situation? It can be difficult to accept that they could do some of the same work we do on a campus. It will certainly be difficult to let go of the reins.

What about different roles that artificial intelligence can take on? We see artificial intelligence being used beside human beings in factories. Why not education? For example, in a classroom situation, a functional AI working alongside a human being could manifest in solutions that make learning easier for a student. Maybe help them remember things or help them retrieve information.

If you think about it, there was a time when spell-checking was frowned upon in education. Now, we don’t think anything of allowing somebody to use a spell-checker. But there was a time when that was a bit of a taboo. So we may see situations come about in higher education where students are using artificial intelligence to help them perform their job role within a course. Not much different from what we will expect AI to do for factory workers. Would you agree?

Also, artificial intelligence can help you whenever you have cognitive overload. This can happen with things like decision-making, and the list of different things can just keep expanding. We see this with air traffic controllers. But we also can see this same thing in higher education with some of our student success systems. We’re collecting all kinds of data with these systems. That data will be used to guide students in making course enrollment decisions, and ultimately academic advising functions. And what’s behind it? Math and data are at the core. This is the basis for artificially intelligent agents.

We’re always trying to create dashboards to help us determine when a student’s in danger of failing a course or falling behind. We have already laid the groundwork for AI acceptance in these areas. We can also look at artificial intelligence performing some specific function for a human being. As far as artificial intelligence is concerned in higher education, where will we see artificial intelligence take root the quickest? The strongest initial growth is expected on the administrative side, as opposed to the classroom.

A lot of the jobs that artificial intelligence is good at are repetitive jobs that deal with any mathematical operation or any guidance functions. That seems perfect for the service side of higher education. We may see artificial intelligence take root there before we see it take root in classroom teaching. Artificial intelligence is moving quickly. Some people have said we need a speedometer for artificial intelligence, so we know what’s going on, but there is a bigger question.

Why is it moving so quickly? One reason artificial intelligence is moving quickly is that machine learning is evolving quickly. Some of that speed has to do with increased processing power. But the biggest part of it is the digital economy. There are economic incentives for businesses to get involved with artificial intelligence. With so much of our economy being digital, we have a lot more data we can use now, and machine learning needs data to do its job. We have created a creature with an insatiable appetite for data, and we are all collecting plenty of data to feed it.

Originally, artificial intelligence needed a domain expert to create answers and branching logic. Since we’re more machine learning-centric in artificial intelligence now, data is the fuel, and domain expertise is less of a commodity. Add consumer demand to the increased data flow and you have the perfect storm. Many people are interested in what artificial intelligence can do to make their lives better. Students will be expecting these benefits as they pursue their education. Taking on college without the assistance of AI may become as unpopular as driving without wearing a seatbelt.

We don’t always consider the things that artificial intelligence gives us in our day-to-day life, because it becomes, like any good technology, transparent to us. We don’t even think about what’s going on behind it. The average person doesn’t understand the details of AI. We all want to enjoy things such as better cancer diagnosis or dining suggestions provided by artificial intelligence, but most of us couldn’t care less how the results are determined. That can become a problem in itself.

We talked about machine learning earlier. I’ve been talking about machine learning quite a bit today. Because, when we’re talking about artificial intelligence, it’s a big umbrella term. Machine learning is a big part of why artificial intelligence is happening now in such a big way.

What are the different types of machine learning we have? Is the machine able to learn without being programmed? Can it learn from itself? Machine learning uses a lot of mathematical algorithms to do its job. You may be familiar with many of them, like linear regression, logistic regression, and decision trees. It uses these algorithms to learn and make different predictions, or to put things in categories. It can even learn from trial and error and try to make some of the best decisions it can without help from us.
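
As a sketch of what one of those algorithms does under the hood, here is simple linear regression fit by ordinary least squares in plain Python; the study-hours data is made up for illustration:

```python
# Minimal sketch: simple linear regression fit by ordinary least squares.
# The "learning" is just estimating a slope and intercept from data.
def fit_linear(xs, ys):
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
            / sum((x - mean_x) ** 2 for x in xs)
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Hours studied vs. exam score (invented data for illustration).
hours = [1, 2, 3, 4, 5]
scores = [52, 60, 68, 76, 84]
slope, intercept = fit_linear(hours, scores)

def predict(x):
    return slope * x + intercept

# The model now makes predictions without having been explicitly
# programmed with the answers; it learned the pattern from the data.
```

Nobody programmed in “eight points per hour of study”; the algorithm recovered that relationship from the examples, which is the shift from hand-coded rules to statistics.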

Let’s talk a little about deep learning, another term associated with AI. We started off with artificial intelligence, and then I switched you to machine learning. I told you we would talk about artificial intelligence, then I told you: well, artificial intelligence, it kinda has more to do with machine learning, and now machine learning is evolving into deep learning. Where does it stop?

Deep learning, it’s a little harder to understand. Sometimes the experts don’t quite understand deep learning. One of the things that is going on, the machine itself is trying to take problems and solve them. It takes a problem, solves it at one layer, branches off, and then solves it at other layers and continues working. What it’s doing, is creating those branching neurons like we have in a brain to connect ideas, or if you will, a mind map. It’s doing it in its own way. If anything will get us to the science fiction state of artificial intelligence, deep learning will. One problem with deep learning at this point is, sometimes it’s providing us with answers or insights that even the experts aren’t quite sure about how the math was derived. We still have a lot of work to do in deep learning. It’s currently the bleeding edge of artificial intelligence.
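
The layer-by-layer idea can be sketched in Python. The weights below are arbitrary rather than trained; the point is only that each layer’s output becomes the next layer’s input:

```python
import math

def sigmoid(z):
    # Squashes any number into (0, 1); the nonlinearity between layers.
    return 1.0 / (1.0 + math.exp(-z))

def layer(inputs, weights, biases):
    # Each neuron combines all inputs, then applies the nonlinearity.
    return [sigmoid(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

def forward(x, network):
    # The "deep" part: the output of one layer feeds the next layer.
    for weights, biases in network:
        x = layer(x, weights, biases)
    return x

# A two-layer network with arbitrary, untrained weights (illustration only).
network = [
    ([[0.5, -0.2], [0.3, 0.8]], [0.1, -0.1]),  # hidden layer: 2 neurons
    ([[1.0, -1.0]], [0.0]),                    # output layer: 1 neuron
]
output = forward([1.0, 0.5], network)  # a single value between 0 and 1
```

Training would adjust those weights from data; this only shows the forward pass, the branching, layered structure described above.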

If you were to look at the patents in deep learning, you see that starting in 2014, there was a sharp rise in deep learning patents. So that gives you a feel for the interest in this and the speed at which it’s increasing. On this next slide, we also see a huge uptick in mentions of deep learning in professional journals. So there’s a lot going on with deep learning, and it all ties into artificial intelligence.

What about artificial intelligence in higher education? What’s going on with that? Now we know how long it’s been around, you know what artificial intelligence is, machine learning, all those kinds of things, it’s time to see how this is affecting what we do in our day-to-day jobs. I want to start with a story about Elmo. It may seem a little odd to tell a story about Elmo to a higher education audience.

Sesame Street is a public television program to teach children, and it was one of the first to have learning objectives. And later on, they came up with this character named Elmo. Now, I talked about disruption a little earlier. You can say Elmo was disruptive for Sesame Street, because some critics said there was too much focus on Elmo when he came along, and the other characters had to take a backseat.

But, Elmo was a star, and he continued to get a good billing on Sesame Street. Today it’s reported that Sesame Street is working with IBM to create an Elmo that works more in an adaptive learning style, to work with kids and to help teach children. I suppose that this may come to market as a doll that works with children that has some artificial intelligence built into it to help teach those children.

A lot of times what I hear from faculty when I talk about artificial intelligence, they say: oh, you will have a robot replace me and teach my class. I think we’re a long, long way from that type of generalized artificial intelligence, and I think most experts would agree. But the possibility of an Elmo doll that can help teach a kid shows us that there is a working model.

Hanson Robotics has Professor Einstein. He can answer questions about scientific subjects. The things that kids have homework on. He’s designed for an older child than Elmo would target. He can also show a presentation on a subject like the gravitational pull of the Earth or something similar; he can talk about it and show things, and you can ask him questions like you do Alexa. So, the possibility is there. Who knows if there is one coming for the college classroom?

Maybe there will be a robot that teaches a class someday. This next one is Sophia the robot. And the reason I want to bring her up is that if you ever want to make a serious artificial intelligence researcher or scientist mad, talk about Sophia the robot. They don’t like her because they say that she’s not true artificial intelligence. But let me tell you a couple of things about Sophia. One, she spoke before the United Nations. Two, she’s been on talk shows. She’s even presenting keynote presentations at conferences now. Sophia gets around. But I think the thing that these researchers are upset about is how they see the public reacting to Sophia the robot. She has a tendency to freak people out. They get creeped out by her, and they get scared about the robot apocalypse and all that stuff.

If you’ve ever seen any of the responses to her on Twitter or on YouTube, you will see a lot of these types of comments where people are scared. I think that reaction is what researchers in AI are concerned about. They feel like the public’s response could hold back real progress in artificial intelligence, because people will be afraid. People think that artificial intelligence is much further along than it is, and that could limit progress.

It’s something we need to consider, on our campuses, as we bring artificial intelligence on, and start talking about it. And I promise you, when you talk about artificial intelligence with most people, they will make a lot of robot jokes, and think you’re a little nuts. Even though artificial intelligence is already all around us.

How is machine learning manifesting itself in higher education right now? If I had given this talk a year ago, it would be easy for me to give a list of companies we deal with in higher education, that have artificial intelligence built into their products. But it seems like everybody is rushing to put some artificial intelligence into the products we’re buying in higher education. But, I think they fall into some distinct categories.

There’s something called a co-bot. Now, we don’t want to get the term “robot” confused with artificial intelligence, although they go together; a bot can also be a kind of AI. A robot does something physical. A bot does something in the digital world. But a co-bot works with you to do things. For example, if you use Twitter, you can have a bot that automatically likes everything from the people who retweet you. In higher education, you can have bots that help grade papers or help answer students’ questions. A bot can even work as a teaching assistant. That’s a co-bot situation. It takes some load off the faculty member by imitating what they’re doing in the classroom.

We also have cognitive agents that can help us think or, like we mentioned before, help us in times of high cognitive overload. An example of this would be grading papers. But what about taking a test? Would you allow me to have a robot help me when the test gets tough? What if it was built into my calculator? Would that be too weird?

Another area is targeting. A lot of times with machine learning we are doing targeting. Machine learning is good at targeting: pulling out outliers, finding categories, and putting things in different buckets. And that’s a lot of what’s going on in our analytic tools for student success. And for a large part, even in learning management systems, a lot of times, we’re doing a lot of targeting by looking at the data, seeing who’s turned things in, who’s making good grades, and so on. Then we are putting them into the different categories they need to be in to target them for support.
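
A targeting rule of that kind might look like the following Python sketch; the thresholds, field names, and students are invented for illustration and not drawn from any real student-success product:

```python
# A hypothetical bucketing rule of the kind a student-success tool
# might learn or apply: categorize students by submission rate and grade.
def bucket_student(submission_rate, avg_grade):
    if submission_rate < 0.5 or avg_grade < 60:
        return "at-risk"        # target this group for outreach
    if avg_grade >= 85:
        return "excelling"
    return "on-track"

# Made-up roster data for illustration.
roster = [
    {"name": "A", "submission_rate": 0.4, "avg_grade": 72},
    {"name": "B", "submission_rate": 0.9, "avg_grade": 91},
    {"name": "C", "submission_rate": 0.8, "avg_grade": 70},
]
buckets = {s["name"]: bucket_student(s["submission_rate"], s["avg_grade"])
           for s in roster}
# → {'A': 'at-risk', 'B': 'excelling', 'C': 'on-track'}
```

In a real system, machine learning would derive the thresholds from historical outcome data rather than having them hand-coded, but the output is the same kind of bucketing.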

How do we prepare our campus for artificial intelligence? We have to be ready when artificial intelligence comes on campus, and there are a few things we need to consider when we’re doing that. I want to tell you a quick story. This is Abu’s story. He’s a high school student, and he had a project to do. He decided he would do something with machine learning. He knew nothing about machine learning, but he learned on his own. He learned about machine learning and he developed an algorithm that could help detect breast cancer. He went from zero knowledge to doing something that was useful and could save lives. These are the kinds of students coming to our campuses. So we need to think not only about how we will use artificial intelligence, but also about how the students coming to us will already be familiar with artificial intelligence and what their expectations of us will be.

There were two recent Gallup polls. One of them had to do with how students saw our campuses, and the other one had to do with people’s concerns about automation taking their jobs. But one thing that caught my interest in these two studies was student concerns about not being able to get jobs with their degrees. We went through the phase where we wanted to make students computer literate. Now we’re talking about data literacy. All of a sudden, now we need to make them AI literate, and help them understand algorithms. Because these will be the jobs they will work in, and it’s not just a STEM thing, not just for math and sciences. All disciplines will need these skills to contribute to an AI-enhanced society.

Artificial intelligence has made some strong gains in law, in social services, and several other fields are entering the fold. We need the faculty in those areas to understand artificial intelligence so they can steer the curriculum in the right directions. If we put more artificial intelligence planning into our curriculum, we’re also going to help address the problem of people being concerned about this type of automation taking their jobs, because they will graduate with skill sets they’ll be able to use, whatever their discipline is, combined with some AI-specific skills to function in a futuristic society.

Talking about artificial intelligence is fine. If you’ve been in educational technology for very long or in higher education, you realize talk is cheap. You have to have money to do anything. So, where’s the money coming from for artificial intelligence? And that’s a tough one, and one reason why it’s a little tough is that artificial intelligence is not necessarily, how do I say this, a pure discipline. It’s a combination of things, so it makes it a little harder for research funding to come for artificial intelligence. But there are opportunities out there. Here’s one example. Andrew Ng was chief scientist at a company called Baidu. He’s a co-founder and co-chairman of Coursera. He’s an adjunct professor at Stanford. He was one of the first people to have a huge MOOC, I think it had 100,000 students in it, and he’s a big name in artificial intelligence right now. He started a fund, a $175 million fund to be exact, for starting AI companies from the ground up. From zero to doing something. I think you will see more funding opportunities coming from some of these commercial players. Google and Facebook are both big into AI.

There are also funding opportunities coming from other areas. This one comes from Ventures.org. There are some AI funding opportunities out there. Most of the AI funding opportunities right now are tied to research and development, which makes it a little hard for those of us who don’t do that type of research. But I think you will see funding tied more into AI literacies and combined with different disciplines that wouldn’t normally be associated with AI technology.

Okay, now on to the elephant in the room: ethics. Yes, there are ethical concerns with artificial intelligence. Blackboard, I’m sure you’ve heard of the company Blackboard for learning management systems? They’ve brought together some thought leaders to develop a framework and some standards for the ethical and legal use of artificial intelligence in higher education. So there are already talks being held on this subject. There’s not a lot out there for us to determine what we need to do and how we need to do it, but once you have any serious conversations about artificial intelligence on your campus, you’re sure to get into the territory of ethics. And a lot of computer science programs right now are rushing to get ethics courses put in place. Maybe we’ve been a little behind on the ethics in technology for a while. I don’t know. At least we have people in our camp looking at this type of thing.

What kind of things could we be doing on our campus to help us with artificial intelligence? We need to think about data and the privacy of data. One thing that’s going on right now, I don’t know if you’re familiar with it or not, is the European Union’s General Data Protection Regulation (GDPR). Even though we’re not in Europe, that affects a lot of things about how companies are using people’s data. How they’re communicating with them, how they’re using their data. It’s one of the stronger things that has come out for data privacy at this point. Of course, that’s affecting US companies that deal with European companies, but I think it will set the standard for a lot of things, like the kind of work Blackboard is trying to do with developing ethical frameworks. I think things like the GDPR will also play a big part in how we’re addressing the ethics of AI.

Also in higher education, we need to be doing a lot more of our own research and development. A lot of what’s happening in higher education as far as AI is concerned is spawned from private companies. We need to think about our own research and development and not let that slip away from us. Also on the software side, it’s easy just to buy software that vendors have created, which is fantastic, and there are great opportunities for partnering with vendors on that, but we also need to look at some of our own development. There are a lot of special APIs out there, like the Tin Can API, that can move data from, say, learning management systems and other systems. These APIs can be combined with machine learning to create innovative solutions. Even though something like machine learning sounds super complex, most of it is built on top of libraries that are doing a lot of the heavy lifting. It may not be as impossible as we think to do some of our own software development in artificial intelligence. As institutions, we need to consider some of that.
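
As a small example, here is what a minimal Tin Can (xAPI) statement looks like in Python. The student, course, and quiz identifiers are made up, and a real system would send statements like this to a learning record store:

```python
import json

# A minimal xAPI ("Tin Can") statement of the kind an LMS might emit.
# The actor, course, and quiz names here are invented for illustration;
# the verb URI is one of the standard ADL verbs.
statement = {
    "actor": {"mbox": "mailto:student@example.edu", "name": "Example Student"},
    "verb": {"id": "http://adlnet.gov/expapi/verbs/completed",
             "display": {"en-US": "completed"}},
    "object": {"id": "http://example.edu/courses/bio101/quiz1",
               "definition": {"name": {"en-US": "Quiz 1"}}},
    "result": {"score": {"scaled": 0.85}, "success": True},
}

# Serialized statements like this are what downstream machine-learning
# code would collect from a learning record store and analyze.
payload = json.dumps(statement)
scores = [json.loads(payload)["result"]["score"]["scaled"]]
```

A pile of such statements, one per learning event, is exactly the kind of data that the machine-learning libraries mentioned above are built to consume.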

As a summary of what you can do to promote AI on campus, the number one thing: you need to get past the shock. You don’t want people, when you’re talking about artificial intelligence, to tell robot jokes and talk in a robot voice. If you can get past the shock and normalize what artificial intelligence is on your campus, that will go a long way in helping you talk about artificial intelligence and do things with it that are positive.

The other thing is, we know that generalized artificial intelligence is a long way off. However, it’s also moving shockingly fast. We are getting to the point where we realize pervasive AI is a plausible reality. So, we need to think in terms of curriculum development. How can we get AI into the curriculum? How can we use artificial intelligence to help our students learn better and have benefits for our faculty? Even though it’s a long way off, it’s coming faster than we think. We don’t want to just say: well, we’ll deal with that later. Now’s the time to do that, and we can do it. If we don’t shape the course of AI on college campuses, someone else will, and we may not like the results.

Nobody wants a project that does not go well and embarrasses them. That’s why I like pilots. If you can put together any pilot on campus with AI, that seems to ease the tension of adoption. You might already have areas on your campus where you’re using artificial intelligence. Go to those areas. Learn what they are doing. Dissect it and make it less mysterious.

Most of what people think about when they think of artificial intelligence, they’ve learned from science fiction. It’s our job to bring this into the norm and not make people think this is science fiction. We don’t want to be doing a Sophia-the-robot routine on people, where people are afraid of a robot apocalypse. We want people to see where the benefits are right now. Responsible AI is everybody’s responsibility; you can’t just leave this to the bigger commercial players. What I hope you carry away from today is being able to take this message to your campus and include it in some of your technology initiatives, because AI isn’t going anywhere soon. It’s unlike any other disrupter we have seen, and it’s our responsibility to embrace the full gamut of what it will bring to the future of teaching and learning.

Full presentation video with slide deck:

I recommend the following resources as a primer for understanding the current state and future of artificial intelligence in education.

THE NATIONAL ARTIFICIAL INTELLIGENCE RESEARCH AND DEVELOPMENT STRATEGIC PLAN

ROYAL SOCIETY REPORT ON MACHINE LEARNING

ARTIFICIAL INTELLIGENCE AND LIFE IN 2030

OPTIMISM AND ANXIETY VIEWS ON THE IMPACT OF ARTIFICIAL INTELLIGENCE AND HIGHER EDUCATION’S RESPONSE

RISKS AND REWARDS SCENARIOS AROUND THE ECONOMIC IMPACT OF MACHINE LEARNING

--


Bryan Fendley

Artist of the future, accidental scientist, consigliere of creativity.