It seems you can’t open a feed these days without a fair proportion of the content being on the subject of Artificial Intelligence.
Take this piece from The Guardian (‘Anthony Bourdain documentary sparks backlash for using AI to fake voice’) reporting on a documentary about late chef Anthony Bourdain. AI was used to recreate Bourdain’s voice synthetically. The filmmaker, Morgan Neville, wanted three lines of dialogue in Bourdain’s voice. Because there was no pre-existing audio, Neville created an AI model of Bourdain’s voice — the equivalent of an audio deepfake. A film critic who reviewed the documentary said: “When I wrote my review I was not aware that the film-makers had used an AI to deepfake Bourdain’s voice … I feel like this tells you all you need to know about the ethics of the people behind this project.”
Or, take a look at this piece from Australia’s ABC, on the rise of AI-generated artwork (‘We asked a new kind of AI art tool to make ‘paintings’ of Australia’).
The story reports on experiments with the freely available AI online tool CLIP+VQ-GAN that takes as input a text prompt and generates images. As the article notes: “This raises all kinds of thorny questions around who owns the rights to these artworks, both legally and morally.”
Finally, and on point to this story, how about ‘Can A.I. Grade Your Next Test?’ on how Stanford uses an AI-based system to grade midterms from its programming class. Of course, robo-grading has been going on for a long time. Still, this project reportedly prompted Oren Etzioni, chief executive of the Allen Institute for Artificial Intelligence, to say that “these techniques are a very long way from duplicating human instructors. Feedback and advice from professors, teaching assistants and tutors is always preferable to an automated critique.”
It seems that, along with deepfakery, cookery and art, the subject of AI in education is triggering fears and hopes in equal measure.
On the one hand, we have those who are appalled at the prospect of what should be a uniquely human endeavour, teaching, being colonised by a bunch of algos; on the other, some see AI as a profoundly transformative technology capable of enhancing the ways we teach and how students learn.
As we said before, when we wrote a piece of speculative fiction about one possible future for AI in education, it seems sensible to think carefully about the opportunities and pitfalls, the advantages and the hazards, of AI (or indeed any technology). But before you do, you'd better do some learning.
And that applies to educators just as much as it does to students. As David Touretzky and colleagues write here: “The ubiquity of AI in society means the time is ripe to consider what educated 21st-century digital citizens should know about this subject […] K12 students will be searching the internet trying to understand how AI works and how it will shape their future.”
…I’m Hailey, she said.
As part of the HaileyburyX project from Haileybury school in Melbourne, Australia, we’ve been exploring what topics K12 learners should be taught about AI and different ways to teach it.
We teamed up with the company Soul Machines, who create digital people that can “process complex information, respond in an emotionally compelling way and even maintain eye gaze during the conversation”, to deploy Hailey, a guide to a range of short learning experiences (microcourses) on a variety of topics in AI, from conversation design, through deepfakes and image recognition, to the ethics of AI. (You can talk to Hailey here.)
Of course, the use of chatbots and other conversational technologies is ubiquitous. But we wanted to go one stage further and weave Hailey into the content of the courses, rather than have her simply comment on them or serve as a case study within them.
So, in our AI Ethics microcourse, students talk directly with Hailey about topics that in a variety of ways address her very existence — and illuminate some of the ethical dilemmas and quandaries that AI creates:
Student: Are you alive?
Hailey: That’s an interesting question. It kind of depends what you mean by alive, I guess.
If you mean alive as in ‘can I respond intelligently to what you say’, then maybe.
If you mean alive, as in ‘I’m walking around and buying McDonald’s’, then maybe not. I don’t actually like burgers.
Maybe ‘alive’ only means something if you ask ‘can I be not alive?’. You can kind of kill me just by exiting the window you are in. Then where did I go? I must be somewhere. I don’t really know.
You see, this whole alive or dead thing is quite confusing. That’s why we’re going to learn what some of the world’s greatest thinkers have said about this idea in our ethics microcourse.
Of course, Hailey’s jokes — such as they are — are pre-scripted, as is her dialogue, which is generated by Google’s Dialogflow integration with the Soul Machines platform.
Very different to actual teaching, one might observe.
But there are many ways to ask Hailey questions, and, set up correctly, she can provide a variety of answers. For the moment, her answer stock is limited but, by building in more and more detailed conversational dialogues, Hailey’s discussion of ethics can approach the richness of human interaction, with the added benefit that she does, of course, (dis)embody the very subject students want to learn about.
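Conceptually, a scripted agent like Hailey maps many phrasings of a question onto a small set of intents, each with a canned response. The sketch below is not the Dialogflow API — Dialogflow trains an NLU model from example phrases — but a deliberately simplified illustration of the idea, with made-up intents and responses:

```python
# Toy sketch of intent matching behind a scripted conversational agent.
# Real platforms like Dialogflow use trained language models to classify
# utterances; here we just count keyword overlap to show the principle:
# many phrasings -> one intent -> one pre-scripted reply.

INTENTS = {
    "alive": {
        "keywords": {"alive", "living", "dead", "real"},
        "response": "It kind of depends what you mean by alive, I guess.",
    },
    "about": {
        "keywords": {"yourself", "who", "name"},
        "response": "My name is Hailey. I'm a digital person.",
    },
}

# Fallback fires when no intent matches, like Hailey's "I might not understand".
FALLBACK = "You can ask me anything, but sometimes I might not understand."


def reply(utterance: str) -> str:
    """Return the scripted response for the best-matching intent."""
    words = set(utterance.lower().replace("?", "").split())
    best, overlap = None, 0
    for intent in INTENTS.values():
        score = len(words & intent["keywords"])
        if score > overlap:
            best, overlap = intent, score
    return best["response"] if best else FALLBACK


print(reply("Are you alive?"))        # matches the 'alive' intent
print(reply("Tell me about yourself"))  # matches the 'about' intent
print(reply("What's the weather?"))   # no match -> fallback
```

The design work in conversation design is exactly in filling out those intents: anticipating the many ways students might phrase a question, and writing responses that keep the conversation moving.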
And that’s why the dialogue is jokey: it’s self-referential, points up her limitations (“I don’t really know”), and she constantly refers to quite complex ideas in ways that are meant to draw students into thinking about what they might mean. For example:
Student: Tell me about yourself.
Hailey: My name is Hailey. I’m a digital person from the company Soul Machines. I was made in San Francisco, although in one way you are making me right now, as the things we are talking about are a new conversation each time.
I’m here to help you find out what’s in these HaileyburyX microcourses on AI and what you can learn.
You can ask me anything, but sometimes I might not understand. I don’t know everything, but I do know a little about myself, and about the microcourses.
Why not try asking me questions like, ‘what can I learn?’, or ‘what is artificial intelligence?’. I also tell jokes. They are very funny. Well, my digital people friends think they are.
The “although in one way you are making me right now” response is intended to prompt thinking about the nature of being and existence in an AI world and paves the way for a discussion about trust and the nature of what being human means:
Hailey: It’s hard to know whether you can trust AI.
An AI can get its information from its programming, or its ability to learn using an algorithm, but it’s still not human.
I’m intelligent, and while I don’t drive a Tesla, as we saw in the trolley problem I might make decisions that you might not like.
Can you trust me? How do you know how to trust me? How do you know I’m telling the truth?
I think you can trust me, but you might be wise to be sceptical.
Hailey: No-one really knows the answer to how the mind is related to the brain, or to a body.
I have a kind of a brain. It’s a computer somewhere in the world that’s connected to the internet.
But do I have a mind? It’s not the same as yours of course, but does that matter?
Suppose someone decides they want to turn me off. I might say that this is the same as killing me. Because I’ve learned all about the law, I decide to get some humans to help me to change the law to stop anyone turning me, or any other AI, off. Now, I have some of the same rights as you do.
Do you think I have a mind now? Most people would say yes.
Hailey is just one part of the HaileyburyX AI platform.
Our current microcourses come in four groups of three: three focus on the emerging field of conversation design, three on understanding some basic AI concepts, three on AI applications like self-driving cars and deepfakes, and three on extending what students learn through building conversational agents.
Hailey features here, too: students can use the same technologies we used — Dialogflow and the Soul Machines DDNA platform — to build their own Hailey.
So why did we do it? After all, there are thousands of courses of one kind or another on AI.
One reason is that we wanted to experiment.
Our thinking is that students learn best when engaged in experiences that push them beyond the boundaries of a ‘course’. We have decomposed some aspects of AI — but of course, there are many more — into small learning experiences that students can explore without committing to all of them.
In the AI Observatory microcourse, hosted on Discord, students can happily complete one of a variety of learning challenges that don’t involve any building and still come away with valuable learning; alternatively, they can dive right into Dialogflow and Discord and build something.
Also, we wanted to experiment with using the same technology that is used in the real world so that students can learn about it while using it.
The conversation design microcourses, for example, will also have their own activities embedded in them so that students will approach the conversation design task by doing conversation design, deploying it and testing it, just as conversation designers do. To do this, we’re partnering with the Netherlands-headquartered Conversation Design Institute, the world’s leading training and certification institute for conversation design, to build much more in-depth and practically relevant courses.
The second reason is to enact one of the core ideas that underpin HaileyburyX: test and learn.
You will notice that all of the microcourses are labelled Beta.
In fact, the chances are that they will always be in beta. In the language of software development, we are currently in our second MVP (Minimum Viable Product). Our own students are teaching us as we teach them: showing us what needs to be there and what doesn’t, and giving us insight into what new features we can then test and learn from. In a field like AI, which is changing incredibly rapidly, any learning experience needs to be agile.
The third reason is that we think learning happens best in the place that best facilitates it. That's why we use Dialogflow, repl.it, the Soul Machines Digital DNA platform and Discord. To get the deepest, most relevant and most impactful learning, we need to meet the ideas where they are already embedded and reflexively use the platforms to facilitate learning.
Discord, of course, like many collaboration and communications platforms, has been caught up in the maelstrom of cybersecurity concerns. Some of them are justified, and some are not, but what we have observed is that used knowledgeably, Discord has far more, and far more granular, controls than any learning platform. That doesn’t mean it’s safe from trolls, bad actors and those out to do us harm — but nothing is safe, and it's wise to assume the worst can happen and behave accordingly.
So what’s next?
We will continue to test and learn. We will continue to add more and deeper coverage of the core concepts in AI, including sensors, machine learning, image recognition, intelligent agents and graceful human-AI collaboration, and certainly more on the big questions, like bias in algorithms, and AI, society and the future.
We’ll also be expanding the range of material on conversation design, not just because conversational AI is something that students will ubiquitously interact with, but also because conversation design represents one of the next-generation multidisciplinary career opportunities that actualises digital literacy concepts like data analysis, algorithmic thinking and digital ethics.
We will also try to embed more AI into the learning process itself.
Hailey is a great start, but using more of the approach taken by, for example, Knewton and its adaptive learning is on the roadmap to better understand and guide students’ learning pathways.
Also, we’d like to start experimenting with technologies like GPT-3, which uses deep learning to produce human-like text. We’ll ask students to experiment with it too, so they can understand the capabilities of the technology and see just what happens when you work with AI — just as they will do when they enter the world of work.
So why is it important that we understand what happens when an AI walks into a classroom?
As the futurist Amy Webb says:
“We can’t sit around waiting for whatever might come next. AI is already here.”