[Header image credit: Ex Machina, Universal Pictures International]

What does AI mean for Education?

Graham Brown-Martin
Learning {Re}imagined
9 min read · May 26, 2016


Why are we training kids to compete with machines?

I was struck by a statement in this promotional video for IBM’s Watson AI technology that said,

“this generation of problem solvers are going to learn much faster with Watson”.

[Video: “What will you do with Watson?” — IBM Watson and Pearson]

In the 30 or so years I’ve spent working with digital platforms across the education and creative sectors, I’ve noticed that these sorts of claims appear every time a new bit of tech arrives. Watson, of course, is very smart technology. It hasn’t passed the Turing test, but it did beat the human champions on the TV trivia game show Jeopardy!

Winning at Jeopardy! proved Watson’s chops in natural language understanding (NLU), which means that you can ask it a question in a human language and it will respond quickly with an answer drawn from a database of facts. It achieves this with some impressive computing power. Designed to answer questions within 3 seconds, Watson’s main innovation is its ability to quickly execute more than 100 different language analysis techniques to analyse the question, find and generate candidate answers, and ultimately score and rank them. Watson’s knowledge base contains 200 million pages of structured and unstructured content, consuming 4 terabytes of disk storage. The hardware for Watson includes a cluster of 2,880 POWER7 processor cores and 16 terabytes of RAM, with massively parallel processing capability.
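Stripped of all that scale, the pattern itself — analyse the question, generate candidate answers, then score and rank them — can be sketched in a few lines. This is a toy illustration, not IBM’s code: the keyword-overlap scoring below is a crude stand-in for Watson’s hundred-plus language analysis techniques.

```python
# Toy sketch of the "analyse, generate candidates, score, rank" pattern.
# Not IBM's code: keyword overlap stands in for Watson's real analysis techniques.

def generate_candidates(question, knowledge_base):
    """Keep any fact that shares at least one word with the question."""
    keywords = set(question.lower().split())
    return [fact for fact in knowledge_base
            if keywords & set(fact.lower().split())]

def score(question, candidate):
    """Jaccard overlap between question and candidate words (crude evidence score)."""
    q, c = set(question.lower().split()), set(candidate.lower().split())
    return len(q & c) / len(q | c)

def answer(question, knowledge_base):
    """Rank the candidates and return the best one, or None if nothing matches."""
    candidates = generate_candidates(question, knowledge_base)
    return max(candidates, key=lambda c: score(question, c), default=None)

kb = [
    "paris is the capital of france",
    "watson beat the human champions on jeopardy",
    "the turing test was proposed by alan turing in 1950",
]
print(answer("who proposed the turing test", kb))
# → the turing test was proposed by alan turing in 1950
```

The point of the sketch is that none of this is “thinking”: it is retrieval plus ranking, done at enormous speed and scale.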

Now that’s not the kind of technology you carry in your pocket right now, and the conversations around artificial intelligence aren’t new. Interestingly, the widely credited pioneers of AI, at least in the field of education, were Seymour Papert and Marvin Minsky of MIT, who had been working on this since the 1950s.

The reason that AI has returned to vogue is that, although it’s incredibly compute-intensive, we can now offload the processing to the cloud whilst using a portable device such as a smartphone or wearable as the interface. It’s this type of approach that makes possible the translation and voice recognition systems that we are beginning to take for granted.
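That thin-client, heavy-cloud split can be sketched as below. The payload shape and function names are hypothetical, and the “cloud” side is simulated in-process here; in a real service it would sit behind a network API.

```python
import json

# Sketch of the thin-client / heavy-cloud split: the device only packages the
# request; the compute-intensive work happens server-side. The payload shape is
# hypothetical and the "cloud" is simulated in-process rather than over a network.

def device_build_request(utterance):
    """Runs on the phone or wearable: trivial packaging work only."""
    return json.dumps({"query": utterance, "lang": "en"})

def cloud_handle_request(raw_request):
    """Runs in the data centre: stands in for heavyweight NLU or translation."""
    query = json.loads(raw_request)["query"]
    return {"result": query.upper()}  # placeholder for the expensive model

response = cloud_handle_request(device_build_request("where is the station"))
print(response["result"])
# → WHERE IS THE STATION
```

Notice where the asymmetry lies: the device does almost nothing, which is exactly why a watch or a toy can appear to “understand” you — and why your words have to leave the room to be understood.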

It’s also why some of us get the jitters when we discover that our voice controlled devices or children’s toys are acting as surveillance devices. Throw enough computers together in the cloud and give them enough data and almost anything is possible.

The Turing Test, on the other hand, is currently the gold standard for AI. The idea is that a judge, without knowing which is which, interrogates a human and an artificial entity to assess their problem-solving capability, being exposed to the actions of both and to the outcomes. Where the judge cannot tell the difference between the two, the machine is regarded as having passed the test. The test takes its name from Alan Turing, who in his seminal 1950 paper “Computing Machinery and Intelligence” posed the question “Can machines think?”.

Turing wasn’t the first to ask this question. René Descartes, for example, asked similar questions in his 1637 Discourse on the Method and ended up with the philosophical statement “I think, therefore I am”. This really boils down to consciousness, which is significantly different from what we mean by “weak AI”, which by its nature is a simulation of intelligence rather than consciousness. It’s the kind of non-sentient intelligence that a chess computer game uses.

Before we go down the rabbit hole of defining what we mean by intelligence, it’s worth noting that the Turing Test is flawed, given that humans aren’t great problem solvers by comparison when it comes to things like calculation or information retrieval. Yet we spend years of our children’s education pointlessly training them to compete with machines in this regard. The same is true of retrieving facts with precision and accuracy, hence Watson’s win at Jeopardy!

“Strong AI”, on the other hand, holds that a correctly written program running on a machine actually is a mind. The trouble with this idea is that it assumes the human mind is merely a computer and that our thoughts are like software. The theory goes that if we just have sufficient computing power and the right software to emulate the mind, the machine will achieve sentience. This, of course, is the stuff of science fiction, although there are people such as Google’s chief futurist, Ray Kurzweil, who believe that we’ll be able to achieve immortality by uploading our consciousness to computers by 2029. Well, it must be true, it was in Playboy magazine.

I’d suggest that Kurzweil and his peers are driven by a misguided metaphor. Senior research psychologist Dr Robert Epstein argues that the human brain isn’t a computer. He cites the work of AI expert George Zarkadakis, who describes six metaphors that people have employed over the past 2,000 years to explain human intelligence, ranging from the biblical notion that humans were made from clay and infused with “spirit”, through Descartes’ assertion that humans are complex machines, to the comparison with computers that emerged in the 1940s. Essentially, each metaphor reflected the most advanced thinking of the era that spawned it. Epstein argues that at some point in the future, when our technology advances, we will discard this brain-as-computer metaphor in the same way that we discarded the hydraulic model of human intelligence, which we had held on to for some 1,600 years from the 3rd century BCE.

My position on this, and it’s really based on intuition rather than expertise, is that “Strong AI”, i.e. machine consciousness, isn’t going to happen anytime soon, and even if it did it would raise huge ethical questions, such as how you dispose of “self-aware” prototypes. I also think there’s a conundrum in that once you know how something works it doesn’t seem so intelligent anymore, and thus it’s no longer “Strong AI”.

So what about all this AI-related news we keep reading in the media, telling us it’s going to take all our jobs?

Well, here’s what I think. Western society is in transition as it reaches the logical conclusion of an industrialisation process, started around 1760, in which we’ve built smarter and smarter machines that replicate and replace anything we can measure: from the railroad and cars replacing horses to factory machines that transformed craft production into mass production. Whilst a future that contains sentient machines and Strong AI is uncertain, what is certain is that the computer processing capability to provide NLU, instantaneous fact recall and simulated problem-solving is just around the corner, relatively speaking. Perhaps no more than 10 years away.

Whilst some of this will appear to be intelligent, it will be like a sophisticated chess computer, i.e. it will still be what boffins call “weak AI”. But tasks that rely on measurement, rapid fact recall and analysis will be replaced by AI. In a sense, the last couple of hundred years of industrialisation, and the capitalism upon which it was built, have been leading to this moment.

The AI that we read about in sales brochures, promotions or news broadcasts is just algorithms that simulate intelligence very powerfully, often based on huge datasets. Soon almost everything you buy, at least digitally speaking, will boast about its AI capability, but let’s be absolutely clear about this. It’s a simulation, with all the same biases as anything else that originates from human minds and hands.

When we hear about AI being used for education and learning, and we’re going to be bombarded with this in every sales presentation before long, it’s from the perspective of the last century’s understanding of what school and education are for. What I mean by this is that AI-assisted technology will be designed to process students towards passing tests and, where possible, to replace teaching staff.

Given the volume of teachers leaving the profession, as well as the substantial increase in demand for teachers as the world attempts to meet the UN’s commitment to the Sustainable Development Goals (SDGs), replacing teaching staff with machines seems like a pretty good bet. Machines don’t unionise, don’t get sick, don’t suffer from stress, don’t need a salary and are 100% consistent in delivering a curriculum and testing assimilation. What’s not to like?

Ed Rensi, the former CEO of McDonald’s, has already suggested at a shareholders’ meeting bringing in robots as thousands of McDonald’s workers demand a union and a $15-an-hour minimum wage.

In the education world we’re already seeing commercial organisations, for example Bridge International Academies, actively pursuing strategies against teaching unions in countries where they can get away with it. Some argue that this is to disrupt the status quo, whereas others argue it is a way of reducing both costs and the quality of provision.

Recently a university teaching assistant was replaced by an AI without students even noticing, which, in my opinion, speaks volumes about our outdated approach to education.

But let’s look at this from a different perspective.

What if the AI was student-centred and every student had their own personal AI that they had grown up with and learned to work with to solve complex, abstract problems?

Well, I’m pretty sure that such a technology would, like smartphones today, be banned from many of our classrooms and definitely from the examination hall. Yet the world that these same students, the ones in school today, will join will be one augmented by extremely sophisticated “weak AI”: AI that can understand and respond in natural language, and that can retrieve and analyse facts and information far faster than any human mind.

As humans our only advantage over these machines is that we do, in fact, possess “strong AI” and yet we have an education system that demands we compete with the “weak AI” of machines. To me, this just doesn’t make sense.

One has to ask at what point we “jump the chasm” and accept that students will be growing up with personalised AI systems that will help them navigate their world and massively amplify their problem-solving capacity. Or will we continue to pretend that this century and its affordances stop at the school gates so that we can tacitly maintain the business models of corporations from a bygone era?

Please touch the 👏🏼 symbol to recommend this story so that others in your network see it and I will feel joy, and don’t forget to follow so you don’t miss further updates. Please share via your favourite social networks.

Unless specifically stated, opinions and points of view shared are my own.

I talk for money, if you’d like me to present this work in a keynote for your conference or meeting please get in touch.

An entertaining & thought-provoking slayer of sacred cows, Graham Brown-Martin works globally with senior leadership teams to help organisations adapt in the face of rapid change & innovation. By challenging entrenched thinking he liberates teams to think in new ways to solve complex challenges. His book Learning {Re}imagined is published by Bloomsbury and he is represented for speaking engagements via Wendy Morris at the London Speakers Bureau.
