If the future of intelligence is artificial, what is education for?

Andy Powell
Published in Foundations
4 min read · Aug 15, 2020
Image by @franckinjapan on Unsplash

I just stumbled across a couple of blog posts by Graham Brown-Martin. The first is entitled Why don’t you design a school? (https://medium.com/regenerative-global/why-dont-you-design-a-school-19409e6316db) and the second University as a Service (UaaS) (https://medium.com/regenerative-global/university-as-a-service-uaas-b729667b216e).

Both are worth reading, and both touch on issues related to changes in education driven by the fourth industrial revolution.

In my opinion, the university one spends too long using Napster, Apple and the music industry as an analogy for what might happen in education. I don’t think the analogy works and I think that Graham ultimately makes the same point. The school one includes a YouTube link to a panel debate at CogX 2020 between Graham, Sir Anthony Seldon and Priya Lakhani (chaired by Alex Beard): Code of Ethics: How do we safely navigate the world of Edtech (https://www.youtube.com/watch?time_continue=2424&v=hAyX6PwN8KE&feature=emb_logo).

Again, worth watching.

I can’t say that I agree with everything in the panel debate, and it is worth noting that it doesn’t focus exclusively on ethics, but the big takeaway for me is that it is pointless to think about the application of AI in education without fundamentally reconsidering the purpose of education in the modern age. If we simply see AI as a way of delivering the current 20th-century education system more effectively or efficiently than we can today, through personalisation and better analytics for example, then we are missing a trick.

In the panel debate, most of that thread is concerned with the purpose of schools, but I think it applies equally in the context of HE and FE.

Part of the debate is essentially about “why do we ask children to remember facts?” and “why do we get children to pass traditional-style exams?” and, actually, I can think of lots of reasons for at least the former.

I remember that when I left school (yes, it was a long time ago) the debate about whether students should be allowed calculators in O-level exams was just starting, with the two sides lined up behind a range of arguments, from “well, they are going to use calculators in the real world, so why shouldn’t they use them in exams?” to “multiplication is an important cognitive skill, and it is beneficial both to learn it and to test it even if the context of its application is changing”. I tend to side with the second position more than the first.

I’m pretty sure that my own kids went through their schooling without ever properly being forced to learn their times tables, at least not in the same way that I was forced to learn them. Rote learning had fallen out of favour at the time, though it may have come back into fashion since. But actually, having the product of two numbers immediately in your head is a really useful skill (we all get it eventually), and it helps us to ascertain whether the numbers and formulas we have just plugged into Power BI are actually giving us a reasonably sensible-looking answer.
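
As a toy illustration (every number below is invented for the example), this is the kind of order-of-magnitude check that mental arithmetic lets you run instantly against a dashboard figure:

```python
# Order-of-magnitude sanity check on a dashboard figure.
# All numbers here are made up purely for illustration.
rows = 12_000                   # e.g. transactions in a report
avg_value = 83.0                # average value per transaction
dashboard_total = 9_960_000.0   # the total the dashboard displays

# Mental-arithmetic estimate: 12,000 x 80 is roughly 1,000,000
estimate = rows * avg_value

# Flag anything more than a factor of two away from the estimate
if not (0.5 * estimate <= dashboard_total <= 2.0 * estimate):
    print(f"Suspicious: dashboard shows {dashboard_total:,.0f}, "
          f"but a rough estimate gives {estimate:,.0f}")
```

Here the dashboard figure is out by a factor of ten, which is exactly the kind of error that someone who knows 12 × 8 off by heart spots before the report goes out.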

In the video, Anthony Seldon refers to ‘the Knowledge’ (the detailed mental map of London that all London taxi drivers have to learn) as a thing of the past because we can now rely on GPS and Google Maps. Well, no, actually… I’ve been in taxis where the driver clearly only knows where they are going because they are following their phone, and it doesn’t inspire all that much confidence. There are plenty of subtleties to navigating the road system that Google’s AI engines haven’t totally mastered. Of course, there is knowledge available to Google (the fact that traffic is stationary two streets away, say) that isn’t available to an individual driver, and so the ‘right’ answer is probably a hybrid one. AI augments our intelligence rather than replacing it.

I’m reminded of my most recent ‘educational’ experience, which was re-taking my AWS Solutions Architect Associate exam (this takes perhaps 20 or 30 hours of study, coupled with significant real-world experience, and so is not a huge educational undertaking). But it is interesting how much the exam has changed since I last took it: from a fairly straightforward ‘regurgitating facts’ type of exam (“What kind of storage is AWS Glacier?”) to a completely scenario-based one (“Company X needs to store their legal documents securely for at least 10 years; which of the following options does that most cost-effectively?”).
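
To make that scenario concrete: a question like that typically points towards something like S3 with a lifecycle rule transitioning objects into an archival tier such as Glacier Deep Archive. Here is a minimal sketch using boto3; the bucket name and timings are hypothetical, not taken from any exam question:

```python
# Minimal sketch: transition objects to Glacier Deep Archive for
# long-term, low-cost retention. Requires valid AWS credentials.
import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="example-legal-documents",  # hypothetical bucket name
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-legal-docs",
                "Filter": {"Prefix": ""},  # apply to every object
                "Status": "Enabled",
                "Transitions": [
                    # Move to the cheapest archival tier after 30 days
                    {"Days": 30, "StorageClass": "DEEP_ARCHIVE"}
                ],
                # Deliberately no Expiration rule: the documents must
                # be retained for at least 10 years
            }
        ]
    },
)
```

In practice you might also enable S3 Object Lock to guarantee the retention period, but knowing which storage class trades retrieval time for cost is precisely the applied knowledge the newer exam format is testing.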

The new format requires you both to have remembered the knowledge and to be able to apply it to real-world situations. Of course, the exam still doesn’t reflect the actual real-world situation, where you would typically be solving problems collaboratively with a combination of colleagues, the customer and the AWS documentation (and possibly even AWS support staff; knowing when to ask for help is possibly the most important skill of all, and one that a surprising number of people haven’t really grasped, in my experience!). But it is a step in the right direction.

I think that Graham wants an exam system that properly reflects the fact that the real-world application of learning is inherently collaborative. I have no idea how one would do that and, if I’m honest, I’m not yet convinced that it is necessary to go that far. But, as he argues in the debate, if an AI can now pass an 8th-grade SATs test, why on earth do we still ask real live children to take that same test? At some stage, the nature of what we are testing, and hence the nature of education itself, has to change.
