Future Education: Analytics Trump Examinations in Gamified E-learning
A migration to analytics would bring efficiency to the education evaluation process. It would boost learner motivation. It would measure thinking throughout the year rather than memorisation at year end. It would engender year-round focused work rather than periodic cramming. It would enable real-time remediation and continuous improvement throughout the year, cumulatively resulting in enhanced intellect and greater achievement by year end. And it would allow the creation of multi-dimensional learner competency profiles instead of a simplistic score.
A teacher in a classroom cannot monitor each learner’s choices, actions and progress second by second every day, which is why evaluation defaults to periodic testing. Different teachers have different standards, so graduation evaluation defaults to national examinations. Importing the analytic detail of online gaming into the world of education would be a great leap forward. But for it to work comprehensively, learning itself would have to be self-paced and digital. And while learners would not necessarily all have to use the same elearning products, the derivation of the analytics within those products would have to be aligned to an agreed national norm. Using AI to do so dynamically should be an interesting project.
Examinations are as fundamental to the school experience as classrooms and teachers. It is hard for anyone to imagine education without them. For learners, teachers, government departments, universities or employers, the examination ranks, filters and accredits. It’s time that changed.
What’s wrong with examinations?
Examinations are a periodic, batch-processed sampling of a learner’s knowledge and skills at a point in time, with outcomes extrapolated to represent the ongoing breadth and depth of their expertise. This is a poor measure of ability and, arriving too late, does not help the learner to improve in, let alone love, the subject. The stress and anxiety created by the anticipation of an exam undermine the love of learning and degrade the efficacy of the assessment process. The teacher, as a subjective interface between curriculum texts and the learner, is charged with the warm fuzzy stuff like motivation, and in an ideal world that may be effective. But few learners live in an ideal world.
In industrial terms, exams are like a quality control process at the end of the production line — a concept superseded in the 1950s by Deming’s continuous improvement approach, which monitors quality throughout the process. In 70 years that proven approach has not made the leap from business to education.
This is possibly because continuous improvement requires the ongoing gathering of data, while schools can gather data only intermittently. There’s a political obstacle too, in that data is seen by many teachers’ unions as a measure of the quality of teaching rather than of the quality of learning. And there’s the privacy concern — just how much of what a learner thinks or does needs to be invoked to judge their abilities, and how much is unacceptably intrusive?
Whatever the obstacles, the mechanisms of schooling to date have not facilitated continuous objective measurement of learner progress. It falls to each teacher to subjectively evaluate how each learner is doing and to take whatever remedial action is possible within the constraints of the classroom system. This is fine where the teacher is caring, insightful, competent and not overwhelmed by work volumes or learner numbers. Having learning synchronised so that each cohort moves at the same pace through the same material makes subjective intercession more manageable. But the process is teacher-expedient, not learner-centric — each learner learns within the time and attention constraints of the teacher, and the scheduling constraints of the school.
A common criticism of the exam process is that results are prone to distortion by last-minute cramming of subsequently forgotten content. But a more compelling reason to replace or de-emphasize exams is that they are detrimental to the learning process. First, school exams have been part of every scholar’s education nightmare for centuries. And fear is a lousy motivator. Second, learning to memorise is easier than learning to think, and the impact of the former on the evolving intellect of a teenager is not nearly as positive as the impact of the latter. Analytics systems in digital learning use second-by-second challenges and feedback to build motivation, self-awareness, confidence, commitment and learning momentum — the opposite impact of exams. The best teacher in the world cannot do that, not even in a one-to-one situation.
In video games, analytics systems monitor the actions of each participant and reward them in real time. Moment by moment, new data is collected and new options and experiences relevant to the skills and performance of the participant are opened up. This makes gaming fast, compelling and endlessly motivating. How dull it would be if you had to work through a level for months before scores were updated or gameplay changed. Yet in school, that’s what the exam system does.
What is gamification?
“Gamification” is a contrived and imprecise term used for a range of approaches from the pedestrian to the sublime. Gamification is not about literally playing games, but about using the same mental challenge/reward mechanisms for learning which make playing games so compelling. Properly done, gamification is a sophisticated echoing of the immersive psychological states of gaming without necessarily involving what people normally think of as games.
The term “gamification” was coined originally to describe the adding of trivial fun elements to an activity in order to lighten the user’s experience. And, for the most part, that is where it is stuck. Most gamification is lipstick on a pig. A multiple-choice test is dull, but “gamify” it as a quiz with appropriate sound effects and it becomes more interesting. A more sophisticated alternative is game-based learning (GBL), in which a participant learns as a by-product of playing a game. This literal gamification can be a lot more powerful, but it risks leaving knowledge and skill subliminal, with learners never overtly understanding what they have learned or being able to apply it in a different scenario. Bloom would probably not approve.
The most advanced form of gamified learning makes the learning process overt within an immersive and compelling learner experience (LX). It blends structured teaching in a captivating, story-telling environment, with self-assessment and challenge-based application requiring cumulating competencies. This elearning architecture appeals to the same core drivers that create engagement in gamers: competency, autonomy and relatedness. To achieve immersion we must mentally remove the learner from their everyday classroom-teacher-textbook environment and project them into a more receptive and intriguing spatial presence. Keys to this are crafting situations which allow the learner to suspend disbelief, and fostering a high level of involvement.
In developing the gamified learner experience, the entire curriculum is re-envisaged as an interactive narrative, leveraging creative characters, scenarios, challenges and a mission that arcs through the academic year’s curriculum. The LX uses immersive engagement spirals, which help learners to master and apply the curriculum in order to advance in a mission. Ongoing micro-spirals of learn-practice-play-analyse-remediate-reflect keep the learner in flow, neither anxious nor bored.
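The engagement loop described above can be caricatured in code. Here is a minimal sketch, assuming a hypothetical difficulty tuner and an arbitrary 60–85% success "flow band" (both are illustrative assumptions, not part of any real product), that nudges the challenge level to keep skill and difficulty matched:

```python
# Illustrative sketch (all names and thresholds are hypothetical):
# one analyse/remediate step of the micro-spiral, tuning difficulty
# so the learner's recent success rate stays inside a "flow band".

FLOW_LOW, FLOW_HIGH = 0.60, 0.85  # below: anxious; above: bored (assumed band)

def next_difficulty(current: float, recent_success_rate: float) -> float:
    """Nudge the challenge level (0.0-1.0) to keep skill and difficulty matched."""
    if recent_success_rate > FLOW_HIGH:   # too easy: raise the challenge
        return min(1.0, current + 0.05)
    if recent_success_rate < FLOW_LOW:    # too hard: remediate and ease off
        return max(0.0, current - 0.05)
    return current                        # in flow: hold steady

# A learner cruising at 90% success gets a slightly harder next task:
print(next_difficulty(0.50, 0.90))  # 0.55
```

The design point is the tight loop: the adjustment happens after every few tasks, not after a term.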
The LX places the learner in a state known to game designers as cognitive flow, an immersive sweet spot where skill matches difficulty, goals are clear and feedback is real-time. It combines cognitive science with the neuropharmacology and psychology of gaming. Two core principles are invoked. First, brain chemistry makes us less excited by the successful outcome of an activity than by the anticipation of that outcome; we become more addicted to processes than to results. This is why, winning or not, a video-game player (or a gambler) just keeps on going. Second, a large reward a long time in the future is much less attractive than a small reward available immediately. The persistent micro-challenge/instant-reward cycles gain far more rapt attention than the possibility of a class prize at year end. Learning how to meet and beat those challenges gives learning an immediate purpose, while building to a greater goal.
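The second principle corresponds to what behavioural scientists call hyperbolic discounting. As a minimal illustration (the formula is the standard hyperbolic model; the function name, point values and the impulsiveness parameter k are assumptions made for this sketch):

```python
# Hyperbolic discounting: subjective value V = A / (1 + k * delay).
# k is an assumed "impulsiveness" parameter; delay is in days.

def subjective_value(amount: float, delay_days: float, k: float = 0.05) -> float:
    """Perceived value of a reward of size `amount` arriving after `delay_days`."""
    return amount / (1 + k * delay_days)

# A 100-point class prize at year end (roughly 270 days away) feels
# worth less than a 10-point badge awarded right now:
print(subjective_value(100, 270))  # ≈ 6.9
print(subjective_value(10, 0))     # 10.0
```

Under this assumed model, the small instant reward dominates, which is why gamified systems pay out in seconds rather than semesters.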
Using analytics to provide multi-layered rewards and acknowledgements via learner dashboards, badges, boosts, Easter-eggs, upgrades and leader-boards is vital in this approach. Unlike examinations, analytics do not simply measure success, they continuously tune the learning process.
The challenge of accreditation
What of the ability to accredit? After all, final exams produce a finite quantified result which lets the gatekeepers of the next level up in learning, or potential employers, know how well the individual did in that particular test. Since exams are a sampling of the learner’s ability, results really should come with a statistical confidence interval — so you could say, for example, that if the individual took many similar exams, 95% of their scores would fall within three percentage points either side of this one. Instead, exams simply ignore statistical significance and hope nobody will notice that a learner’s score may not be representative of their true ability.
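The sampling point can be made concrete. Treating an exam as a sample of n independent items, a normal-approximation confidence interval for the learner's underlying mastery can be computed. This is a deliberate simplification (real psychometrics would use item-response models), and the function name is hypothetical:

```python
# Simplified sketch: a 95% normal-approximation confidence interval
# for the "true" proportion mastered, given an exam score.
import math

def score_confidence_interval(correct: int, n_items: int, z: float = 1.96):
    """Return (low, high) bounds on underlying mastery, clipped to [0, 1]."""
    p = correct / n_items
    margin = z * math.sqrt(p * (1 - p) / n_items)
    return max(0.0, p - margin), min(1.0, p + margin)

# A 70% score on a 50-item exam is consistent with true mastery
# anywhere from roughly 57% to 83%:
low, high = score_confidence_interval(35, 50)
print(f"{low:.2f} - {high:.2f}")  # 0.57 - 0.83
```

Even under these generous assumptions, a single exam score carries an uncertainty band wide enough to swallow a grade boundary.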
Can analytics evaluate competence with any degree of credibility? Of course they can — the data can be parsed and modelled in any number of ways and, using the same algorithms for all participants, can produce the equivalent of a passing score — or several scores, for knowledge, say, and for skill, efficiency, ingenuity, or persistence. Analytics systems can track all activity, including time taken, research done, false starts, mistakes, intuitive leaps, and improvements achieved — all at a very granular level. Given the data, algorithms (and increasingly AI) may derive many of the subjective evaluations that teachers currently derive, but those evaluations would be comparable across an entire national cohort.
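As a toy illustration of such parsing (the event fields, metrics and names are all assumptions for this sketch, not any real product's schema), a handful of granular activity events can be rolled up into a per-skill competency profile rather than one score:

```python
# Hypothetical sketch: deriving a multi-dimensional competency profile
# from granular activity events instead of a single exam score.
from collections import defaultdict

def competency_profile(events):
    """events: dicts with assumed keys 'skill', 'correct', 'attempts', 'seconds'."""
    by_skill = defaultdict(list)
    for e in events:
        by_skill[e["skill"]].append(e)
    profile = {}
    for skill, evs in by_skill.items():
        profile[skill] = {
            "accuracy": round(sum(e["correct"] for e in evs) / len(evs), 2),
            "retries_per_task": round(sum(e["attempts"] for e in evs) / len(evs), 2),
            "avg_seconds": round(sum(e["seconds"] for e in evs) / len(evs), 1),
        }
    return profile

events = [
    {"skill": "algebra", "correct": 1, "attempts": 1, "seconds": 40},
    {"skill": "algebra", "correct": 0, "attempts": 3, "seconds": 120},
]
print(competency_profile(events))
# {'algebra': {'accuracy': 0.5, 'retries_per_task': 2.0, 'avg_seconds': 80.0}}
```

Because the same aggregation runs over every learner's event stream, the resulting profiles are comparable across a cohort in a way a teacher's subjective impressions are not.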
There are some disadvantages to replacing examinations with analytics. A recognised national standard may seem to disappear, replaced by something many would find hard to understand. But this is no reason for rejection. Educators also have difficulty accepting that proctoring becomes unnecessary if appropriate security systems are in place — an exam event is more susceptible to cheating than is a year of micro-monitored activity.
Disruption is not easy
The biggest downside is that the curriculum itself has to be rethought, and a compelling pedagogy friendly to self-study defined and architected. A new perspective on curricula has to include a regime of continuous challenges, monitoring and remediation. It also has to adhere to evolving best practices in micro-motivation. Current textbook publishers wouldn’t welcome that, since the dry linear way in which knowledge is codified in textbooks doesn’t work in an online, experience-rich environment. Nor would they welcome analytics being able to determine definitively, from learner performance, which provider has the most effective products.
Though digital infrastructure is still unreliable in some parts of the world, this is rapidly being resolved by commercial and government initiatives. With learner-owned smartphone ubiquity growing there is no need to supply learners with tablets — an expensive and unsustainable 20th century approach so popular with the governments of developing countries. IT, devices and connectivity are not the problem — budgets should rather be focused on creating educational content and teacher-independent learner experiences.
Some of the concerns about analytics focus on potential Big Brother intrusiveness or de-personalisation, and this territory would have to be carefully navigated and secured. Other concerns centre on the post-examination era role of educators and publishers. And others focus on infrastructural practicalities. None of these are unmanageable obstacles, and all pale relative to the benefits.
For digital analytics to be acceptable at a national level, there’s a lot of coordination and integration work to do. In a staged approach, an interim step may be to gamify evaluation only, with teaching taking place as usual. A national mobile story-based game for each subject, which runs the entire academic year, would enable learners to move forward as they master each element of that subject’s curriculum. This would give the sector time to acquire the expertise to up-level its digital capabilities — something which should have begun two decades ago.
But that’s not how disruption happens. The incumbents will drag their heels until outsiders arrive to eat their lunch. Then, almost overnight, the industry leaders will disappear, replaced by a few rising stars who deliver creative, compelling, dramatically affordable and profoundly effective solutions. And analytics will become the new examinations.
+++
I am a cofounder of MindZu, a STEM edtech company whose mobile learner experiences are driven by game psychology and analytics. I shipped my first elearning product via courier on a deck of floppy discs, and I founded an educational computer games company a decade later in the year the World Wide Web was born. For ten years I was both the official ‘elearning guru’ of the American Society for Training and Development (now ATD), and a judge of the Brandon Hall International Learning Excellence Awards. I have run elearning companies in Zurich, London, Washington D.C. and Cape Town.