Testing, testing, 1, 2, 3?

Reflections on multi-stage testing from the pre-pilot of a Gamified Learning Measurement Tool

Earlier this year, we gathered a small group of children in Jordan to test a new digital learning assessment. It was a moment of fun and excitement for them: they were happy to play, read, learn, do maths and be reunited with friends after many months without school. Since the pandemic started, so many children have experienced disruption to their education that this scene could have taken place in any classroom in the world. But these were refugee children, and the disruption caused by the pandemic is not the only reason they were happy to be learning; their education has been affected for far longer by displacement and conflict.

In recent years, there has been growing interest in how digital interventions such as e-learning might be used in humanitarian settings like these. It has led to exciting innovations like War Child Holland’s Can’t Wait to Learn programme. More recently, the number of online tools and digital education projects has soared during the pandemic, when nine out of ten children globally experienced some learning at home due to school closures. This makes it a timely moment to examine the effectiveness of these kinds of initiatives on children’s learning progression and to draw lessons for refining programmes as they scale up.

Children testing the game

However, we found several barriers in using traditional methods to assess digital learning interventions’ impact on learning outcomes. Traditional learning assessments usually take place in a classroom, whereas many of our learners are home-based. They are also time-intensive, as already over-burdened teachers must mark and collate them. This is particularly the case for those working in over-crowded schools with large numbers of children, like many in humanitarian situations. Finally, they often lack important content on learners’ social and emotional progress, missing key information on how children are feeling and experiencing the learning process. This is especially important for children who have faced the upheaval, trauma and uncertainty of displacement and conflict.

Recognising these issues, War Child Holland, NYU Global TIES for Children and the HEA, with the support of Porticus, began to explore how effective a gamified, browser-based solution might be at conducting learning assessments. This resulted in the Gamified Learning Measurement Tool (GLMT): an interactive, fun, digital solution that provides real-time assessment data to teachers and is accessible to learners wherever they are.

We wrote previously about the collaborative design phase of the tool and what we learnt about developing a gamified intervention that integrates all necessary elements of a valid formative assessment. We are now in a three-stage testing process in partnership with the War Child team in Jordan, which is feeding into the tool’s iterative design. We are learning many lessons in how to build this kind of digital solution, reinforcing the importance of multi-stage testing in the pilot process.

Why are we multi-stage testing before piloting?

Each stage of our testing process focuses on refining different facets of the assessment tool. At the Alpha testing stage, we focused on making sure that the navigation was smooth, identifying technical bugs, improving functionality and looking at any tweaks needed to the visuals and narrative. From there, we refined the design and built a full prototype, ready to be tested further.

Game interface with the Bee character

Rather than going directly into pilot testing, we first wanted to Beta test the prototype to drill down into some core elements of the assessment tool. We wanted to understand:

  • Are there elements that are confusing, or that will become obstacles in the pilot study if we do not address them?
  • How much support do teachers and children need to interact with the assessment?
  • How much support is needed for caregivers if we pilot at home?

Overall, teachers, caregivers and children all gave very enthusiastic feedback about the prototype and expressed that it is both a great support for teaching, and very motivating for learning. Central to the game is the main character, a bee, who motivates children to take the assessment, along with other elements of rewards and praise. Caregivers noticed and enjoyed these elements, citing them as motivational. Teachers confirmed that they would use it with their own students. They also spoke about the game’s usefulness as a learning support and in self-assessment.

“The idea of a bee that talks to children, motivates and encourages them, is very beautiful.” — Caregiver, Jordan

Teachers, caregivers and children also gave us invaluable ideas to strengthen the assessment tool to achieve its objectives. Testing in this way has had significant benefits in improving the piloting process:

  1. Testing with a small group surfaced bugs that we can fix before they disrupt pilot testing. Alpha testing was run using an English version of the assessment tool with an English-speaking team. Before Beta testing, the tool was fully translated into Arabic to test with Arabic-speaking children, parents and teachers, but that did mean there were a few bugs in the tutorials, such as layout issues that made some of the questions hard to understand. Our aim is that by the time we pilot the GLMT, there will be minimal bugs and we can focus on learning about the content and usage itself. An example is connectivity issues, which we had considered deeply during design but which only became evident when testing the tool in the field. We now have a chance to resolve these issues before piloting.
Game interface with avatars

  2. It gave us an excellent chance to observe children using and navigating the tool. This gave us fresh insights into ways we can refine the design to make it even more accessible, both in functionality and content. Largely, the children found the game straightforward to use independently. However, we saw that children took much longer to complete the assessment than anticipated, so we will make it shorter. Our observations also showed that the “Help” function needs to be even more prominent than we had thought: children tended to look to adults for help rather than seeking it within the game, so in-game help needs to be clearly signposted and incentivised. We also learned that social and emotional learning (SEL) surveys need to be pitched at appropriate levels for children of different ages. Whilst children over the age of 9 showed enthusiasm and a deep engagement with the SEL questions, younger children had a much simpler interaction with the SEL content. Based on this, the team will adapt this section for piloting to make it inclusive of younger children.

“A feeling of happiness was observed among children of the age group 9 and above when they started this section and rushed to answer its questions.”

  3. Prioritising what you need to find out at each stage of testing, and planning the time well to achieve this, is important. We were ambitious in what we wanted to discover at the Beta testing stage. Given the time taken to complete assessments, there wasn’t sufficient time to also gain feedback on the teachers’ dashboard, where scores and assessment results are shown. The dashboard also hadn’t been translated into Arabic before Beta testing began, but this will be completed ready for the pilot stage, so we can walk teachers through the entire dashboard in focus group discussions.

  4. Tutorials and training for gamified solutions are in high demand. In the current prototype, there are student tutorials, which we will continue to develop. These are short explanations embedded in the game on how the platform works. Next year, we will also develop tutorials for the teachers’ interface. As for the training, our plans to develop training sessions and materials were affirmed by feedback from teachers and caregivers. The training will focus on how to use the tool and how to link it to teaching practices. We are already preparing a caregivers’ video to explain their role when their children take the assessment from home.

Game interface with question and answers

What happens next?

Alpha and Beta testing have prepared the ground well for us to pilot. The testing has revealed some very useful points for us to bring back to the development team, working together to adapt and improve the content and functionality of the Gamified Learning Measurement Tool. Based on this experience, we will likely build pre-pilot testing of prototypes into future gamified solutions. In our goal of exploring the effectiveness of a gamified tool to assess learning outcomes, it is an encouraging start that the responses of teachers, caregivers and children were so positive.

Now the iterative design process continues and the feedback from Beta testing will help us improve the tool. As a collaborative programme, partner communication is a key element in this. War Child Holland, HEA and NYU TIES will pursue their close collaboration to adjust the content of the tool alongside the tech company developing the game. At the same time, the team is working to prepare for pilot testing with the War Child team in Jordan.

We are interested to hear from teachers, caregivers and children at the pilot stage about the changes we have made based on Beta feedback, and whether these changes enable children to use the assessment tool without any external help. We will also be looking at teachers’ perceptions of the dashboard and piloting with them the new functionality of building assessments themselves using their own content.

The pilot testing stage will also focus on how the maths, reading and SEL scales in the assessment tool translate between home and face-to-face conditions, and it will compare outcomes from the Gamified Learning Measurement Tool with those from a traditional pen-and-paper assessment tool. Whilst there is much work to be done, we eagerly anticipate this next stage in our scaling journey.

--

Humanitarian Education Accelerator
HEA Learning Series

Education Cannot Wait-funded programme, led by UNHCR, generating evidence, building evaluation capacity and guiding effective scaling of education innovations.