Agent ARnold: An Augmented Reality game for children with Cerebral Palsy

Using mixed reality and physical props to train kids’ fine and gross motor skills

Aveek Goswami
Antaeus AR
6 min read · Nov 8, 2023


As part of my 2nd Year Design project, my teammates and I collaborated with The Pace Centre, a UK-based charity and school for children with neurodisabilities, to create an AR app to aid in the development of children with cerebral palsy. This article will focus primarily on the software aspects. Find out more about my team at the bottom of the post.

About the project

Cerebral palsy is a lifelong condition affecting movement and coordination. The Pace Centre is on a mission to embrace cutting-edge assistive technology, from AI-driven classrooms to wheelchair simulators, and is now delving into the realm of augmented reality (AR). AR, which immerses users in interactive, sensory-rich environments, shows remarkable promise in improving learning skills for individuals with special needs. Working with The Pace Centre, our team designed an innovative AR-based game that is not only a fun experience but also a powerful tool to sharpen children's motor skills and cognitive abilities.

Game Design

Our final product, Agent ARnold, is an augmented reality game where players become the assistant to a detective searching for a mad scientist at the Pace Centre. It’s an interactive treasure hunt that involves finding and scanning props around the school. Questions related to the props and the school curriculum, with subjects and difficulty levels chosen by teachers, provide an educational twist to the game.

Hardware (Brief)

The game uses props that are hidden or scattered around the room. During play, children pick up these props and answer questions about them at different stages, depending on the subject and difficulty of the level.

3D printed props: A flask, snail and Petri dish

Software

The user interface and augmented reality features of the app were implemented using Unity.

Gameplay

The game has a relatively simple interface detailed in the pictures below. It includes a login page, settings page for customisability, and pages for selecting the level and subject.

Login and Settings screens
Subject and Level Selector screens

Game Mechanisms

The Vuforia Engine provided the computer vision and image-tracking functionality for the game, while scripting in C# controlled the interactions between virtual and physical objects. Below I run through a couple of the more important features used across the game's questions.

Prop Detection

We wanted the children to train their motor skills by picking up the props and holding them in front of the camera to be identified. We initially attempted to create a 3D model target using the Vuforia Model Target Generator, training it on the CAD files for our props. This showed initial promise, as the video below demonstrates, but it was so susceptible to interference and lighting conditions that we decided not to use it in the final game. Instead, we attached QR codes to the props and created image targets to identify them, as shown at the start of this post.

3D model target using Vuforia Model Target Generator (MTG)
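To give a feel for the image-target approach we shipped, here is a minimal sketch of how a prop scan might be detected, assuming Vuforia Engine 10's ObserverBehaviour API in Unity; the commented GameManager hook is hypothetical, standing in for our game flow.

```csharp
using UnityEngine;
using Vuforia;

// Attach to the ImageTarget GameObject for a prop's QR code.
// When Vuforia starts tracking the target, we treat the prop as "scanned".
public class PropScanHandler : MonoBehaviour
{
    private ObserverBehaviour observer;

    void Start()
    {
        observer = GetComponent<ObserverBehaviour>();
        if (observer != null)
            observer.OnTargetStatusChanged += HandleTargetStatusChanged;
    }

    private void HandleTargetStatusChanged(ObserverBehaviour behaviour, TargetStatus status)
    {
        // TRACKED / EXTENDED_TRACKED means the prop is in view of the camera.
        if (status.Status == Status.TRACKED || status.Status == Status.EXTENDED_TRACKED)
        {
            Debug.Log($"Prop detected: {behaviour.TargetName}");
            // Hypothetical hook into our game flow, e.g. show the next question:
            // GameManager.Instance.PropFound(behaviour.TargetName);
        }
    }

    void OnDestroy()
    {
        if (observer != null)
            observer.OnTargetStatusChanged -= HandleTargetStatusChanged;
    }
}
```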

Virtual Buttons and Animations

A common feature in our game is Virtual Buttons, which superimpose buttons onto the physical surface shown in the camera, positioned relative to an image target. The user "presses" a button, highlighted on the screen, by physically covering it with a hand, thereby exercising their motor skills. Pressing a button then modifies the playing field, for example by triggering an animation. Some examples are shown below:

Users point at the objects with their fingers, covering a virtual button in the process, which either activates a colour change or makes an object appear. The button is placed on the ground plane. This feature was implemented by Michael.
Another example: covering the virtual button labelled "MOVE" makes the snail move towards the plant to complete the challenge.
Some code snippets to show how the snail and virtual button were programmed to play the animation
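Since those snippets are shown as images, here is a minimal sketch of the pattern, using the classic IVirtualButtonEventHandler interface from the Vuforia Unity samples (newer Vuforia releases expose callback registration instead); the "Move" trigger name and the snail Animator setup are assumptions for illustration, not our exact code.

```csharp
using UnityEngine;
using Vuforia;

// Attach to the image target that hosts the virtual button.
// Covering the button with a hand fires OnButtonPressed, which plays the
// snail's walk animation so it moves towards the plant.
public class MoveButtonHandler : MonoBehaviour, IVirtualButtonEventHandler
{
    public VirtualButtonBehaviour moveButton; // the "MOVE" button in the scene
    public Animator snailAnimator;            // Animator on the snail model

    void Start()
    {
        moveButton.RegisterEventHandler(this);
    }

    public void OnButtonPressed(VirtualButtonBehaviour vb)
    {
        // Hypothetical trigger name, set up in the snail's Animator Controller.
        snailAnimator.SetTrigger("Move");
    }

    public void OnButtonReleased(VirtualButtonBehaviour vb)
    {
        // Nothing to do on release for this challenge.
    }
}
```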

More Object Scripting

For the English subject questions, we ask the same questions at every level but change the method by which students answer them. Level 1 involves simply tapping the answer on the screen, while Level 2 requires dragging the object across the screen with a finger to the blank. Level 3 demands the highest dexterity and motor ability: students put their hands in front of the camera and drag the virtual object "physically" to fill in the blanks, as shown below.

English question and the code that implements the Level 2 feature, where users drag the item on screen. This code was written by Ziyao.
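As that snippet is also an image, here is a minimal sketch of screen-dragging using standard Unity mouse/touch callbacks rather than Ziyao's exact code; the blankSlot target, snap distance, and the commented QuizManager hook are assumptions.

```csharp
using UnityEngine;

// Attach to a draggable answer object (requires a Collider).
// The user drags the object across the screen with a finger; on release,
// we check whether it was dropped close enough to the blank in the question.
public class DraggableAnswer : MonoBehaviour
{
    public Transform blankSlot;      // position of the blank to fill (assumed)
    public float snapDistance = 0.5f;

    private Camera cam;
    private float depth;             // distance from camera, fixed while dragging

    void Start()
    {
        cam = Camera.main;
    }

    void OnMouseDown()
    {
        depth = cam.WorldToScreenPoint(transform.position).z;
    }

    void OnMouseDrag()
    {
        // Convert the finger/cursor position back into world space at fixed depth.
        Vector3 screenPos = new Vector3(Input.mousePosition.x, Input.mousePosition.y, depth);
        transform.position = cam.ScreenToWorldPoint(screenPos);
    }

    void OnMouseUp()
    {
        if (Vector3.Distance(transform.position, blankSlot.position) < snapDistance)
        {
            transform.position = blankSlot.position; // snap into the blank
            // Hypothetical hook: mark the question as answered correctly.
            // QuizManager.Instance.AnswerPlaced(this);
        }
    }
}
```

On mobile, Unity forwards single touches to these mouse callbacks, so the same script works for finger dragging without extra input handling.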

Final Thoughts and Future Developments

This game was developed as a prototype to showcase our idea for a game that The Pace Centre may consider to aid in its education and programmes for children with cerebral palsy. Naturally, it has limited functionality and a basic user experience, but it serves as a proof of concept for a more advanced game built on the same ideas. The prototype was well received when demonstrated to representatives from The Pace Centre and professors from Imperial College.

Some steps for future development would include:

  • Fine-tuning the 3D model target detection for better object recognition
  • Adding more questions for other subjects and levels
  • Including a scoring metric to track students' progress and results
  • Making the game more customisable for teachers
  • Storing students' results and other game data in a database that teachers can view and analyse

Acknowledgements

This project was made possible thanks to invaluable guidance from our supervisors, Dr Ian Radcliffe and Dr Faraz Janan from Imperial College, as well as Luke Thompson from The Pace Centre, who worked closely with us to tailor our product for its users.

About the team

The software team for this project included myself, Michael Ma and Ziyao Dong. The team leader for our project was Rachael Soh. You can find us by clicking on our names!

Full list of team members: Rachael, Aveek, Michael, Ziyao, Wei Han, Jenny, Ella, Aris, Rory, Lucia

The full GitHub repository for the project can be found here:

Connect with me on LinkedIn:

Aveek Goswami : https://www.linkedin.com/in/aveekg00/
