TipTop App for Children — From Web to Unity

Luigi De Rosa · epicagency · Nov 8, 2019

This article was written by Luigi De Rosa and Thierry Michel.

The province of Liège approached us to come up with a tablet app to tackle physical and mental health issues in children. The app will be deployed in all primary schools of the province.

From web development to Unity (technical choices)

During the pitch we had imagined a whole series of features which required us to make technical choices.

The first step was therefore to do a little R&D, especially since we had to give the client specifications on the purchase of the tablets.

Web vs Native (Unity)

At EPIC we’re familiar with web technologies, and over the years we’ve shipped quite a few games and interactive experiences built with HTML5 and WebGL.

So it was natural for us to initially opt for a PWA approach. However, we soon realised that the web platform wasn’t quite up to the level we needed: many of the standards we wanted to use were incomplete or not yet implemented (WebXR, the Web Bluetooth API…).

We then decided to go full native with Unity. We’d been keeping an eye on this engine for a while, and decided to give it a go on a production project!
So far so good, but it wasn’t all plain sailing in the beginning. Coming from web development, we soon realised how different building for native platforms with Unity would be:

  • 👍 C# is actually easy to grasp, especially if you have some experience with TypeScript
  • 👍 You can edit stuff while the app is running

*HOWEVER*

  • 👎 Debugging and deploys are slow and painful
  • 👎 Testing AR is torture
  • 👎 There is a lack of solid open-source libraries
  • 👎 Good luck with Git LFS and conflicts… we tried following best practices and all, but we had quite a few issues nonetheless
  • 👎 Applying classic design patterns is pretty hard when half of the app is built in code and the other half with the GUI
  • 👎 Unity can crash easily… and often does multiple times a day

Android vs iOS

At first, ARKit and ARCore seemed more or less equivalent, so we chose based on other criteria.

Android had a broader choice of hardware and simpler deployment and update processes, and from Google’s list of supported devices, the Samsung Galaxy Tab S4 appeared to check all the boxes.

This choice will certainly be re-evaluated for our next project, however.

Funny story:

We were having huge issues running ARCore on the tablet, with massive lag and unstable feature-detection tracking.
We spent days debugging this without figuring out the problem.

We were lucky to get assistance from one of the Google engineers working on the ARCore project.

“If this is indeed the case with your device than it would be the first actual real-world case I’ve heard of. So unless the device’s issues were due to a (hard) drop, then I’d say very unlucky.”

Long story short, we had the huge misfortune of working with a malfunctioning device during the development phase. We replaced it and, as if by magic, everything started working!

Once we decided to go for Android, we concentrated on the key functionality: recognising images and estimating their distance.

Augmented reality vs image recognition

True augmented reality or simple image recognition by markers?
We tested a myriad of solutions: ARCore, Image Recognition, Wikitude, ARToolKit, Arcolib, EasyAR, Maxst, and the list goes on.

When one wouldn’t work at all, another, although super stable, would cost twice as much as the whole project itself.
Welcome to the “3rd party” party!

While simple image recognition gave great results for one or two images, detecting 4 images simultaneously posed enormous problems: for instance, the size of each image on the screen was no longer large enough.

The advantage of a real augmented reality solution was being able to detect a close-up image, leave a marker in the environment and then move on to the next image.

The (major) inconvenience: it’s not enough to place the tablet in front of the image. In fact, without a minimum amount of information concerning the real environment, ARCore can neither detect the images nor estimate distances, scales, etc.

A preliminary ‘scan’ step is therefore necessary, although with excited children this brings other user-experience difficulties (tutorial, explanations, videos, overlays, audio instructions…?). It’s not that easy to support the user from a distance.
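For illustration, here’s a minimal sketch of what detecting a card and estimating its distance can look like in Unity, assuming AR Foundation’s ARTrackedImageManager on top of ARCore (the component names and wiring here are ours, not the production code):

```csharp
using UnityEngine;
using UnityEngine.XR.ARFoundation;
using UnityEngine.XR.ARSubsystems;

// Hypothetical sketch: an ARTrackedImageManager (configured with a reference
// image library of the cards) reports detected images, and we measure their
// distance from the AR camera.
public class CardDetector : MonoBehaviour
{
    [SerializeField] private ARTrackedImageManager trackedImageManager;
    [SerializeField] private Camera arCamera;

    private void OnEnable() => trackedImageManager.trackedImagesChanged += OnChanged;
    private void OnDisable() => trackedImageManager.trackedImagesChanged -= OnChanged;

    private void OnChanged(ARTrackedImagesChangedEventArgs args)
    {
        foreach (ARTrackedImage image in args.updated)
        {
            // Only trust images that are actively tracked, not just remembered.
            if (image.trackingState != TrackingState.Tracking) continue;

            float distance = Vector3.Distance(arCamera.transform.position,
                                              image.transform.position);
            Debug.Log($"{image.referenceImage.name} detected at {distance:F2} m");
        }
    }
}
```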

UX

The project was very ambitious not so much from a technical point of view, but because of its target audience: 8- and 9-year-olds playing together in groups of 3 or 4.

We had to make sure to:

  • Give each child in the group a chance to take control of the tablet at some point
  • Make the UI easy to use
  • Deliver all the “educational messages” in a playful way

We teamed up with psychologists and teachers from the province to build the experience and interface of the app.

Here are just a few of the UX solutions we adopted:

  • Each game round is personalised according to the particularities of each school and the children playing.
  • Through a secret touch combination, admins can access the dashboard, where they can gather information about the latest session, change parameters and run test/debug sessions (a sketch of this follows the list).
  • In some parts of the app, the children need to put their finger on the screen to unlock a mission.
  • Souvenir! At the end of the game, our heroes can take and print a photo with their personalised totem and game characters.
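As an example, here’s a hypothetical sketch of such a “secret combination” in Unity: five quick taps in the top-left corner of the screen (the real combination and thresholds differ, of course):

```csharp
using UnityEngine;

// Hypothetical admin gate: five quick taps in the top-left corner
// of the screen unlock the dashboard.
public class SecretAdminGesture : MonoBehaviour
{
    private const int RequiredTaps = 5;
    private const float MaxInterval = 0.8f; // max seconds between taps
    private int tapCount;
    private float lastTapTime;

    private void Update()
    {
        if (Input.touchCount != 1 || Input.GetTouch(0).phase != TouchPhase.Began)
            return;

        Vector2 pos = Input.GetTouch(0).position;
        bool inCorner = pos.x < Screen.width * 0.15f && pos.y > Screen.height * 0.85f;

        // Reset the streak on a tap elsewhere or after too long a pause.
        if (!inCorner || Time.time - lastTapTime > MaxInterval)
            tapCount = 0;

        if (inCorner)
        {
            tapCount++;
            lastTapTime = Time.time;
            if (tapCount >= RequiredTaps)
            {
                tapCount = 0;
                Debug.Log("Admin dashboard unlocked"); // load the admin scene here
            }
        }
    }
}
```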

TipTop

No bickering, it’s TipTop that decides.

In the game we added an older-brother type character called TipTop. His objective is to help our heroes achieve their various missions.
He explains the rules, decides whose turn it is to play, and helps or gives guidance when needed.

We created TipTop from a 2D illustration, then modelled him in Blender, and animated him through motion capture technology (thanks to our friends at Mysis!)

Work in progress vs Final rendering of the 3D model
Recording session in Brussels. We gave TipTop a voice, recording various dialogues and texts.

3D

We used Blender extensively to create the videos, images and 3D assets used in the game.

Here are some of the assets we created:

Fluids in Blender
We created a set of cards that will be used as augmented markers in AR

Build the radar

For some missions in the game, the children are required to find special objects hidden somewhere in the school. We used Estimote proximity beacons, which use the Bluetooth Low Energy protocol.

Reading the Bluetooth RSSI allowed us to estimate the distance between the tablet and the beacon. As the RSSI value is pretty unstable by nature, we tried to remove the noise using a Kalman filter, which we ported to C# from a JavaScript implementation.
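Here’s a minimal sketch of that kind of one-dimensional Kalman filter in C# (the noise constants are illustrative and need tuning against real beacon data):

```csharp
// Minimal 1D Kalman filter for smoothing RSSI readings. The signal is
// modelled as roughly constant between measurements, so prediction only
// grows the uncertainty; correction blends in the new reading.
public class RssiKalmanFilter
{
    private readonly double processNoise;     // how fast the true signal drifts
    private readonly double measurementNoise; // how noisy individual readings are
    private double estimate;                  // current filtered value
    private double errorCovariance = 1.0;     // confidence in the estimate
    private bool initialised;

    public RssiKalmanFilter(double processNoise = 0.008, double measurementNoise = 4.0)
    {
        this.processNoise = processNoise;
        this.measurementNoise = measurementNoise;
    }

    public double Filter(double measurement)
    {
        if (!initialised)
        {
            estimate = measurement;
            initialised = true;
            return estimate;
        }

        // Predict: uncertainty grows by the process noise.
        errorCovariance += processNoise;

        // Correct: the Kalman gain weighs prediction against measurement.
        double gain = errorCovariance / (errorCovariance + measurementNoise);
        estimate += gain * (measurement - estimate);
        errorCovariance *= 1 - gain;

        return estimate;
    }
}
```

The smoothed RSSI can then be turned into a rough distance with the classic log-distance path-loss model, distance ≈ 10^((txPower − RSSI) / (10 · n)), where txPower is the expected RSSI at one metre and n is an environment factor (typically 2–4).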

A Kalman filter can help to reduce noise and sudden changes in the RSSI (credits)
“Whaat is this?”

Final game

The challenge: building a collaborative game played in real time using augmented reality.

It’s not rocket science, it seems: a Raspberry Pi creates a Wi-Fi access point and runs a small Node.js server with WebSockets. 😄
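On the tablet side, the connection could look like this minimal sketch using .NET’s ClientWebSocket (the address and class names are illustrative, not our actual protocol):

```csharp
using System;
using System.Net.WebSockets;
using System.Text;
using System.Threading;
using System.Threading.Tasks;

// Hypothetical client: each tablet connects to the WebSocket server
// running on the Raspberry Pi access point and listens for game messages.
public class GameSocket
{
    private readonly ClientWebSocket socket = new ClientWebSocket();

    public async Task ConnectAsync()
    {
        await socket.ConnectAsync(new Uri("ws://192.168.4.1:8080"), CancellationToken.None);
        _ = Task.Run(ReceiveLoopAsync); // keep listening on a background thread
    }

    private async Task ReceiveLoopAsync()
    {
        var buffer = new byte[4096];
        while (socket.State == WebSocketState.Open)
        {
            WebSocketReceiveResult result =
                await socket.ReceiveAsync(new ArraySegment<byte>(buffer), CancellationToken.None);
            string message = Encoding.UTF8.GetString(buffer, 0, result.Count);
            // Careful: this runs off Unity's main thread. See the dispatcher
            // sketch further down before touching any Unity object here.
        }
    }
}
```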

Welcome back to the “third-party” party. Another “web vs Unity” difference surprised us: the community spirit. Sure, there are forums, mutual help and so on; nonetheless, it’s far from the open-source culture of the web.

And for good reason: you can sense straight away that this is a profit-making environment where there’s potentially a lot of money being made, all the more so given the monopoly in the market. ¯\_(ツ)_/¯

In short, we went back to Google and/or the Asset Store to find the perfect package.

Regarding the server, it gave us the opportunity to use TypeScript.
As for the client… we discovered “threads”!!!

In JavaScript, asynchrony is managed with promises and callbacks. But in C#, between Unity and .NET, how do you execute code from a background thread on Unity’s main thread?

Good morning coroutines, Invoke and other queues!
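A common pattern, sketched here with hypothetical names: background threads enqueue actions, and a MonoBehaviour drains the queue from Update(), where the Unity API is safe to call.

```csharp
using System;
using System.Collections.Concurrent;
using UnityEngine;

// Hypothetical dispatcher: a WebSocket receive loop (or any background
// thread) enqueues work, and the main thread executes it in Update().
public class MainThreadDispatcher : MonoBehaviour
{
    private static readonly ConcurrentQueue<Action> queue = new ConcurrentQueue<Action>();

    public static void Enqueue(Action action) => queue.Enqueue(action);

    private void Update()
    {
        while (queue.TryDequeue(out Action action))
            action();
    }
}

// Usage from the socket thread:
// MainThreadDispatcher.Enqueue(() => scoreLabel.text = newScore);
```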

User testing

Testing the app in the real world was essential. Here are some behind-the-scenes photos from our user-testing sessions.

User testing day in a school
Selfie time! “Funny faces” are required to unlock one of the missions.

This project was huge fun and kept us busy for some months. It allowed us to leave the “invisible space” of the web and create something more tangible, where we could directly see the impact on the end users.
