Learn to Identify
This article is a write-up of our journey building this game from concept to production.
What is it?
Learn to Identify is a game meant to help parents teach their toddlers the names of everyday objects and to say them aloud. The game was developed for the Android platform. It is available for download on the Play Store, and the demo video is available here.
We are teaching our younger daughter, Anjana, to speak and to identify colours. In the course of doing that, my wife suggested that an electronic aid might help all of us. We quickly discussed what would make sense to develop (roadmap and minimum viable product) and what scope would be required to test it out with my daughter (proof of concept).
We sketched out a few designs on paper and then I developed the most promising sketches in Balsamiq. As always, I opened a new page in OneNote and used it to track my project and notes. We estimated that this would take about a day to develop, which meant doing it over the weekend. But like most project plans and estimations, we were way off the mark. The primary reason was the many 'learn and iterate' cycles, which we really had not budgeted for.
I developed the basic framework on day one, in a couple of hours. The first major hurdle was learning to work with Google Text to Speech. It took some time for me to even figure out what I needed to research. I was able to get TTS working after going through the Android developer site.
Proof of Concept Validation
We decided to start with colours and shapes (the most common categories for teaching). Developing the assets was easy in Clip Studio (which was overkill). We tested this with Anjana and it was a big hit with her. She loved seeing all the colours on the screen as well as hearing their names spoken aloud when she interacted with them. Once the concept was proven, the next step was to go beyond colours and shapes and add more categories.
Then I hit a roadblock. How should I deal with images for other categories like animals or birds? Where could I find them? We could draw a few images, but for the scale we were thinking of, we needed a more robust pipeline. We ended up doing Google searches for "free for commercial reuse" images. We decided on the kind of images we needed, the categories we needed, and the theme of the images. We used a combination of paint.NET and Clip Studio to adapt these images for use in our game. Once that methodology was refined, it became a no-brainer activity. We ended up moving from conscious incompetence to unconscious competence, at least as far as content was concerned.
At every point in the development, I would test it with Anjana on a small unused phone that we had.
The game, in phone mode, consisted of two screens: the home page with the categories, and the game page with the objects for the selected category. We observed Anjana at play. These are the observations that we prioritized.
· She wanted to swipe back to go from the game page to the home page
· Her fingers were too small and not coordinated enough to touch the images correctly
· She needed to see visual feedback for what she had touched
· She spent about a minute on each game
· Some images drove her to choose the same categories multiple times
Working with Feedback
We felt swipe was important, given that touch was integral to navigation. I implemented swipe in the game and it was there for a long time. But you will not notice it in the final release, since I removed it. Swipe was not working well across various devices because I had not implemented it well enough. Another factor was that while swipe was needed on phones, it did not make sense on tablets (which had a single screen where both the home page and the game page were shown at the same time). The third (and primary) reason was that I could spend my time implementing something of more value instead of trying to make swipe work. After all, the Home and Exit icons accomplished the same thing.
Since she was not able to activate the images consistently, I increased the size of the images and their touch areas. This issue is not fully solved, since she tends to touch the screen with her fingertips, which does not work well on low-end devices. It worked better on my Nexus devices.
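One common way to grow a small view's touch area on Android is `TouchDelegate`, which lets the parent route touches from an enlarged rectangle to the child. A minimal sketch, with view ids (`game_container`, `object_image`) and the 48px inset chosen for illustration rather than taken from the actual project:

```java
// Sketch: expand a small image's touchable area via TouchDelegate.
// Must run after layout, hence parent.post(...).
final View parent = findViewById(R.id.game_container);
final ImageView image = (ImageView) findViewById(R.id.object_image);

parent.post(new Runnable() {
    @Override
    public void run() {
        Rect hitRect = new Rect();
        image.getHitRect(hitRect);   // child's bounds in parent coordinates
        hitRect.inset(-48, -48);     // grow the touch target on every side
        parent.setTouchDelegate(new TouchDelegate(hitRect, image));
    }
});
```

Note that a parent can hold only one `TouchDelegate` at a time, so a grid of images needs either larger views or a composite delegate.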
We realized that auditory feedback alone was not enough to link the spoken word with the object on the screen. So we decided to animate the objects on touch. I researched animations, tried out a few, and settled on the zoom-out animation that is present in the final release.
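A touch animation like this can be done in a few lines with `ViewPropertyAnimator`. The sketch below shows a scale "pulse" on tap; the exact scale factors and durations are illustrative, not the values used in the released game:

```java
// Sketch: a quick scale pulse on touch as visual feedback.
view.setOnClickListener(new View.OnClickListener() {
    @Override
    public void onClick(final View v) {
        v.animate()
         .scaleX(1.2f).scaleY(1.2f)   // grow briefly...
         .setDuration(150)
         .withEndAction(new Runnable() {
             @Override
             public void run() {
                 // ...then settle back to normal size
                 v.animate().scaleX(1f).scaleY(1f).setDuration(150);
             }
         });
    }
});
```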
Anjana got bored after about a minute in each category. Her pattern was to touch every object in the category until she had heard them all. Then she would touch a few that she liked and ask us to repeat the words. Then we would say a word and she would try to touch the correct object. To keep her from getting bored, we figured she would need six or seven categories to switch between. There is no long-term solution to this boredom: the categories are short-lived, and kids will get bored with them and move on.
We noticed that some animal, bird and colour categories drew her back every time. Our guess is that she likes the comfort of the known.
Performance and Image Management
While researching swipe, I wanted to test the hypothesis that swiping would be a better user experience for moving across categories than navigating back to the home page each time. I implemented this feature.
There were two challenges I noticed immediately. The first (and more important) was scale; the second was performance. The scale challenge kicks in when there are many categories: swiping to find the right one would be time-consuming. This means we would need a swipeable parent menu to make it effective, and that has been implemented in the tablet mode of this game.
The second challenge, performance, had to do with loading and unloading images. My Nexus phones started lagging when I swiped across too many categories. To my limited knowledge, Android needs a lot of coaxing to make this efficient and easy. This is when I started looking at performance and image-management libraries.
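The underlying problem is that a decoded bitmap's memory cost depends on its pixel dimensions, not its file size: with Android's default `ARGB_8888` config, every pixel costs 4 bytes. A small pure-Java illustration of the arithmetic (the image sizes are made up for the example):

```java
// Rough memory cost of a decoded bitmap: width * height * bytesPerPixel.
// ARGB_8888, the Android default, uses 4 bytes per pixel, so even a
// small PNG file can balloon once decoded for display.
public class BitmapMemory {
    public static long decodedBytes(int width, int height) {
        return (long) width * height * 4;
    }

    public static void main(String[] args) {
        // A modest 1024x768 image costs ~3 MB decoded, whatever its file size.
        long bytes = decodedBytes(1024, 768);
        System.out.println(bytes + " bytes, ~" + bytes / (1024 * 1024) + " MB");
    }
}
```

A dozen such images per category, across several swipeable categories kept in memory at once, is enough to strain a low-end device; this is the pressure an image-management library relieves.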
I started off by exploring the built-in Android Monitor in Android Studio, which showed the memory being consumed by the game. Memory usage climbed steeply every time I opened a new category. I researched the problem and found people recommending Picasso, Glide and Fresco. After some more reading, I went with Glide. Glide is great to work with: it brought my memory footprint down from over 200 MB to less than 20 MB, at that stage of the code.
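Glide's appeal is how little code the common case needs; it handles downsampling to the target view, caching and bitmap reuse behind a one-liner. A sketch of the basic call (the resource name is illustrative, and this is the Glide 3.x-era fluent API rather than necessarily what the game uses):

```java
// Sketch: load an image into an ImageView with Glide, which downsamples
// to the view's size and caches the result.
Glide.with(context)
     .load(R.drawable.animal_cat)   // illustrative resource name
     .fitCenter()
     .into(imageView);
```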
After this, I ran LeakCanary, and it did not report any memory leaks. I suspect I did not set it up correctly.
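For reference, the LeakCanary 1.x setup is only a few lines in a custom `Application` class; this is a sketch of that standard setup (the class name is illustrative), not the project's actual code:

```java
// Sketch: LeakCanary 1.x setup, installed once at process start.
public class GameApplication extends Application {
    @Override
    public void onCreate() {
        super.onCreate();
        if (LeakCanary.isInAnalyzerProcess(this)) {
            // This process is dedicated to LeakCanary's heap analysis.
            return;
        }
        LeakCanary.install(this);
    }
}
```

A common gotcha is forgetting to register the custom `Application` in the manifest, in which case LeakCanary silently never runs, which would explain seeing no reports at all.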
Just for the record, the game uses images everywhere.
Coming back to the swipe conversation and the need for a "carousel" of categories: the next logical step was to look at tablets and figure out how to integrate the carousel and the game, i.e. show both the Home Page and the Game Page at the same time.
I read up on multiple fragment layouts and I created three — one for small and normal screens (almost all phones), one for large screens (7 inch tablets) and one for extra large screens (10 inch tablets).
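With resource qualifiers, the same activity can inflate a different layout per screen size and detect at runtime which one it got. A minimal sketch of that pattern, with file and id names (`activity_main.xml`, `game_container`) invented for illustration:

```java
// Sketch: one layout file name, three variants selected by Android:
//   res/layout/activity_main.xml         -> phones: single pane
//   res/layout-large/activity_main.xml   -> ~7" tablets: two panes
//   res/layout-xlarge/activity_main.xml  -> ~10" tablets: two panes
// The two-pane variants contain a FrameLayout with id game_container.
setContentView(R.layout.activity_main);

boolean twoPane = (findViewById(R.id.game_container) != null);
if (twoPane) {
    // Tablet: show the category carousel and the game side by side.
} else {
    // Phone: navigate from the home fragment to the game fragment.
}
```

Checking for a view that only exists in the larger layouts is the usual way to branch the activity logic, which keeps the "tiny bit" of activity-code change genuinely tiny.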
I needed to modify my activity code a tiny bit to make the conversions necessary for the multiple layouts.
I was testing the app on a couple of physical devices, my Nexus 6P and Micromax A1. In addition, I was using the Nexus 10 and a generic 7" tablet emulator. Any visual changes were tested across all environments. Performance was typically tested on the A1.
Text To Speech Issues
I discovered that TTS has the following issues:
- It does not offer Indian English or Indian languages as options, even though other Google services do
- It works poorly on low-end devices; on my crappy Micromax A1, TTS took 10 seconds to initialize
- There is no reliable way to notify the user that TTS has finished initializing
I don’t have a solution to the first two. I minimized the third issue somewhat by initializing it in the activity code and then using it across fragments.
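Initializing once in the activity and sharing across fragments can be sketched as follows. This is an assumed shape, not the game's actual code; the `speak` helper and locale choice are illustrative:

```java
// Sketch: init TextToSpeech once in the hosting Activity so the slow,
// asynchronous init happens a single time, then share it with fragments.
public class MainActivity extends Activity implements TextToSpeech.OnInitListener {
    private TextToSpeech tts;
    private boolean ttsReady = false;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        tts = new TextToSpeech(this, this);   // init starts in the background
    }

    @Override
    public void onInit(int status) {          // called when init finishes
        ttsReady = (status == TextToSpeech.SUCCESS);
        if (ttsReady) {
            tts.setLanguage(Locale.UK);       // illustrative; no Indian English
        }
    }

    // Fragments call back into the activity to speak a word.
    public void speak(String word) {
        if (ttsReady) {
            tts.speak(word, TextToSpeech.QUEUE_FLUSH, null);
        }
    }

    @Override
    protected void onDestroy() {
        if (tts != null) {
            tts.shutdown();                   // release the engine
        }
        super.onDestroy();
    }
}
```

The `ttsReady` flag is the pragmatic answer to the third issue: taps that arrive before `onInit` fires are simply dropped rather than crashing.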
I wanted to do three things: get the right combination of colours and refine the UI, get a nice introduction going, and put a video up on YouTube.
For the introductory slides, I used the AppIntro library. We had to think quite a bit about their content.
I tried a number of ways to screen capture the demo. In the end, I used AZ Screen Recorder for my mobile and for my emulator on Windows. Editing in YouTube did not work the way it was supposed to: the background music drowned out the sounds in the original video. So for video editing, I tried .
Final Upload to the Play Store
I had the content for the store listing ready, and it took only a couple of minutes to get everything up. But then there were two issues. The first: all along, we had debated whether to make the app paid, free, or free with in-app purchases. In the end, we decided to go with the free model.
After releasing it on the store, I downloaded the app only to find that the images were not loading. After some analysis of the APK (I unzipped it), I figured out that the resource-shrinking option ("shrinkResources true") in my build.gradle (module) file was stripping my images down to 67-byte placeholders. This was clearly a bug. So I disabled the option, re-created the APK, re-uploaded it, downloaded it, tested it, found everything working, and got to work on this blog.
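The likely mechanism (an assumption, not confirmed in the original text) is that the resource shrinker treats images referenced only dynamically, e.g. looked up by name at runtime, as unused and replaces them with tiny placeholder files. A sketch of the release block with shrinking disabled, as described above:

```groovy
// build.gradle (module) sketch: resource shrinking can strip drawables
// that are only referenced dynamically, replacing them with placeholders.
android {
    buildTypes {
        release {
            minifyEnabled true
            shrinkResources false   // disabled here, matching the fix above
            proguardFiles getDefaultProguardFile('proguard-android.txt'),
                          'proguard-rules.pro'
        }
    }
}
```

An alternative to disabling shrinking entirely is a `res/raw/keep.xml` file listing the dynamically referenced resources to keep.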
Our roadmap for this product is pretty much restricted to getting more content in. We might end up implementing In-App Purchases; this depends on the appetite for this game. A wild idea which we might explore is to see if there is a better TTS engine out there — especially one that deals with Indian local languages.
Stuff That Was New to Me
- Leak Canary
- Image Management
- Multiple Fragment Layouts
- Image Manipulation
- Text to Speech
- AndroidImageSlider (I ended up removing this)
- YouTube & Video Editing :P
- This blog was extremely useful
- I spent more time trying out stuff and removing code, than I did putting it in.
- Firebase Analytics is used in this game.