SpaceNest AR — Product Study

Ankit Passi · Published in NYC Design · 18 min read · Feb 1, 2021

Brief: SpaceNest AR is an AR-enabled application that lets you explore our solar system and learn about its planets.

Note: This app uses Google’s ARCore and was developed in Unreal Engine 4. The planet image textures and the Mars Rover 3D model are taken from NASA’s official website. No copyright infringement is intended for any of the textures or 3D models.

Download SpaceNest AR on the Google Play Store

Challenges to Undertake

I explored many websites and apps to see what had already been done in this medium.

After trying many apps, I realized that almost all of them were built in Unity, a game engine that provides an easy path for developing AR applications.
By comparison, Unreal Engine, which has already proved to be a powerhouse for immersive desktop experiences, has barely any presence in this medium.

As I explored these apps, I realized that even the impressive ones often don’t feel like actual AR applications; most of them are just 3D models overlaid on an open camera feed.
When you move the camera, the 3D model simply moves with it instead of staying anchored in the world, which I feel is not true AR. Talking to some of these developers confirmed my guess.

Though popular nowadays, AR is still a very new medium, and UX understanding of it is not yet well developed. I wanted to test the available insights and see their UX implications.
And though new design tools claim to support AR UI wireframing, I wanted to design a 3D UI using a 2D tool and see how it translates into an actual design.

Finally, many open-source libraries offer free educational content as 3D models and encourage children to learn about many things, ranging from prehistoric animals to far-off planets.
I wanted to create a free-to-use educational app that combines content from all these libraries and allows children of any age and background to explore and learn about all these great things in a new medium.

So all of these became challenges for me to undertake:

1. Create an AR application using Unreal Engine 4, exercising its AR and graphics capabilities.

2. Create a true AR app that uses the core AR functionality offered by ARCore and supported by Unreal Engine 4.

3. Research and design Augmented Reality UIs and build an understanding of their UX.

4. Create a free-to-use educational app that enables learning about anything in a new medium.

My Role and Project Timeline

This project was a solo venture, so I researched, designed, built the front end and the back end, tested, published to the Google Play Store, and iterated further on the insights, all on my own.

Even though it was an ambitious project that I had wanted to do for a long time, it was still a side project, so I couldn’t give it 100% of my attention every day.
So I created a schedule: I could plan things all week but only work on the real stuff during weekends.
This planning was important because I wanted to keep my mind calm and at 100% for my day-to-day work while keeping the excitement of this project alive.

It took me four months of weekend work to publish the first version of my application.

Design Process

Photo by Jo Szczepanska on Unsplash

Topic Research

I talked with young people aged 10–16, and also spoke to an older audience, to understand their experience with AR and educational apps.

Did they ever use any AR apps?

Almost none of the people I talked with had ever used an AR application. Some of them knew what AR means because they had seen cool videos on LinkedIn, but that’s about it.

What type of educational apps have they used?

Younger people had more experience with educational apps, thanks to COVID, with most of them searching and learning through mobile apps. Still, this was their school learning, which involved no AR at all.

At the other end of the spectrum, most of them knew about some top-rated education apps, thanks to regular TV advertisements, but had never used them. For example:

  • Byju’s
  • WhiteHat Jr.
  • Unacademy
  • Udacity

Target user

Photo by Austin Distel on Unsplash

Since I am targeting the educational AR domain, I wanted students who are learning new things and want to experience them in a new medium.

So I think users between 10 and 16 years of age fit right in. And since this is a free application, less privileged children can have the same experience as everyone else.

Design Principles

Photo by Patrick Perkins on Unsplash

After all the research and interviews, I developed certain design principles to follow while designing and developing.

1. Easy to Understand

The application flow, its content, its buttons: everything that goes into the application needs to be easy to understand.

2. Simple

Animations and anything that requires user interaction need to be simple. Users should not have to learn any new patterns.

3. Encourage Movement

Since it is an AR application, it needs to encourage users to move around and see its content from all sides.

Ideation Process

Photo by Amélie Mourichon on Unsplash

Based on the design principles I had devised, I wanted to keep the UI as clean as possible, free of clutter and useless information, focusing only on the essentials.

I made a list of features I wanted to include in the app, to guide the back-end work and the UI supporting it.

  1. A Start Screen
  2. Visual marking of Plane tracking
  3. Screen-by-Screen Navigation
  4. Ability to change between multiple 3D Models — either by swipe or buttons
  5. The name of the current planet and related information about it on screen.
  6. Detailed information about each planet when the user taps on it.
  7. Extra Information section

Again, these were the essential features I wanted to include from the beginning.
I discovered more features to include when I went through the first iteration of the application design.

I wanted to create a modular, expandable system that could grow beyond the current scope and include everything I wanted.
Not just planets, but anything that could be useful to people.

Tools of the Trade

  • Whiteboard and Markers — For Ideation and High-level conception
  • Adobe XD — For mockups and high-fidelity designs
  • Adobe Photoshop — For creating required icons and assets

Wireframing and High-level Conceptualising

Concepts and High-level flows

Since I was the sole member of my team, I wanted to kick off actual development early to verify what works and what isn’t possible.

Still, to get clarity on what exactly needed to be done, it was essential to have a high-level concept ready so that I could plan the development properly and sequentially.

So I created the application flow shown above, along with the screens required to make that flow user-ready.

It’s always good to see and understand the major flow beforehand: you spot the small missing parts and can plan accordingly.

High-Fidelity Wireframing

Adobe XD’s Wireframe

Using my trusty Adobe XD, I created high-fidelity mockups of the screens and established a suitable visual language.

I wanted to keep the interface clean and give it a professional look, so I tried to encapsulate those thoughts in the interface.
Interactive components must be visually apparent to the user, but their placement must still fit the required interaction pattern.

Of course, this was the first time I was designing an AR interface, so I kept researching what makes sense.
One assumption I did make: since it is still a mobile application at its core, mobile UI design principles should still hold.

How true those assumptions were, and whether the component placement was any good, would depend entirely on the user interviews that followed and their candid feedback.

Typography

Since the application is text-heavy, the typography needed to reflect that.
I tried many font combinations to see what would suit best.

In the end, I chose Bebas Neue for headings and Lato for body content.

Initial Feedback

Photo by UX Indonesia on Unsplash

After completing the high-fidelity designs, I showed them to people to get their initial thoughts and to check whether I had missed something crucial, either UI- or functionality-wise.

Interviews involving AR prototypes are a challenge in themselves, because you are asking users to visualize something that is not even there and give their thoughts on it.

After much trial and error, I managed to get some initial feedback from users. I learned that some major functionality was still missing and, if included, could help users even more.
Some of that missing functionality:

  • Ability to scale 3D models — using Pinch and Zoom
  • Ability to set the position of the 3D Model so that users don't have to move and still enjoy the AR experience.
  • An explicit Close button to quit AR experience
  • Little animations that showcase what type of motion is required from the user during the Plane-tracking stage.
  • Wikipedia links for the displayed information, so that interested users can dig deeper.
  • Multi-language support (not everyone knows English)
  • More visible buttons that work well in both light and dark rooms.

As you can see, some crucial functionality was missing from my initial plan.

That is why collecting honest feedback on your designs and their required functionality is one of the best things a designer can do to validate a design.

So my task list grew a little bigger to accommodate all of these features.

Development Process

Unreal Engine 4’s Dev console

Before starting the development phase, one of the challenges I took upon myself was to develop a fully functioning AR application using Unreal Engine 4 and ARCore.

Tools of the Trade

  • Unreal Engine 4 Blueprints — For backend scripting and creating core logic.
  • Unreal Engine 4 UMG — For creating UI of the application and designing UI logic.
  • Google ARCore APIs — For AR functionality and support.
  • Blender 3D — For creating 3D models of the solar system and the splash screen
  • Patience — A lot of it

The Naming

Naming this application was also one of the most challenging tasks. Choosing a name from so many options is daunting.

Luckily, my girlfriend was happy to take on this challenge.

There were a few criteria I wanted to have for naming this application:

  1. “AR” must be suffixed to the actual name, as an easy signal that it is an AR app.
  2. It must be related to space.
  3. The name should show that there is more to this app than just space,
    hence keeping it open to other things.

She was the one who suggested the name SpaceNest. Her theory:

the word “Nest” conveys that this application is open to anything and everything. As long as it is part of our space, it can be a part of this app as well.

I loved this reasoning, so we finally settled on SpaceNest AR.

Creating 3D Models

3D Models using Blender

The scope of this project, up to the first release, covered only the planets, the Sun, and the asteroid belt. Models and textures for all of these needed to be developed, and another constraint I imposed was to reuse assets as much as possible to avoid overhead in the app.

I chose Blender because I have a lot of experience with it, and it’s a pretty cool FREE tool as well.

The planets and the Sun are all essentially a single sphere with different textures applied, so from one 3D model I created the entire solar system.
For the textures, I used NASA’s high-quality maps and post-processed the materials inside Unreal’s Material Editor: for example, emission and glow for the Sun, bumps and crevices for the Moon, and the cloud layer for Earth.
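
To make that reuse concrete, here is a minimal C++ sketch of the idea: one shared sphere mesh whose base material gets a different texture per planet through a dynamic material instance. The actual app sets this up with Blueprints and the Material Editor rather than C++, and the parameter name “PlanetTexture” is an illustrative assumption.

```cpp
#include "Components/StaticMeshComponent.h"
#include "Materials/MaterialInstanceDynamic.h"
#include "Engine/Texture2D.h"

// One shared sphere mesh + one base material = the whole solar system.
// Each planet gets its own dynamic instance with a different NASA texture.
void ApplyPlanetTexture(UStaticMeshComponent* SphereMesh, UTexture2D* NasaTexture)
{
    if (!SphereMesh || !NasaTexture)
    {
        return;
    }

    // Instance the base material assigned to slot 0 of the sphere.
    UMaterialInstanceDynamic* PlanetMaterial =
        SphereMesh->CreateDynamicMaterialInstance(0);

    if (PlanetMaterial)
    {
        // "PlanetTexture" is a hypothetical texture parameter exposed by the
        // base material; scalar parameters such as the Sun's emission would
        // be set the same way with SetScalarParameterValue.
        PlanetMaterial->SetTextureParameterValue(TEXT("PlanetTexture"), NasaTexture);
    }
}
```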

For the asteroid belt, I modeled a number of rocks, scaled them into different shapes and sizes, and replicated them in a circular belt so they could be used directly inside Unreal.
Textures for these rocks were created using Blender’s material nodes and exported straight into Unreal.

All of these models were exported into Unreal at 1:1 scale, so that any transformation or scaling changes could be made solely in Blender and would render in Unreal without any change to the Unreal code.

Creating Icons and Assets

Assets created in Adobe XD and Photoshop

From my mockups, I derived a finite list of assets that I needed to create before starting on the UI.
I used Adobe Photoshop and Adobe XD to create the icons and background images.

I wanted to use SVGs for my UI icons, as they are the best choice where scalability is concerned, but UE4 doesn’t seem to have SVG support; either that, or I wasn’t able to figure it out.
So I chose to export my icons as square PNGs at 512px.

The background image for the app’s landing page was tricky: I thought it would be easy to create a scalable 9-patch image, used extensively on Android, and let it do its work there.
But again, 9-patch support was missing from UE4, so I had to create a JPG image and readjust it inside UE4’s UI to make it scale suitably.

Creating Back-end

Unreal’s Blueprint scripting

The entire back end of this application is written in Unreal Blueprints (BP). I had worked with BP from time to time, but never in this capacity as a solo developer.

With the 3D models ready, I imported them into Unreal and started building all of the application’s required functionality that ties directly to those models.

One of my main goals for the back end was to make it as modular as possible: rooting out redundancy, maintaining standard functionality, and keeping the code as clean as possible.

Unreal provides a basic AR template to get started, which gave me a good chance to familiarize myself with AR-level code and then build upon it.
I am not a seasoned programmer or developer, but I do have a firm grasp of the concepts, and I love experimentation, so that is what I did.

I started reading the Unreal and Unity APIs, the ARCore documentation, and blogs, and visiting forums to understand what everything means and how I could use it to build my functionality.
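
To give a flavor of that AR-level code, below is a minimal C++ sketch of the core placement flow: tracing from a screen touch against the planes ARCore is tracking, then spawning a model anchored at the hit. My actual implementation is the equivalent Blueprint graph using UE4’s AugmentedReality nodes, and PlanetClass is a hypothetical stand-in for the planet actor.

```cpp
#include "ARBlueprintLibrary.h"
#include "Engine/World.h"
#include "GameFramework/Actor.h"

// Trace a screen touch against ARCore's tracked planes and, on a hit, spawn
// the model anchored there in world space, so it stays put when the camera
// moves, unlike a simple camera overlay.
AActor* PlacePlanetAtTouch(UWorld* World, TSubclassOf<AActor> PlanetClass,
                           const FVector2D& ScreenTouch)
{
    TArray<FARTraceResult> Hits = UARBlueprintLibrary::LineTraceTrackedObjects(
        ScreenTouch,
        /*bTestFeaturePoints=*/false,
        /*bTestGroundPlane=*/false,
        /*bTestPlaneExtents=*/true,
        /*bTestPlaneBoundaryPolygon=*/true);

    if (Hits.Num() == 0)
    {
        return nullptr; // No tracked plane under the finger yet.
    }

    // Spawn at the first tracked-plane hit, in world coordinates.
    const FTransform HitTransform = Hits[0].GetLocalToWorldTransform();
    return World->SpawnActor<AActor>(PlanetClass, HitTransform.GetLocation(),
                                     HitTransform.Rotator());
}
```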

I made a Notion board to list down everything I had to do to complete this app; the tasks ranged from back end to front end to enhancements, bugs, and future upgrades.
It helped me see which tasks were pending and what else was on the table.

Before each weekend, I opened this Notion board and picked out the tasks for that weekend.
My development process started with brute force: build the functionality and check whether it works. If it did, I reworked it again and again to identify redundancy, then compressed it into a library of common functions that could be called for any purpose in future development cycles.

Over months of iterating, debugging, regular testing, and experimentation, I completed almost all of the tasks on the Notion board to the best of my abilities.

Some features, such as swiping and gesture-based interactions, proved very hard for me to build. I could not figure out how to approach them, so I put them on hold, but I provided alternate functionality using range sliders to compensate; a sketch of that fallback follows below.
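
As an illustration of that range-slider fallback, here is a hypothetical C++ sketch standing in for the Blueprint version: the slider’s 0–1 value is simply mapped onto a scale range for the spawned planet. The scale bounds are illustrative.

```cpp
#include "GameFramework/Actor.h"
#include "Math/UnrealMathUtility.h"

// Fallback for pinch-to-zoom: a range slider drives the model's scale.
// Intended to be called from the slider's OnValueChanged event.
void ScaleSpawnedPlanet(AActor* Planet, float SliderValue)
{
    const float MinScale = 0.25f; // illustrative bounds, tuned by eye
    const float MaxScale = 3.0f;

    const float NewScale = FMath::Lerp(MinScale, MaxScale,
                                       FMath::Clamp(SliderValue, 0.0f, 1.0f));

    if (Planet)
    {
        // Uniform scaling keeps the planet spherical at any size.
        Planet->SetActorScale3D(FVector(NewScale));
    }
}
```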

Many bugs are still present, but I’ve noted them all down, I fix some each weekend, and I publish regular updates with those fixes.
And I keep trying to learn and build the missing features, as each day I learn something new.

Creating Front-end

Unreal Engine 4’s UMG Editor

Unreal Engine 4 has a built-in UI design tool, UMG, which solved my front-end creation problem.

But though it is a very nifty addition to UE4, there are specific challenges associated with it as well:

  1. I had never used the UMG tool to create front ends in UE4.
  2. Translating my Adobe XD mockups into an actual UI.
  3. Making the UI responsive enough to fit AR-enabled devices of various form factors.
  4. Tying all of the front end to its equivalent back-end functionality and integrating it seamlessly.

And once again, I found myself in Unreal’s documentation on UMG, where I read the entire thing from top to bottom, made quick notes, and understood where to use what.
But as with all new things, you only really learn by experimenting and reading about it regularly, so that is what I did.

After prolonged experimentation and reiterating on designs as I went along, I finally started to get a sense of what would make this UI feasible.

I started by breaking my design down into logical UI grid pieces, seeing which components needed to be placed to make the interaction feasible and easy.

Then I used UMG’s grid system to structure my layout and place the necessary components.
There are many options and features available for building UI; they just need exploring.

After placing the required components, I created their associated variables, which let me connect to the back end and show its values on the front end.
It took me a while to get the hang of this to-and-fro, associating back-end content with front-end variables, but once I did, I finally realized the UI I had designed in Adobe XD.
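
In text-code terms, this front-end/back-end tie looks roughly like the sketch below: a widget field bound by name to a component laid out in the designer, which the back end then writes into. My app does this with Blueprint widget variables instead of C++; UPlanetInfoWidget and its field names are hypothetical.

```cpp
// PlanetInfoWidget.h -- hypothetical C++ twin of the UMG info panel.
#pragma once

#include "CoreMinimal.h"
#include "Blueprint/UserWidget.h"
#include "Components/TextBlock.h"
#include "PlanetInfoWidget.generated.h"

UCLASS()
class UPlanetInfoWidget : public UUserWidget
{
    GENERATED_BODY()

public:
    // The back end calls this whenever the selected planet changes.
    void ShowPlanet(const FString& Name, const FString& Description)
    {
        PlanetNameText->SetText(FText::FromString(Name));
        PlanetInfoText->SetText(FText::FromString(Description));
    }

protected:
    // BindWidget links these to same-named TextBlocks placed in the designer.
    UPROPERTY(meta = (BindWidget))
    UTextBlock* PlanetNameText;

    UPROPERTY(meta = (BindWidget))
    UTextBlock* PlanetInfoText;
};
```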

Another part of UMG I learned was its front-end animation tools. I wanted motion on the screen to signify what type of action is required from the user, so I needed to pick up this particular skill. Frankly, it wasn’t as complicated as I thought it would be, and I managed to create those on-screen motions as well.
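
Playing one of those hint animations from code looks roughly like this. UE4’s BindWidgetAnim meta picks up an animation authored in UMG’s timeline editor; UARHintWidget and ScanHintAnim are illustrative names.

```cpp
// ARHintWidget.h -- hypothetical widget that loops the "move your phone"
// hint animation while the app waits for plane tracking.
#pragma once

#include "CoreMinimal.h"
#include "Blueprint/UserWidget.h"
#include "Animation/WidgetAnimation.h"
#include "ARHintWidget.generated.h"

UCLASS()
class UARHintWidget : public UUserWidget
{
    GENERATED_BODY()

public:
    void StartScanHint()
    {
        if (ScanHintAnim)
        {
            // NumLoopsToPlay = 0 loops the animation until StopAnimation.
            PlayAnimation(ScanHintAnim, /*StartAtTime=*/0.0f, /*NumLoopsToPlay=*/0);
        }
    }

protected:
    // BindWidgetAnim links this to the animation authored in UMG's timeline.
    UPROPERTY(Transient, meta = (BindWidgetAnim))
    UWidgetAnimation* ScanHintAnim;
};
```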

After creating each screen, I tested it on multiple devices to make sure it scaled the way I wanted it to.
Thanks to Unreal’s excellent guide to scaling responsive UIs, plus some minor adjustments, I ended up with a scalable UI.

All in all, creating this front end meant working with grid layouts, building components and their associated variables, merging those variables with their back-end counterparts, and creating animations.
I have to say, at the end of my front-end development cycle, I felt confident about what I had achieved in such a short span. It was not perfect by any means, but I managed to create exactly what I wanted to make.

Creating Splashscreen

Movie creation using Blender 3D

I did not expect to do this when I started this project, and I had no idea how to approach it.
But as with everything else in this project, I took it as a challenge to do to the best of my abilities.

First, I tried using Adobe Premiere Pro. But as many of you who have used it will know, Premiere Pro is not an easy tool to get familiar with, especially for someone who has nothing to do with video editing.
So after experimenting with it for a while, I gave up.

Then, after some rigorous thinking and walking, it came to my mind that there is another tool I could use to make a video like this.
It may not be the optimal choice, but I had experience with it from creating my 3D models: Blender.

Yes, Blender has built-in support for creating complex animations and movies and rendering them, much as you would in Premiere Pro.

So, thanks to my earlier 3D model of Earth and its textures, I created an animation segment of the world rotating in Blender.
I then converted the text “SpaceNest AR” into 3D models using a Blender plugin, designed another animation segment with it, and overlaid this on the previously created segment.

And finally, I rendered everything out in MP4 format. Boom: the splash screen was done in less than 90 minutes.

Publishing on Google Play Store

Photo by Brett Jordan on Unsplash

Publishing an app on the Play Store was not the straightforward process I was expecting.

  1. I didn’t know that you have to register and pay a fee to access the developer console.
  2. The Google developer console was going through a significant revamp, with the new UI being pushed aggressively, which is all well and good.
    The issue was that all the tutorials and documentation available on the net were based on the old UI, and there were profound changes in the UI, both in structure and in content placement.
    So I had to find my own way to bridge the gap between the old tutorials and the new UI.
  3. There are multiple stages an app needs to go through to get published on the Google Play Store, and there’s no way around them.
  4. The APK needs to be created correctly, specifically with the correct digital signature for both 32- and 64-bit systems. Otherwise, you can fail Google’s APK analysis, which is a mandatory first step in publishing your app.
  5. No one in my circle had experience in this domain.

So, after carefully reading all the Unreal documentation on packaging Android APKs, I signed up for the developer console and started exploring the UI to get the hang of it.

When I first uploaded my APK with a correct digital signature, I didn’t know that the signature file becomes permanently associated with the project. It cannot be changed or deleted; if you delete it and try to upload another APK, Google will treat that APK as a separate project.
And I had deleted that file, so I had to start the entire analysis process again.

The second time, I took special care with every step, packaged and uploaded the APK for both 32- and 64-bit systems, and left it for Google’s analysis.

After 56 hours (it was supposed to be 48), the app went live for internal testing, subsequently for beta testers, then open testing, and after three days of testing,

SpaceNest AR finally went live on the Google Play Store.

Design Feedback

Photo by Daoud Abismail on Unsplash

The application received good reviews and reactions from people who discovered it either through my LinkedIn posts or through organic search.

Even though the functionality provided was pretty rudimentary, people could use it with no hiccups at all.

As pointed out by many people,

  • they found the UI clean and scalable across various devices,
  • the interactions intuitive yet straightforward, and
  • the overall flow of the application very easy to understand.

For a UX designer experimenting with and learning interaction and design patterns in an entirely new domain, this feedback felt like a straight win.

But no product ever launches with zero issues, and my app was no exception.
People encountered many functionality-related bugs and pointed out an absolute need for additional features and gestures, with which I agree.
There’s also a lack of content; more can be added to explore various domains.

I managed to showcase the application to a group of children and let them explore the app as they wanted.
Almost all of them were able to start the app and spawn the planets with no issues.
Some interactions felt confusing to them, specifically tapping on a planet to get information.

I repeated the same process with a group of older users, and again, the patterns were straightforward for them to understand. They did, however, raise accessibility concerns, which was much-needed feedback.

That helped me confirm my hypotheses and gain valuable insights from this activity and from the project as a whole.

Conclusion

If you are reading this, you’ve most probably skimmed through most of the content, and I don’t blame you; it was a pretty long article.

I wanted to learn and create designs in a new medium (AR) and see what insights I can get through it.

Creating Design

I read many blogs and articles, went through multiple design iterations, got feedback from various sample groups, and then iterated on the designs.

Creating Product

After finishing the designs, I started creating this application in Unreal Engine 4, which included:

  • creating the front end
  • creating the back end
  • creating the 3D models
  • combining everything together
  • releasing the application

After Release

After release, I went through another round of product feedback and interviewed more people to understand the UX of AR applications.

I then shipped subsequent releases to address the bugs encountered and to lay the groundwork for future updates.

Learnings

I’ve compiled the insights gathered from all the interviews into an easily accessible Medium article, which you can read to understand user behavior and create good UX for AR apps.

It’s called: UX Insights for AR apps.

What can I do better?

Future Enhancement Flows

Though I was thrilled to complete the initial project scope, there’s still much work left to do.
With so many bugs and missing features to address, I started reworking things, fixing bugs one by one, and releasing updates each week.

During the entire development process, I learned many things and did many things for the first time, and
I feel there’s so much more I can add to this app to make it better and more accessible to everyone.
And that is what I am doing: adding more content, making this application even better, and exploring the AR domain further.

If you still haven’t downloaded SpaceNest AR, well, let me plug the application once more.

And that marks the end of this product study. If you liked what you read, give it a clap.
If you want to discuss this application, or anything about design in general, you can find me posting new things on my LinkedIn. And you can view my other work on my website.

Ankit Passi


I write about product design, design reviews, and the UX of video games.