How We Prototyped Our First AR App in Five Days (Part 1)
In December, we went to Potsdam for five days to create an AR prototype. This is how we went about it. This post is the first of three in which I’ll explain how we created our first AR prototype (spoiler alert: here are part two and part three). Our goal was to learn more about the technologies behind ARKit and ARCore, to understand potential use cases, and to build an app that users can try out.
Thanks to the Hasso Plattner Institute, we had a meeting room, a full set of workshop equipment and enough Mio Mio Mate to last us through long days of brainstorming, designing, developing and testing. The whole team was present and we decided to go for Google’s UX design process methodology, which is based on five phases to build a prototype: Unpack, Sketch, Decide, Prototype, Test.
We started by looking at the opportunities ahead of us. We looked at the current AR market and at some AR examples that are out there right now. We also watched Benedict Evans’ (Andreessen Horowitz) presentation about the future. The most interesting part for us was his point that Mixed Reality (MR) is still in the prototyping stage, as opposed to other technologies that are already at the stage of shipping commercial products.
Evans laid out two fundamentally different approaches when looking at MR:
- Add something to the world
- Look at the world and tell me about it
That very basic distinction was truly helpful, as we knew right away that we wanted to go for the second option. We coined this option LATWATMAI (“Look At The World And Tell Me About It”). In other words, we wanted to enable phones to detect something (via scanning) and then offer context about it.
Another exercise we went through that day was to look at AR as a new medium and to try to understand the level of impact AR can have on society. We picked McLuhan’s Laws of Media as a guideline and applied this model to AR:
This showed us that with AR, it is all about understanding and learning more about your immediate environment.
Based on that, we went through an exercise of thinking about some high-level problems users are facing with regard to their environment and how AR could help solve these issues:
- Add something to the world (ADD)
- Look at this and tell me about it (LATWATMAI)
- Collaboration (C)
Combining these thoughts, we came up with a first user scenario we wanted to look into:
Imagine you are going to London on vacation. You arrive at your rental apartment, ready to soak in a different culture and eager to learn a little more about your environment. For example, you want to learn a few words in the local language; in our case, it is a German tourist or business traveler going to an English-speaking country. With our AR app, you can scan objects in your apartment and learn more about them (e.g. what an object is called in the language of the country you are in). You not only get a translation but also a sentence in which the new word is used (in our prototype, we picked a Wikipedia entry, but obviously it could be anything, depending on the model you pick). Naturally, we thought of Airbnb customers as potential users of an app like that. The funny thing is: the very next day, Airbnb announced that they were working on something similar.
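The scenario above boils down to a simple pipeline: recognize an object, then attach context (a translation and an example sentence) to its label. Here is a minimal Python sketch of that flow; the recognizer, glossary, and sentence source are all hypothetical stand-ins — a real app would use ARKit/ARCore plus an on-device vision model and a Wikipedia lookup.

```python
from dataclasses import dataclass

def recognize_object(image_bytes: bytes) -> str:
    """Hypothetical stand-in for an on-device object classifier.
    A real implementation would run a vision model on the camera frame."""
    return "lamp"

# Tiny English->German glossary, a placeholder for a real translation service.
GLOSSARY = {"lamp": "die Lampe"}

# Placeholder for pulling a usage sentence, e.g. from a Wikipedia entry.
EXAMPLE_SENTENCES = {"lamp": "A lamp is a device that produces light."}

@dataclass
class ObjectContext:
    label: str        # what the classifier saw, in the local language
    translation: str  # the word in the user's own language
    example: str      # a sentence using the word

def context_for(image_bytes: bytes) -> ObjectContext:
    """Recognize an object and attach translation + example sentence."""
    label = recognize_object(image_bytes)
    return ObjectContext(
        label=label,
        translation=GLOSSARY.get(label, label),
        example=EXAMPLE_SENTENCES.get(label, ""),
    )
```

The key design point is that recognition and context lookup are decoupled: any classifier that emits a label can feed any context source, which is why the prototype could swap the Wikipedia entry for something else depending on the model.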
Tomorrow, I’ll talk about our next two phases and show you the first prototype we came up with.
About the author: Linda Rath-Wiggins, PhD, is the co-founder and CEO of Vragments, a Berlin-based VR startup that creates VR experiences in collaboration with newsrooms (e.g., this German VR example with Deutschlandradio Kultur). Vragments is developing a VR product called Fader that allows users to create and publish VR stories quickly and easily. Vragments produced a Fader use case in cooperation with the Center for Investigative Reporting.