IMD Final: Fixed Installation AR

(Anticipating Surveillance Feudalism)

There’s currently a schism forming between Europe’s GDPR and the standard practices of countries like China. This has led companies like Google to attempt to develop alternative software for alternative politics.

This failed as geopolitical pressure forced Google to abandon what the Western world deemed “totalitarian and Orwellian intentions.” Separate laws regarding the collection of data will inevitably lead to different collections of that data, in scope and in type. This, in turn, will lead to increasingly proprietary definitions of these archives of our personal information. Stasis is difficult. Economically, and frankly in Darwinian terms, the norm is either expansion or contraction. If we are not expanding global access to everyone’s personal data, we will likely see demand for privacy at an institutional level, if not a personal one.

We’re still exploring what’s possible with augmented reality. Most applications are small in size, and typically they focus on placing and manipulating a single object in the world. It doesn’t have to be this way. The potential of augmented reality holds the answers to some of the structural and cultural issues we’re currently facing.

For my final I wanted to experiment with UX principles and think about what it would take to design a large-scale augmented reality experience that could potentially address some of the above. What would it take to use a program like this effectively?

The Problem: Context

Bushra Mahmood

To sum up her article: if a user isn’t able to physically place an object in the world, how is the user supposed to know where and how to interact with the augmented reality experience?

My answer: real-world cues.

Where to stand, where to place your phone, whether the AR app worked as intended: there’s a possible marriage between the digital and the physical, so long as we can define the general area where the experience must take place.

I had a number of possible designs for telling users which direction to place their phone in, and handed out some corona-safe beers to anyone walking by my place who wanted to try them out. Ten drinks later I had enough information to pick the simplest one. Detailed here:

I first had this idea when I was attempting to design a grocery store service using AR. There’s almost nowhere we go on a regular basis with more physical signage and direction than a grocery store, and because of that context I took for granted how easy it would be to design a large-scale AR experience.

Fortunately I had my first project to fall back on: AR narrative design.

The practice of designing an experience that the user isn’t entirely aware of at the start of the app was extremely useful… because my iPhone would not render the entire scene. I had to walk to it, but more on that later.

The first bit of designing for a specific location is actually getting accurate dimensions of it. I figured the easiest way to do that given the current circumstances was to measure the street I live on.

I used this AR tape measure to get accurate measurements of each property, and of where the door was on each property, so I would know where to place the address markers.

Because the tape measure reported everything in feet, I converted the measurements to meters (Unity’s standard unit of measurement).
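The conversion itself is trivial. A minimal C# sketch (the example value is a placeholder, not one of my real measurements):

```csharp
// Unity works in meters; the AR tape measure reported feet.
public static class Units
{
    public const float MetersPerFoot = 0.3048f;

    public static float FeetToMeters(float feet) => feet * MetersPerFoot;
}

// e.g. a 25 ft property frontage:
// float widthMeters = Units.FeetToMeters(25f); // 7.62 m
```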

I then created a to-scale digital representation of the space.

Which I then tried out in my neighborhood. For one reason or another, it took a while to get it right. Here’s a video of what the project looked like when it first activated inside a home rather than in the designated spot.

Without a plane finder, Unity creates an AR scene using the camera’s facing and position as a reference. I realized pretty fast that immediate feedback on how accurately I managed to generate the scene would be important, for me and my users.
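A minimal sketch of that camera-relative anchoring: parent the to-scale neighborhood model to a root whose pose is set from the AR camera at startup. The names here (`arCamera`, `neighborhoodRoot`) are illustrative, not from a specific API.

```csharp
using UnityEngine;

public class SceneAnchor : MonoBehaviour
{
    public Transform arCamera;         // the AR Foundation camera
    public Transform neighborhoodRoot; // to-scale model of the street

    void Start()
    {
        // Place the model at the camera's position, rotated to match the
        // direction the phone is facing (yaw only, ignoring any tilt).
        Vector3 flatForward = arCamera.forward;
        flatForward.y = 0f;
        neighborhoodRoot.position = arCamera.position;
        neighborhoodRoot.rotation =
            Quaternion.LookRotation(flatForward.normalized, Vector3.up);
    }
}
```

This is exactly why starting pose matters so much: any error in where you stand or aim propagates to the whole scene.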

Once I had the specs figured out, I could turn off the mesh renderer on my Unity game objects and they would disappear. My floating address markers remained. You can also see them at night; I might add a picture of that later.
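A sketch of that step, attached to each stand-in “building” object (a hypothetical helper, not a built-in component):

```csharp
using UnityEngine;

public class HideGeometry : MonoBehaviour
{
    void Start()
    {
        // Disabling the MeshRenderer makes the object invisible;
        // its colliders and any child canvases keep working, so the
        // floating address markers stay visible and interactive.
        foreach (var meshRenderer in GetComponentsInChildren<MeshRenderer>())
            meshRenderer.enabled = false;
    }
}
```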

Here’s a video of the basic structure. I’ll get into some interesting problems after.

What I thought would be a simple implementation, a raycast from the center of the camera to one of four activated game objects (transparent, but roughly the size of each property) that would activate a canvas with information about each property, was in fact even easier than I thought it would be.

In Vuforia. That seems to be the general consensus of Stack Overflow and the Unity forums, at any rate. Reference points aren’t planes, and thus it’s difficult to raycast to them from far enough away to make my app feasible with the tools I was using, and Vuforia is pretty expensive judging by the contracts I looked at. At any rate, here’s a video explaining pretty much the exact application I was going to use, in six minutes.
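For reference, the raycast version I originally tried looked roughly like this, sketched from memory. `PropertyInfo` is an illustrative component holding one address’s canvas, not a built-in type.

```csharp
using UnityEngine;

public class PropertyGaze : MonoBehaviour
{
    public Camera arCamera;

    void Update()
    {
        // Fire a ray from the center of the screen.
        Ray ray = arCamera.ViewportPointToRay(new Vector3(0.5f, 0.5f, 0f));

        // If it hits one of the transparent property volumes,
        // show that property's info canvas.
        if (Physics.Raycast(ray, out RaycastHit hit, 100f))
        {
            var info = hit.collider.GetComponent<PropertyInfo>();
            if (info != null) info.ShowCanvas();
        }
    }
}
```

The code itself is short; the problem was the tracking, not the raycast.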

If I couldn’t create a solution using raycasting, what else could I use?

I created a small, transparent game object attached to the location of the AR camera and had that camera-object activate any of the larger property objects it came into contact with. On contact, the canvas displays the information relevant to a home. If the phone can’t raycast to the mountain, the mountain must come to the phone.
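The workaround, sketched: a small trigger collider rides on the AR camera, and overlapping one of the large property volumes toggles its canvas. As above, `PropertyInfo` is an illustrative component name.

```csharp
using UnityEngine;

// Attach to the small probe object parented to the AR camera.
// Note: Unity only fires trigger events if at least one of the two
// objects has a Rigidbody, so the probe carries a kinematic one.
[RequireComponent(typeof(Rigidbody))]
public class CameraProbe : MonoBehaviour
{
    void OnTriggerEnter(Collider other)
    {
        var info = other.GetComponent<PropertyInfo>();
        if (info != null) info.ShowCanvas();
    }

    void OnTriggerExit(Collider other)
    {
        var info = other.GetComponent<PropertyInfo>();
        if (info != null) info.HideCanvas();
    }
}
```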

You see the address you want information on, you walk to the property, you see what you want to see. Straightforward… and somewhat unwieldy.

Future Work/Applications

For this particular application:

More UX work needs to be done to figure out the right dynamics of displaying information and where the information can be displayed. Perhaps rather than a direct raycast, a controller could move a visible game object around the neighborhood and activate properties without walking to them.

Work on usability, the interface, and perhaps experimentation in Vuforia.

The average file size for my app is 55 MB, and it’d be smaller once it’s built out and modular. If the future is some sort of segmented cloud, in which various entities own competing rights to things like location data, one can imagine AR addresses being part of an attempt to replace cloud-computed GPS entirely.

Imagine a digital map of an entire city that, having set an accurate origin point, tracked your location via how far your phone has traveled from that origin point rather than via GPS. This program could allow you to navigate anywhere, and while the phone would update its city maps, your personal location would be entirely private. Or at least a secret between you and the company that ran the app.
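The core of that idea is dead reckoning from the AR camera’s pose, with no network call at all. A purely illustrative sketch:

```csharp
using UnityEngine;

// Once an accurate origin is calibrated, position is just displacement
// from that origin in meters, read from the tracked AR camera.
public class DeadReckoning : MonoBehaviour
{
    public Transform arCamera;
    Vector3 origin;

    // Call once, standing at a known real-world landmark.
    public void SetOrigin() => origin = arCamera.position;

    // Map this offset onto the local city model instead of ever
    // sending coordinates to a server.
    public Vector3 OffsetFromOrigin() => arCamera.position - origin;
}
```

Drift would accumulate over long walks, so a real version would need periodic re-calibration against known landmarks, but the location data never has to leave the phone.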

In general:

A general design system that provides physical context to digital experiences is coming at some point. It was fun, and a bit of a challenge, to work through some of the early hurdles associated with bringing it into the world. The main thing I learned was that immediate feedback on how well the AR experience is represented is key.

Here’s the G-Drive with what you’ve seen in the videos. I’ll update this link with the file for my within-AR-Foundation game-object fix when I’ve got one that will build to my phone.

I’m looking forward to the stories we’ll be able to tell once we’ve developed this new language.