Experimenting with spatial communication — Dev Log

Samuel Tate
Published in The PHORIA Project
Nov 15, 2018 · 8 min read

At PHORIA, we think a lot about how communication will look with spatial computing. We ask, what will the vocabulary of the future be? We think that people won’t just communicate with words or images, but whole environments. We’ll edit our street to welcome a guest, we’ll leave our most precious memories as snapshots where they happened, like love hearts carved on trees.

From Keiichi Matsuda's Hyper-Reality

For this project, our brief was to capture the client’s conference theme: ‘the future of communication’. To bring this to life, we wanted to create a shared AR experience that could be a little taste of what a mixed reality future would look like. We also wanted to have a bit of fun. Imagine if we could hang emojis on every street corner, or start message threads hanging in our favourite cafes? We only had a month to pull it off, but it’s a space we’ve been working pretty hard in lately, so we figured — why not?


The tech stack — syrup and all

We’d just come off the back of creating our digital twin application, which uses image targets to align building information to a structure’s interior. Prior to that, we’d created shareVOX, a shared AR art tool, demoed at Google IO. We’d also been experimenting with Poly integration, uploading and downloading Poly models directly to our apps. We do all our work in Unity, as it allows for cross-platform support and integrates with most XR packages nicely. We decided to combine a few elements from each, to create a simple stack for shared AR communication.

Making it feel good

Even if you know what you’re doing, draw some diagrams

One thing we learned from our shareVOX sprint was that the better an app feels to use, the more comfortable people are experimenting. This is especially true in spatial applications, where we have to translate interaction expectations into a whole new medium. We started with object manipulation widgets, little tools on the selected emoji that let you drag, scale or rotate, built on Unity's transform system. But watching people use their phones, we realised everyone expected to be able to drag and pinch objects directly. A trick we learned from Owlchemy is to watch what people try to do, and then make the app do that.
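
To make that concrete, here's a minimal sketch of this style of direct manipulation in Unity: one finger drags the selected emoji along a raycast hit, two fingers pinch to scale and twist to rotate. The class and field names are illustrative, simplified from what shipped:

```csharp
using UnityEngine;

// Sketch of direct-manipulation touch handling (names are illustrative,
// not our production code). One finger drags the selected emoji along a
// raycast hit; two fingers pinch to scale and twist to rotate.
public class EmojiManipulator : MonoBehaviour
{
    public Transform selected;       // emoji currently being manipulated
    public Camera arCamera;          // the AR-driven camera
    public LayerMask placementMask;  // planes/meshes we can drag along

    void Update()
    {
        if (selected == null) return;

        if (Input.touchCount == 1)
        {
            // Drag: follow the raycast hit under the finger.
            Ray ray = arCamera.ScreenPointToRay(Input.GetTouch(0).position);
            if (Physics.Raycast(ray, out RaycastHit hit, 20f, placementMask))
                selected.position = hit.point;
        }
        else if (Input.touchCount == 2)
        {
            Touch a = Input.GetTouch(0), b = Input.GetTouch(1);
            Vector2 prevA = a.position - a.deltaPosition;
            Vector2 prevB = b.position - b.deltaPosition;

            // Pinch: scale by the change in finger separation.
            float scaleFactor = Vector2.Distance(a.position, b.position) /
                                Mathf.Max(Vector2.Distance(prevA, prevB), 1f);
            selected.localScale *= scaleFactor;

            // Twist: rotate by the change in the angle between fingers.
            float angleNow  = Mathf.Atan2(b.position.y - a.position.y,
                                          b.position.x - a.position.x);
            float anglePrev = Mathf.Atan2(prevB.y - prevA.y, prevB.x - prevA.x);
            selected.Rotate(Vector3.up,
                            -(angleNow - anglePrev) * Mathf.Rad2Deg,
                            Space.World);
        }
    }
}
```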

Some raycasts, finger tracking and gesture recognition later, we had a system that let us drag an object around the environment and pinch to rotate and scale it. While we didn't have a lot of time to add juice to the project, we did add a nice code-driven bounce that let us randomise the way emojis entered the scene, giving them a bit of life. Nothing beats maths to juice things up.
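
The bounce itself is just a damped sine wave on the spawn scale. A simplified sketch, with illustrative constants rather than our shipped values:

```csharp
using System.Collections;
using UnityEngine;

// Sketch of the code-driven bounce: grow each emoji in with a damped
// sine wobble, randomising the constants so no two arrivals look the
// same. Names and ranges are illustrative.
public class BouncyEntrance : MonoBehaviour
{
    IEnumerator Start()
    {
        Vector3 target = transform.localScale;
        float amplitude = Random.Range(0.2f, 0.5f);  // overshoot size
        float frequency = Random.Range(6f, 10f);     // wobble speed (radians)
        float duration  = Random.Range(0.5f, 0.9f);  // seconds

        for (float t = 0f; t < duration; t += Time.deltaTime)
        {
            float n = t / duration;                  // normalised 0..1
            // Damped sine: wobbles around the target, settling as n -> 1.
            float wobble = Mathf.Sin(n * frequency) * amplitude * (1f - n);
            transform.localScale = target * (n + wobble);
            yield return null;
        }
        transform.localScale = target;               // land exactly on target
    }
}
```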

We ended up using our own ‘emojis’ instead of downloading live from Google Poly. While having access to a limitless user-created 3D database is amazing, we wanted to ensure a consistent art style. We actually went through a few different styles and shaders, and landed on something that was approachable, recognisable and friendly looking.

An initial concept that was deemed 2 spooky
Evolution of a shader — we landed in the middle

While we think the future will involve uploading/remixing and editing content from databases like Poly, for this job, pre-packaged assets made the most sense.

Deal with it

Line it up to make it real

For shareVOX we used ARCore's Cloud Anchors, which make a scan of the space that other phones can use as an anchor. They're an amazing tool for shared AR, but can be a bit unreliable, especially when users aren't familiar with scanning. So for this project we used ARKit's local anchors, the iOS equivalent that maps the space on-device. Our neat little trick was to use an image target to trigger and align the experience. This meant everyone saw the same objects in the same space without the complexity of cloud anchors. Yuval Noah Harari talks about the idea of intersubjective reality, where shared ideas become real. When you share a digital object, it becomes something people see as a very real thing.
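
A sketch of the alignment trick, written here against ARFoundation's ARTrackedImageManager for illustration (not necessarily the exact plugin we built on): when the decal is detected, the whole shared content root snaps to its pose, so every device agrees on where "here" is.

```csharp
using UnityEngine;
using UnityEngine.XR.ARFoundation;

// Sketch of image-triggered alignment. When the shared decal is
// detected, the content root snaps to its pose, giving every device
// the same origin without cloud anchors. Names are illustrative.
public class DecalAligner : MonoBehaviour
{
    public ARTrackedImageManager imageManager;
    public Transform contentRoot;   // parent of all shared emojis

    void OnEnable()  => imageManager.trackedImagesChanged += OnChanged;
    void OnDisable() => imageManager.trackedImagesChanged -= OnChanged;

    void OnChanged(ARTrackedImagesChangedEventArgs args)
    {
        foreach (var image in args.added)
        {
            // Align the shared origin to the printed decal's pose.
            contentRoot.SetPositionAndRotation(image.transform.position,
                                               image.transform.rotation);
        }
    }
}
```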

Multiplayer for dummies

In spitballing the idea, we realised we'd essentially promised a ten-player game by the end of the month. To achieve this without creating the next Fortnite, we used Firebase: a live database that lets you read and write in real time, and broadcasts the changes you make as events you can subscribe to. We passed updates and position changes to the database, which broadcast them to the other apps. Because everything was asynchronous, we had to solve lots of time travel paradoxes. What if one device passes a change to the system, but another device passes a contradictory change before the first gets uploaded? Chaos, that's what.
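
The shape of it, using the Firebase Realtime Database Unity SDK; the path and payload here are a simplified stand-in for our actual schema:

```csharp
using Firebase.Database;
using UnityEngine;

// Sketch of shared state over the Firebase Realtime Database. Each
// emoji writes its pose under a key; every client subscribes to
// change events on the same path. Schema is illustrative.
public class EmojiSync : MonoBehaviour
{
    DatabaseReference emojis;

    void Start()
    {
        emojis = FirebaseDatabase.DefaultInstance.RootReference.Child("emojis");
        emojis.ChildAdded   += OnRemoteChange;
        emojis.ChildChanged += OnRemoteChange;
    }

    // Push a local move up to the database.
    public void PublishPose(string id, Vector3 p)
    {
        var pose = new Pose3 { x = p.x, y = p.y, z = p.z };
        emojis.Child(id).SetRawJsonValueAsync(JsonUtility.ToJson(pose));
    }

    void OnRemoteChange(object sender, ChildChangedEventArgs args)
    {
        var pose = JsonUtility.FromJson<Pose3>(args.Snapshot.GetRawJsonValue());
        // ...look up the emoji by args.Snapshot.Key and move it locally...
    }

    [System.Serializable]
    class Pose3 { public float x, y, z; }
}
```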

We only really got to properly test multiplayer a few days before the deadline. We'd hired 10 iPad Pros to ensure a good experience, but this limited our ability to test during development. We'd also had to spend a lot of time getting the final decal completed (it had to look good and act as a reliable marker). With all the pieces in play, we found hundreds of megabytes were being uploaded per session. On top of that, everything went haywire when we used the image target: our alignment system was 'moving' everything at once, so every device was broadcasting and receiving every object's position every frame. This was a few days before the conference, and we had to face the fact that objects were disappearing, vibrating and not behaving themselves. Thankfully our tech lead and resident time lord created a system that handled this logic: local systems that reconcile multiple timelines, so when an emoji moved, it stayed there.
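
The core of the fix is mundane: only broadcast a pose when it has genuinely changed, rate-limit the writes, and suppress broadcasting entirely while the alignment system is moving everything at once. A simplified sketch, with illustrative thresholds:

```csharp
using UnityEngine;

// Sketch of broadcast throttling: skip writes when nothing has really
// moved, rate-limit the rest, and go silent during realignment.
// Thresholds and names are illustrative.
public class PoseThrottle : MonoBehaviour
{
    const float MinMove = 0.01f;      // metres before we bother broadcasting
    const float MinInterval = 0.2f;   // seconds between broadcasts

    Vector3 lastSent;
    float lastTime;
    public bool suppressed;           // set while the image target realigns everything

    public bool ShouldBroadcast(Vector3 current)
    {
        if (suppressed) return false; // realignment moves every object at once
        if (Time.time - lastTime < MinInterval) return false;
        if ((current - lastSent).sqrMagnitude < MinMove * MinMove) return false;

        lastSent = current;
        lastTime = Time.time;
        return true;
    }
}
```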

Interface and aesthetic

With the original concept, we wanted to play with notes of retro-futurism and cheesy nineties sitcom aesthetics (weird mix, but bear with us). While we were exploring the future of spatial messaging, we knew it wouldn't just be text messages hanging in the sky, so we wanted to be a bit tongue-in-cheek.

I’ll be there for you

We loved this style, but in the end decided on a more neutral palette, so the messages were recognisable and easy to parse; we didn't want the brand to get in the way of the experience. Thankfully we'd been experimenting with separating our display and data-management layers, so we just had to feed our UI system differently styled components and the adaptive display took care of the rest. To make the messages scale, we had to account for world-space size and offset the canvas appropriately to get them scrolling and bouncing. But if we can't obsess over tiny UI details as the deadline looms, what's the point?
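
The world-space handling boils down to billboarding each message canvas at the camera and scaling it with distance so its on-screen size stays readable. A simplified sketch, with illustrative constants:

```csharp
using UnityEngine;

// Sketch of keeping a world-space message canvas legible: face it
// toward the camera and grow it with distance so its apparent size
// stays roughly constant. Constants are illustrative.
public class MessageBillboard : MonoBehaviour
{
    public Transform cam;                   // AR camera transform
    public float sizeAtOneMetre = 0.0015f;  // canvas scale at 1 m

    void LateUpdate()
    {
        // Face the camera.
        transform.rotation =
            Quaternion.LookRotation(transform.position - cam.position);

        // Scale with distance to hold on-screen size.
        float d = Vector3.Distance(cam.position, transform.position);
        transform.localScale = Vector3.one * sizeAtOneMetre * d;
    }
}
```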

Connecting AR to SMS

The client is one of the main providers of SMS infrastructure to businesses, and we wanted to experiment with their API. With a few lines of JavaScript, any system can send SMS, with metadata and all kinds of other bells and whistles. We experimented with notifying a user when someone replied, bringing them back to the AR chat room, then extended this so a user could reply via SMS and have it appear in the AR chat room. Aside from it being cool to see an SMS pop up as an AR bubble, we're always looking for ways to leverage established infrastructure. We know how to bring AR to life on people's phones, and with SMS we can bridge the gap to users who don't have the app.
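
On the server side it's a few lines of JavaScript; from the app's side, firing a notification is just an HTTP POST. Here's a sketch with an entirely hypothetical endpoint and field names standing in for the client's real API:

```csharp
using System.Collections;
using UnityEngine;
using UnityEngine.Networking;

// Sketch of the SMS bridge from the app's side. The endpoint and
// field names are hypothetical stand-ins for the client's real API.
public class SmsNotifier : MonoBehaviour
{
    const string Endpoint = "https://example.com/api/sms"; // hypothetical

    public IEnumerator NotifyReply(string phoneNumber, string roomId)
    {
        var form = new WWWForm();
        form.AddField("to", phoneNumber);
        form.AddField("body", $"Someone replied in your AR room ({roomId})!");

        using (var req = UnityWebRequest.Post(Endpoint, form))
        {
            yield return req.SendWebRequest();
            if (req.isNetworkError || req.isHttpError)
                Debug.LogWarning($"SMS send failed: {req.error}");
        }
    }
}
```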

The big day

So we got the chance to set up early and watch thought leaders talk about what messaging means for the future. What really resonated from their talks was that whether it's SMS, bank transfers or a 3D world, we're really just passing messages back and forth, and any way we can build on that experience is a boon to humanity. Then they let 150 developers, salespeople and customer support agents loose in our AR world, split into teams and challenged to experiment with how a person or a brand might use it to tell a story.

What we got was a carpet of gibberish and thousands of emojis, as thick as lice.

However, in the chaos, order started to form. We found, as with all our AR experiences, that we needed to get people moving. This meant they could really start to understand what it means to share the space with AR objects. What we notice with shared AR is that as soon as people realise it's shared, it becomes real. They start tidying up, stepping around each other's creations, and pointing to things that don't exist as if they're really there.

Finally, and most importantly, little clumps of meaning started to emerge, much like single-celled bacteria clinging together to form life. Little vignettes, funny scenes and secret in-jokes, littered around a microcosm of a city. It really begs the question: when we are able to layer content on our world, what will the chaos look like? And when order does start to form, what story will it tell?
