Vivid AR Portfolio Part I: Spatial Computing

Vivid · Published in vividcoin · Jan 7, 2019 · 5 min read

Our team wanted to give an under-the-hood peek at the past year of development of our iOS and Android application, Vivid AR Portfolio. Currently in beta, this application represents the world’s first extension of cryptocurrency market data into the 3D spatial world we all live in. In today’s article, we expand on what defines a Spatial Computer, and how that fits into the broader emerging computing landscape.

One of the earliest Vivid concept images

November 2017: The bulls had full control over the markets, with a froth of sweeping activity, pumps and dumps, moon shots, and a never-ending overflow of emotion.

With this contextual landscape established, the second ingredient that would become the bedrock of Vivid had long been ideated on: Spatial Computing. What is this, you ask? Spatial Computing, Augmented Reality, Extended Reality, and a half dozen other terms all describe a computing system that understands the physical world and can integrate digital content into it in real time.

Google Glass: Not Spatial Computing

When defining Spatial Computing, it helps to contrast it with other segments of the space, such as Google Glass. Glass is a small, single-eyed, see-through display that is head-locked: similar to having a transparent Apple Watch taped to your head. While many consider this Augmented Reality, the device does not comprehend the environment, or even basic surfaces, so it cannot integrate digital content into the real world.

Oculus Rift: Not Spatial Computing

Another segment of this emerging market is Virtual Reality, best highlighted by the recognizable Oculus Rift. VR immerses you in a digital space: users are blocked from the real world and presented with a fully artificial one. While the 3D interface and input principles are highly relevant, the fact that your local environment is ignored entirely means VR does not qualify as Spatial Computing.

Magic Leap: The Caviar of Spatial Computing (As of January 2019)

A true Spatial Computer: As we navigate the ever-changing waters of this emerging computing field, devices such as Microsoft’s HoloLens and Magic Leap’s Magic Leap One are defining the bleeding edge of what makes up a Spatial Computing platform.

One of the major underlying components of this new computing paradigm is the ability to do two things:

  1. Positionally and rotationally track the head (and hands) inside-out, in real time.
  2. Create a 3D mesh reconstruction of the local environment.

Layer in complex optics, eye tracking, hand tracking, SDKs, and much more on top of these fundamentals, and you have the makeup of these first-generation Spatial Computers.

Magic Leap One’s environmental mesh and example content

Why understand the physical world? Regardless of how we as a species arrived here, our core being is designed to navigate and dynamically understand the physical world around us. We locate items in a space, pick them up, and interact with them. We open a door to greet a friend; we sit on a couch and discuss concepts and ideas. These are intuitive, natural interactions rooted in the core of our being.

Path to success: Make the digital more like the physical

One of the earliest steps in giving our digital workflows mass-market appeal was the introduction of the graphical user interface (GUI). Limited by the capabilities of the hardware at the time, GUIs took the first step in reshaping bits into items our mental models could recognize: files, folders, and a desktop.

Technology catching up to the vision

Thanks to the core capabilities of a Spatial Computer, environmental meshing and contextual understanding, those files and folders can move back onto the physical desk in front of you, completing the cycle of abstraction the original Mac OS designers embarked on.

Apple ARKit: Bridging the gaps

With Spatial Computing well on track to become the next major computing paradigm after PCs and smartphones, several large obstacles remain. Price, form factor, optical quality, and more are all individual technology verticals, each with a buzzing hive’s worth of development behind it.

While technology giants and hardware startups the world over battle out these domains, marching toward the inevitable sleek glasses we all desire, there is a very real and very large market that exists today for Spatial Computing applications: your smartphone.

Thanks to the folks at Apple and Google, there are two major SDKs, ARKit and ARCore, that allow developers to quickly prototype and build Spatial Computing applications for mobile devices. We are in an era where nearly one billion devices globally are capable of running primitive Spatial Computing apps. Why primitive? Unlike higher-end devices such as the Magic Leap One, these tools only allow basic plane finding on floors and walls. You won’t get a higher-quality understanding of the environment, such as querying where a chair or couch is, or where a large open wall is, all things possible today on higher-end devices.
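As a rough illustration of how little code that plane finding takes, here is a minimal ARKit sketch in Swift. The class and label names are hypothetical (this is not Vivid’s actual code), but the ARKit calls shown are the standard ones for requesting plane detection and reacting when a plane is found:

```swift
import UIKit
import ARKit

// Hypothetical view controller sketch: enable ARKit plane finding
// and listen for detected surfaces.
class PortfolioARViewController: UIViewController, ARSCNViewDelegate {
    let sceneView = ARSCNView()

    override func viewDidLoad() {
        super.viewDidLoad()
        sceneView.frame = view.bounds
        view.addSubview(sceneView)
        sceneView.delegate = self
    }

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        let configuration = ARWorldTrackingConfiguration()
        // Ask ARKit to detect flat surfaces: floors, tables, walls.
        configuration.planeDetection = [.horizontal, .vertical]
        sceneView.session.run(configuration)
    }

    // Called each time ARKit anchors a newly detected plane.
    func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
        guard let planeAnchor = anchor as? ARPlaneAnchor else { return }
        // A detected plane is just a center and an extent; digital
        // content (say, a 3D price chart) can be parented to this node.
        let kind = planeAnchor.alignment == .horizontal ? "horizontal" : "vertical"
        print("Found a \(kind) plane with extent \(planeAnchor.extent)")
    }
}
```

Note how the app never gets told “this is a couch” or “this is a desk”: all ARKit surfaces is a position, orientation, and rough extent, which is exactly the primitive-versus-semantic gap described above.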

ARKit: Example of primitive plane finding

However, even with these limitations, the basic plane finding on offer means that, for the first time ever, we have a widespread computing platform that can layer digital content into users’ real world in real time.

This ends our introduction to the landscape of Spatial Computing and its adjacent kin. Our next article will dive deep into the development of Vivid AR Portfolio, and how we leveraged Spatial Computing to empower cryptocurrency traders everywhere.

Vivid AR Portfolio Beta for iOS & Android

Like what you read? Join us on Telegram to ask questions, or tell us how you think Spatial Computing will or won’t be the next computing revolution.

Twitter: https://twitter.com/VividPlatform
Website: https://vividcoin.app/
