Edit Reality Together

Announcing our $10.5M in funding, and sneak previews of what’s to come…

Do you remember the first time you were transported to a different world by a story? For me, that magical moment was while watching Spirited Away — my first Miyazaki film. For days afterward, I half-expected to see spirits and talking animals outside my window.

As time wears on, that altered sense of reality fades; the everyday pull of life and email yanks us out of these waking dreams.

What if we could edit reality — and give the storyworlds that we love a more persistent presence in our daily routines with friends and family?

We’ve been hard at work over the last year building a new kind of camera: one that doesn’t just let you capture the world, but lets you edit reality together with the people you care about, in physical spaces that matter to you.

A camera that lets you turn any physical location into a space for all kinds of shared experiences…whether it’s turning your office into a basketball court…


…adopting the animals you’ve always wanted from the savanna…


…capturing memories in place for yourself or others to find in the real world…

…or turning real spaces into persistent, evolving gardens with other people in your city…

Carrot farming outside the SF MoMA…

Under the hood, we achieve this by focusing on four core building blocks — spatial sharing, persistence, intelligence and browsing:

1. Spatial Sharing — Using a combination of computer vision and the sensors already on your phone, we allow your device to understand the geometric structure of any indoor or outdoor space. This lets you sync up your reality with anyone else in a single global coordinate system, and interact with each other through your camera in real time.

2. Spatial Persistence — Objects in the Ubiquity universe inhabit both space and time, allowing physical world primitives like life, death and scarcity to become attributes that digital entities can inherit.

3. Intelligence — To truly edit reality, computers need to understand the world semantically, the way humans do. Using a combination of traditional methods and more novel ones, including deep learning on three-dimensional inputs, we allow digital objects, like your rabbit avatar, to respect the laws of the real world, like the boundaries and physics of where your bed is.

4. Browsing — Putting it all together, in a way that works for anyone, anywhere, in AR, VR, MR or in a browser, is an important part of our vision. This means designing a tech stack that pushes the limits of traditional rendering and authoring engines. We’ll be talking about this more in the coming months.
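To make the shared-coordinate idea concrete, here is a minimal sketch (illustrative only, not Ubiquity6’s actual implementation): once each phone knows its own pose in a common world frame, a point anchored by one device can be mapped, with a rigid transform, to a position that every other device agrees on.

```python
import numpy as np

def make_rigid_transform(rotation, translation):
    """Build a 4x4 homogeneous transform from a 3x3 rotation and a 3-vector translation."""
    T = np.eye(4)
    T[:3, :3] = rotation
    T[:3, 3] = translation
    return T

def to_shared_frame(point_local, device_to_world):
    """Map a 3D point from a device's local frame into the shared world frame."""
    p = np.append(point_local, 1.0)  # homogeneous coordinates
    return (device_to_world @ p)[:3]

# Hypothetical pose for device A in the shared frame: rotated 90 degrees
# about the vertical (y) axis, standing 2 m along the world x-axis.
theta = np.pi / 2
R = np.array([[np.cos(theta), 0.0, np.sin(theta)],
              [0.0, 1.0, 0.0],
              [-np.sin(theta), 0.0, np.cos(theta)]])
T_a = make_rigid_transform(R, np.array([2.0, 0.0, 0.0]))

# An anchor placed 1 m in front of device A lands at a single world
# position that any other localized device can resolve and render.
anchor_world = to_shared_frame(np.array([0.0, 0.0, 1.0]), T_a)
```

In practice each device would estimate its own `device_to_world` pose by localizing against a shared map; the names and poses here are assumptions for illustration.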

Allowing anybody with a smartphone to edit reality is ultimately a way for us to achieve our mission — to unlock entirely new ways for people to connect in the physical spaces they care about.

Today, we’re excited to announce that we’ve raised $10.5M from some of the best people in entertainment and technology who believe in our vision — led by Index Ventures, with First Round Capital, Kleiner Perkins, Gradient Ventures (Google’s new AI fund), LDVP, A+E and WndrCo signing up for the journey.

If you’re interested in some of the underlying technology we’re using to make all this possible (real-time 3D mapping, photogrammetry, multi-user localization, and deep learning for semantic segmentation, to name a few), you can stay up to date with our blog here.

Our small but fast-growing team is looking for more dreamers to join us on our journey. If you’re one of them, please talk to us!

If you’d like to get early access to Ubiquity for iOS, Android and VR, please sign up here.

Until we meet!

Cofounder and CEO, Ubiquity6

*All demo videos were shot using the Ubiquity app for iOS.