Jon Brouchoud
Jan 6, 2018

Design Meeting 1 notes are [here]. Meeting 2 notes are [here]. Below are notes from Meeting 3.

Please Note: This post is one of several in a series where we’re brainstorming the UI/UX overhaul for Immerse Creator — available on Steam in Early Access (here). Immerse Creator is compatible with Oculus Rift CV1 and HTC Vive, with basic compatibility for Windows Mixed Reality headsets (the button mapping still needs some work).

We’re very interested in hearing your input! Please share thoughts below in comments, or on the Community Hub here.

In today’s Immerse UI/UX development meeting, we spent some time discussing implementation and design solutions for 3rd-person camera objects that users can position in the scene. This will enable users of the Immerse Viewer application to select and visit these views. We also discussed issues of scale — mainly, do we scale the avatar so people can work together on a scene at the same time but at different scales? Or do we let individual containers be the way we handle scaling, and develop more fine-tuned controls for how those containers scale?

We discussed the nature of Immerse ‘utility’ objects vs. scene objects. Maybe we graphically define and differentiate utility objects via a consistent visual language: a glow? A glowing wireframe shader?

Should utility objects look like they have some sense of gravity to enhance immersion, or are these purely virtual hovering objects?

  • Maybe we call the viewer Immerse Experience app?
  • Ability to use Immerse Creator for capturing a video podcast, or sharing a build with others
  • 3rd person cameras able to position in the scene
  • Asset Pack for cameras we can add?
  • Projected from a base point to suggest gravity?
  • Free floating with ambiguous energy field
  • Maybe there are Immerse elements that have a different visual language that are more like utilities or gizmos than other classes of objects
  • Should the cameras be customizable? Reskinnable?
  • Immerse system objects can be turned on and off
  • Also an opportunity for immersion: showing part of the production in the scene
  • Swappable skins for Immerse utilities
  • Like the tape measure — stays persistent in the scene
  • Local grid or world grid?
  • Grid fades on a radius?
  • Do we need a local grid at all?
  • Maybe we scale the avatar instead of the world / container
  • Each user has a lens they see the world through — the manipulation happens first, then a control layer manages scale?
  • We control scales: only allow 0.25, 0.5, 1, 1.5, 5, 10
  • Do we need molecular scale? Probably not
  • Maybe if we have more careful control of container scale, there wouldn’t be as much need to work at different avatar scales
  • Regarding the craft of having cameras in space
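The fixed-scale idea above (only allowing 0.25, 0.5, 1, 1.5, 5, and 10) could be enforced by snapping whatever scale a user requests to the nearest allowed value. Here is a minimal sketch of that snapping rule; the function name and nearest-value behavior are assumptions for illustration, not part of Immerse:

```python
# Allowed scale factors from the meeting notes.
ALLOWED_SCALES = [0.25, 0.5, 1, 1.5, 5, 10]

def snap_scale(requested: float) -> float:
    """Snap a requested avatar/container scale to the closest allowed value.

    Hypothetical helper: picks the allowed scale with the smallest
    absolute distance from the requested value.
    """
    return min(ALLOWED_SCALES, key=lambda s: abs(s - requested))

print(snap_scale(0.8))  # -> 1
print(snap_scale(3.0))  # -> 1.5
```

A control layer like the per-user “lens” mentioned above could call something like this whenever a user adjusts their scale, so everyone stays on one of the sanctioned steps.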


Tales from the virtual frontier.

Jon Brouchoud

Written by

Founder, CEO Arch Virtual. Passionate about using VR and AR to solve real problems, and contribute to positive change in the world.

