About “Mixed Reality” (part 2)
Another use case for mixing up reality
In part one I reviewed how “Mixed Reality” is being used to describe several use cases, one being the use of a green screen for creating a view of the person experiencing VR from within the virtual world.
That use case is supported by the SteamVR plugin for Unity without requiring code, but I also provided some workaround code in case you're unable to obtain a third controller (see part one).
Another use case for MR or technically AV (Augmented Virtuality) is the use of the front facing camera on the Vive:
As first shown at CES earlier this year, when you enable the camera you can see a threshold-based contour of what the camera is seeing. The initial demo included placing a real physical chair in your play space so that you could sit down without taking off the headset.
The front facing camera points downward about 10 degrees and can be used to show a contour outline of what's beyond the chaperone borders, the grid that keeps you from walking outside your play space (shown above left). You can also display this outline on demand by double-pressing the system button, if you've enabled it in settings. A single press of the system button brings up the dashboard along with a full color, unprocessed camera view attached to one of your controllers, so you can hold it up to see a live camera view (above right). But don't call it a "pass-through" camera; call it the front facing camera or "tracked camera".
Adding front facing camera support to your Unity app
Recently Valve released front facing camera support to developers via the OpenVR APIs (the underpinning of the SteamVR plugin for Unity), along with example code for C++ developers. Here I'll present C# code for Unity developers.
Although the sample code presented here (based on the OpenVR example) may be construed as a “pass-through” view, it really can be much more if you consider adding something like the popular open source computer vision library, OpenCV. You can find a 3rd party Unity plugin or C# implementations (e.g. opencvsharp) with sample code.
With OpenCV you can add the threshold contour outline feature in addition to the many features OpenCV provides from face detection to motion detection. Potentially you could detect your cat walking into your play space.
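As a rough illustration (not part of the sample project), here's how a threshold-based edge pass might look with opencvsharp. The helper name is mine, and the exact enum and method names can vary between opencvsharp versions, so treat this as a sketch:

```csharp
using OpenCvSharp; // assumes the opencvsharp (OpenCvSharp) package is available in your project

public static class ContourHelper
{
    // Hypothetical helper: turn a raw RGBA camera frame into a binary edge image.
    public static Mat ContourOutline(byte[] rgbaFrame, int width, int height)
    {
        var src = new Mat(height, width, MatType.CV_8UC4, rgbaFrame);
        var gray = new Mat();
        var edges = new Mat();

        Cv2.CvtColor(src, gray, ColorConversionCodes.RGBA2GRAY); // drop color
        Cv2.Canny(gray, edges, 50, 150);                         // tune thresholds to taste
        return edges;                                            // upload to a texture for display
    }
}
```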
One example could be turning on the camera only during a setup scene, either to match a real world object to a corresponding virtual object or to use depth information to create a model matching the real one. Using the chair example, after initial alignment you should be able to sit down on your virtual chair, unless you physically move the real chair afterwards.
Building upon this, once you have it mapped out you can turn off the camera and pick up both the real and virtual chairs (or maybe try a cup), holding a controller in a specific position at the same time to maintain a one-to-one mapping. It's at this juncture that you can begin adding to presence with the sense of touch.
Augmenting virtuality with reality as a form of hyperreality.
Sample code walkthrough
When I used the system menu to call up the dashboard, I found myself not wanting to lift my controller and squint at a small window just to see the real time unprocessed camera view. So for the Unity version of the sample project I added a larger camera view and attached it to the HMD as a heads-up display (camview in the scene above). Inside my own application I can show or hide it with the app menu button on either controller, which simply enables or disables the camview object. Attached to camview is TrackedCameraScript.cs, which handles displaying the live camera feed and the required lifecycle.
Here’s the entire API for the front facing camera (Tracked Camera):
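(Paraphrased below from the CVRTrackedCamera class in openvr_api.cs; check your copy of the file for the exact signatures.)

```csharp
// CVRTrackedCamera (accessed via OpenVR.TrackedCamera), paraphrased from openvr_api.cs:
EVRTrackedCameraError HasCamera(uint nDeviceIndex, ref bool pHasCamera);
EVRTrackedCameraError GetCameraFrameSize(uint nDeviceIndex, EVRTrackedCameraFrameType eFrameType,
                                         ref uint pnWidth, ref uint pnHeight, ref uint pnFrameBufferSize);
EVRTrackedCameraError AcquireVideoStreamingService(uint nDeviceIndex, ref ulong pHandle);
EVRTrackedCameraError ReleaseVideoStreamingService(ulong hTrackedCamera);
EVRTrackedCameraError GetVideoStreamFrameBuffer(ulong hTrackedCamera, EVRTrackedCameraFrameType eFrameType,
                                                IntPtr pFrameBuffer, uint nFrameBufferSize,
                                                ref CameraVideoStreamFrameHeader_t pFrameHeader, uint nFrameHeaderSize);
// ...plus camera intrinsics/projection helpers that this sample doesn't use.
```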
The key to using the front facing camera API in C# is openvr_api.cs (found under the Plugins folder in SteamVR). This wrapper script is regenerated whenever new APIs are added to OpenVR, and it maps to the underlying native APIs.
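For example, here's a minimal sketch of getting hold of the wrapper and confirming the HMD has a camera (the class name is mine, and error handling is kept to a bare minimum):

```csharp
using UnityEngine;
using Valve.VR; // namespace used by openvr_api.cs in the SteamVR plugin

public class CameraCheck : MonoBehaviour // hypothetical script name
{
    void Start()
    {
        CVRTrackedCamera trackedCamera = OpenVR.TrackedCamera; // the generated C# wrapper
        bool hasCamera = false;
        EVRTrackedCameraError error =
            trackedCamera.HasCamera(OpenVR.k_unTrackedDeviceIndex_Hmd, ref hasCamera);

        if (error != EVRTrackedCameraError.None || !hasCamera)
            Debug.Log("No front facing camera found (is it enabled in SteamVR settings?)");
    }
}
```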
Note: currently Unity 5.4 is required (which is still in beta).
Let's step through the relevant parts of TrackedCameraScript.cs, skipping error checking as well as setup and teardown (acquiring and releasing the video stream service).
Quick pseudocode:
- In Start() (setup, called when camview is enabled), obtain camera frame dimensions and buffer size, allocate buffers (source and destination) and create the 2D texture for your camview.
- In Update() (called on each frame), obtain the camera’s frame buffer including header info (to check if there’s a new camera frame) and copy the buffer to the texture and apply it to camview.
Note: you can obtain the ViveTrackedCamera Unity project on github.
Start():
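A sketch of the setup (field names are mine and the OpenVR calls are paraphrased from openvr_api.cs, so see the github project for the actual code):

```csharp
using UnityEngine;
using Valve.VR;

public class TrackedCameraScript : MonoBehaviour
{
    private CVRTrackedCamera trackedCamera;
    private ulong streamHandle;            // from AcquireVideoStreamingService (setup/teardown omitted here)
    private uint width, height, bufferSize;
    private byte[] sourceBuffer;           // source: raw frame copied out of OpenVR
    private Texture2D texture;             // destination: texture applied to camview

    void Start()
    {
        trackedCamera = OpenVR.TrackedCamera;

        // Obtain the camera frame dimensions and the size of the raw frame buffer.
        trackedCamera.GetCameraFrameSize(OpenVR.k_unTrackedDeviceIndex_Hmd,
            EVRTrackedCameraFrameType.Distorted,
            ref width, ref height, ref bufferSize);

        // Allocate the source buffer and create the 2D texture for camview
        // (assumes a 4-bytes-per-pixel RGBA frame and a Renderer on camview).
        sourceBuffer = new byte[bufferSize];
        texture = new Texture2D((int)width, (int)height, TextureFormat.RGBA32, false);
        GetComponent<Renderer>().material.mainTexture = texture;
    }
}
```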
Update():
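And a corresponding sketch of the per-frame update, continuing the same class (again paraphrased; the real project is on github):

```csharp
// Continuing TrackedCameraScript from above.
// Add "using System;" and "using System.Runtime.InteropServices;" at the top of the file.

private CameraVideoStreamFrameHeader_t header;
private uint previousFrameSequence;

void Update()
{
    uint headerSize = (uint)Marshal.SizeOf(typeof(CameraVideoStreamFrameHeader_t));

    // First call: header only (null buffer), just to check whether there's a new frame.
    var error = trackedCamera.GetVideoStreamFrameBuffer(streamHandle,
        EVRTrackedCameraFrameType.Distorted, IntPtr.Zero, 0, ref header, headerSize);
    if (error != EVRTrackedCameraError.None)
        return;

    // Not a new frame? Skip the copy.
    if (header.nFrameSequence == previousFrameSequence)
        return;
    previousFrameSequence = header.nFrameSequence;

    // Second call: copy the raw frame into our buffer, then apply it to the texture on camview.
    var pinned = GCHandle.Alloc(sourceBuffer, GCHandleType.Pinned);
    trackedCamera.GetVideoStreamFrameBuffer(streamHandle,
        EVRTrackedCameraFrameType.Distorted, pinned.AddrOfPinnedObject(), bufferSize,
        ref header, headerSize);
    pinned.Free();

    texture.LoadRawTextureData(sourceBuffer);
    texture.Apply();
}
```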
Since the same method, GetVideoStreamFrameBuffer, obtains both the header and the image buffer, we need to call it twice. The first call (above) is just to obtain the header (hence the null for the image buffer). We then check the camera frame's sequence id from the header; if the frame is not a new one (we saved the previous id), we exit to avoid an unnecessary copy.
The second time we call it, we save the current sequence id and then copy the raw buffer data into the texture for display.
Wrapping up
The only other script in this project is TrackedControllerScript, which I attached to both controllers just to handle the app menu button click. It's boilerplate code (based on SteamVR_TrackedController.cs), along the lines of the sketch below. I'll cover abstracting input in a future post.
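For reference, here's a minimal version of that kind of handling (the class name is mine, and this uses the SteamVR_Controller polling style rather than the event-based approach in the actual project):

```csharp
using UnityEngine;

// Illustrative only: toggle the camview HUD with the app menu button.
public class ToggleCamView : MonoBehaviour   // attach to each controller
{
    public GameObject camview;               // assign the camera HUD object in the inspector
    private SteamVR_TrackedObject trackedObject;

    void Awake()
    {
        trackedObject = GetComponent<SteamVR_TrackedObject>();
    }

    void Update()
    {
        if (trackedObject.index == SteamVR_TrackedObject.EIndex.None)
            return;

        var device = SteamVR_Controller.Input((int)trackedObject.index);

        // App menu button shows/hides the camera HUD by enabling/disabling camview.
        if (device.GetPressDown(SteamVR_Controller.ButtonMask.ApplicationMenu))
            camview.SetActive(!camview.activeSelf);
    }
}
```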
There's a possible performance penalty to having a real camera in your scene (perhaps the reason it's disabled by default) due to copying each frame from the CPU to the GPU. Use it with discretion and keep an eye on the frame rate. I would not recommend building augmented reality applications with it, as this camera isn't meant for that. At the other extreme, don't just use it as a "pass-through" camera; it can be much more with some imagination. Hopefully this code will help you start using the already available SteamVR APIs for the front facing camera in Unity.
Note: to keep this post short I’ll revisit the green screen use case in a future post.