The Journey to Native VR on a Raspberry Pi

Peyton Howe
5 min read · May 27, 2022

--

Ever since discovering the StereoPi board and an article about building a VR headset around it, I have been working to implement and improve my own version of native VR on a Raspberry Pi.

Initial Setup and Tests

I began my experiments by following the examples listed on the StereoPi website, with a slight twist: adding real-time text annotation. I wanted to overlay sensor data telling the user the temperature, pressure, humidity, altitude, and heading, along with the time, date, and CPU, GPU, and RAM usage.

After playing around with the native raspivid application, I realized there was no good way to do real-time text annotation other than adding image overlays or displaying the text in an OpenCV window behind the camera preview. Initial attempts worked, but were buggy due to raspivid's closed-source nature and its camera resolution requirements.

Left: raspivid top-bottom stereo mode with text annotation handled in an OpenCV window behind it. Right: raspivid side-by-side stereo mode, showing green bars because displaying text behind the feed required a resolution not divisible by 32.

There was also an annoying bug where raspivid would freeze randomly when OpenCV was used to display text behind the camera feed. I was advised to use PiCamera and MMAL instead of raspivid, since they offer more control over the camera feed and preview window. After a lot of trial and error, I achieved real-time text annotation using PiCamera and Python.
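For a sense of what this looked like in code, here is a minimal sketch of the idea. PiCamera's `annotate_text` and `stereo_mode` parameters are real parts of its API, but the sensor values, the formatting, and the wiring below are illustrative placeholders, not my exact script:

```python
# Sketch of real-time text annotation with PiCamera.
# Sensor readings here are placeholder arguments, not a real sensor driver.
from datetime import datetime

def build_overlay(temp_c, pressure_hpa, humidity_pct, altitude_m, heading, now=None):
    """Format one line of sensor data for PiCamera's annotate_text."""
    now = now or datetime.now()
    return ("{:%Y-%m-%d %H:%M:%S} | {:.1f}C {:.0f}hPa {:.0f}% "
            "{:.0f}m {}").format(now, temp_c, pressure_hpa,
                                 humidity_pct, altitude_m, heading)

# On the Pi itself (hardware-only, so shown as comments; the picamera
# API calls are real, the values are made up):
# import picamera
# with picamera.PiCamera(stereo_mode='side-by-side',
#                        resolution=(1920, 1080), framerate=30) as camera:
#     camera.start_preview()
#     while True:
#         camera.annotate_text = build_overlay(21.4, 1013, 45, 120, 'NE')
```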

Stereoscopic PiCamera feed with text annotation

Performance was fantastic: 30fps at 1080p with very little latency and real-time text annotation. But one looming issue remained: with the headset's lenses in place, the edges of the screen were nearly impossible to see, which made the text annotation useless.

Applying Distortion

To fix the lens issue and truly create a VR experience on the Pi, I needed to incorporate one crucial component of VR headsets: barrel distortion. Pre-distorting the image with barrel distortion counteracts the pincushion distortion of the headset's lenses, so no visual information is lost at the edges of the view and straight lines stay straight.

Barrel distortion seen on normal VR headsets

The issue with applying barrel distortion to the camera feed is that it is impossible using the native applications such as PiCamera or raspivid. Both expose a set of image effects, but those effects live in closed-source firmware, so there is no way to add new ones to either application.

Some of the image effects available using PiCamera and Raspivid.

After some research, the internet suggested I use OpenCV to apply the distortion. While super easy to implement, performance with this method is abysmal, since it relies solely on the ARM CPU: under 5fps at 1080p, which would be an absolute nightmare for any VR headset. Upon further investigation, another suggestion was GStreamer, which would let me stream the feed to my laptop and apply the distortion there.
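For context, the CPU-bound approach boils down to precomputing a per-pixel lookup table from the standard radial distortion model, r_src = r_dst * (1 + k1*r^2 + k2*r^4), and re-warping every frame with it. A minimal numpy sketch of that table, with illustrative coefficients rather than values fitted to any real lens (`cv2.remap` is the real OpenCV call that would consume these maps each frame, and it is that per-frame warp that crawls on the Pi's ARM CPU):

```python
import numpy as np

def barrel_maps(width, height, k1=0.22, k2=0.10):
    """Build per-pixel lookup maps for barrel distortion.

    For each output pixel, compute which source pixel to sample using
    the radial model r_src = r_dst * (1 + k1*r^2 + k2*r^4) in
    coordinates normalized to [-1, 1] around the image center.
    cv2.remap(frame, map_x, map_y, cv2.INTER_LINEAR) would then warp
    each camera frame with these maps.
    """
    ys, xs = np.indices((height, width), dtype=np.float32)
    x = (xs - width / 2) / (width / 2)    # normalize to [-1, 1]
    y = (ys - height / 2) / (height / 2)
    r2 = x * x + y * y
    scale = 1 + k1 * r2 + k2 * r2 * r2
    map_x = (x * scale) * (width / 2) + width / 2
    map_y = (y * scale) * (height / 2) + height / 2
    return map_x, map_y

map_x, map_y = barrel_maps(64, 64)
# The center pixel samples itself; edge pixels sample beyond the frame,
# which is what pulls the visible image inward into the barrel shape.
```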

While this method let me apply the distortion and maintain 30fps at 1080p, I could not stream the feed back to the Pi for display, nor would I want to: streaming introduces latency, and a round trip from the Pi to my laptop and back would add far too much of it for a usable headset.

Defeated and nearly out of options, I made one last-ditch effort: using the GPU on the Pi.

Enter OpenGL

As you may or may not know, the GPU firmware and documentation on the Pi were originally closed source, until Broadcom released the specs in 2014. Since then, a dozen or so GitHub repositories and forum threads have linked to projects that use OpenGL on the Pi. However, only a handful involved a camera feed, and even fewer would actually compile on the latest version of Raspbian Buster, as most of the repositories were 6+ years old.

After weeks of searching for an example that would actually compile, and on which I could base my code, I finally found VC4CV by Seneral. Not only did it compile, it used the MMAL -> OpenGL fast track I needed for my VR display. Even with the library compiling and running, I still had a lot of work to do: it simply displayed the feed from one camera using OpenGL, and I had to teach myself C++, OpenGL, and the structure of graphics programming along the way.

Early Attempts at Barrel Distortion using OpenGL shaders

Seneral helped me modify his repository to incorporate barrel distortion using both OpenGL shaders and an OpenGL mesh, and to add my own text annotation function. A Raspberry Pi developer and one of the StereoPi developers helped me initialize and use two different cameras in OpenGL.
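The mesh variant of the distortion is worth a quick illustration. Instead of remapping every pixel on the CPU, you upload a coarse vertex grid once and let the GPU interpolate distorted texture coordinates across it, so the fragment shader stays a plain texture fetch. The sketch below builds such a grid in numpy; the coefficients and grid layout are illustrative, not the actual geometry from VC4CV:

```python
import numpy as np

def distortion_mesh(cols, rows, k1=0.22, k2=0.10):
    """Vertex grid with barrel-distorted texture coordinates.

    Screen positions (x, y) stay on a regular grid in clip space
    [-1, 1]; each vertex's (u, v) is pushed outward by the radial
    model r' = r * (1 + k1*r^2 + k2*r^4). The GPU interpolates the
    distortion across triangles, so per-fragment work is just a
    texture lookup. Returns a (rows+1, cols+1, 4) array of
    [x, y, u, v], which would be uploaded once as a vertex buffer.
    """
    y, x = np.meshgrid(np.linspace(-1, 1, rows + 1),
                       np.linspace(-1, 1, cols + 1), indexing='ij')
    r2 = x * x + y * y
    scale = 1 + k1 * r2 + k2 * r2 * r2
    tu = (x * scale + 1) / 2   # distorted UVs, mapped to [0, 1]
    tv = (y * scale + 1) / 2
    return np.stack([x, y, tu, tv], axis=-1)

mesh = distortion_mesh(16, 16)  # 17x17 vertices, built once at startup
```

A 16x16 grid is far cheaper than a per-pixel remap, which is why this approach runs comfortably on the VideoCore IV while the CPU version does not.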

Left: First working Barrel Distortion shader. Right: First working text annotation on both camera feeds

Putting the two together with a lot of trial and error and some overclocking (the GPU overclocked to 400MHz), I got native VR working on the Pi. It natively maintained 30fps at 1080p for a barrel-distorted stereoscopic camera feed annotated with sensor data from a heart-rate and SpO2 monitor and an infrared temperature sensor.
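For reference, a GPU overclock like this is normally set in /boot/config.txt. Something along these lines should match the 400MHz figure, though the exact entries depend on your board and cooling, so treat this as a sketch rather than my verified settings:

```ini
# /boot/config.txt - clock the VideoCore IV GPU blocks to 400 MHz
gpu_freq=400
# Optionally pin the clocks so they do not scale down under load
force_turbo=1
```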

Stereoscopic VR camera feed distorted natively on the Pi using OpenGL

What This Means and Going Forward

Using the GPU, you can create a native VR display on the Raspberry Pi. You can overlay data from any sensor or board connected over a serial connection, or display the Pi's own performance stats, such as CPU and GPU usage.

This currently only works on Pis with the VideoCore IV GPU that support two cameras, which includes the Compute Module 3 and, theoretically (this hasn't been tested), the Pi 3 using a camera HAT. Here is the link to my GitHub repository, with detailed setup steps and instructions for running it on your own Pi. Moving forward, I am working to port the project to the Pi 4 and its VideoCore VI to improve performance and hopefully add features such as object tracking or facial detection.

If you have any ideas or want to help out, you can find me on the Raspberry Pi forums as peytonicmaster and the StereoPi Forums as phowe.


Peyton Howe

Trying to bring native VR to the Raspberry Pi for advanced VR headsets and heads-up displays.