Here is a quick list, based on a Quora query, of ways that live-action virtual reality can be recorded:

It depends:

  • Photogrammetry
  • Videogrammetry
  • LiDAR (Light + Radar): produces a point cloud that can be converted into a mesh and retopologized for efficiency; combined with bracketed exposures for HDR, one can apply high-resolution texture maps with proper shading properties (metallic, matte, reflective, etc.)
  • Cylindrical camera arrays: if you want stereoscopy (“3D” for the person watching), you need an array with enough cameras for adjacent views to overlap by ~45 degrees for decent stitching; that overlap can come from more cameras and/or from “fisheye” lenses, and each choice has pros and cons (see the array-sizing sketch after this list)
  • Stitching software for multiple cameras, ideally with the ability to control for the warping introduced by fisheye lenses (a minimal undistortion sketch follows this list)
  • There are also camera arrays that are not cylindrical; again, each has pros and cons, and the choice is determined by budget and/or the content to be shot
  • Alternatives to camera arrays that look “outwards” are rigs that involve one or more mirrors. These can be problematic due to flares and dust/“schmutz” (yes, a technical term), but they try to solve the “parallax problem” by making the cameras face “inwards” toward the mirror, creating a single nodal point. This has its own distortion problems, but again, it depends on what one is trying to capture.
  • Ideally, shoot a minimum of 90 FPS, which matches the frame rates VR game engines target; this is imperative if one plans to mix the two, unless (perhaps, it depends) one is doing “volumetric” capture of humans, such as what startup darling 8i (http://8i.com/) achieves with videogrammetry on a chromakey soundstage (videogrammetry is photogrammetry plus the dimension of time, useful for capturing humans in motion)
  • Depending on how one shoots and the content in question, it might be necessary and/or desirable to digitally replace sections of the footage, using another mini-rig adjacent to the main rig, or the main rig in a slightly offset position, to do additional “passes” or a clean “plate.” The reason? The VR camera shoots in all directions, so it will capture a portion of itself, its supporting rig(s), and possibly the crew. If shooting in stereoscopic mode, this is ***insanely*** technical (not as difficult in monoscopic, aka 360 Video), and one should budget for a really good, high-end compositor on the team, or just accept the camera (or portions thereof) in the shot at all times.
  • An insane amount of HDD storage, as in “up to 11” (see the back-of-envelope math after this list)
  • Depending on what type of VR one is doing, how much, and at what resolution, you’re likely going to need a render farm, but it’s becoming more cost- and speed-efficient to use cloud storage and cloud computation (for all of the video stitching, correction, warping, codec manipulation, deliverables, etc.)
  • You’ll also want to properly capture 360/spatial audio for a better sense of immersion (microphones and codecs vary every bit as much as the cameras, mixing software for this is still in nascent stages, and it is ***not*** the same as surround sound); see the ambisonic sketch after this list
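To make a few of the points above concrete, here are some quick, purely illustrative Python sketches. First, sizing a cylindrical array: given a lens field of view and the ~45-degree stitching overlap mentioned above, a bit of arithmetic shows the trade-off between adding cameras and using wider fisheye lenses. The lens FOV values below are assumptions, not recommendations.

```python
# Rough arithmetic for sizing a cylindrical array: how many cameras are needed
# so adjacent views overlap by at least ~45 degrees for stitching.
# Assumes lenses wider than the overlap; the FOV values are illustrative.
import math

def cameras_needed(lens_fov_deg: float, overlap_deg: float = 45.0) -> int:
    """Cameras required so each adjacent pair of views overlaps by overlap_deg."""
    unique_coverage = lens_fov_deg - overlap_deg   # degrees of new coverage per camera
    return math.ceil(360.0 / unique_coverage)

for fov in (90, 120, 170):                         # wide, very wide, fisheye
    print(f"{fov:3d}-degree lenses -> {cameras_needed(fov)} cameras")
```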
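Next, the fisheye-warp point: a minimal sketch of undistorting a single camera's frame with OpenCV's fisheye model before handing it to the stitcher. The file name, intrinsic matrix K, and distortion coefficients D are placeholders; in practice K and D come from calibrating each camera in the rig (for example with cv2.fisheye.calibrate and a checkerboard).

```python
# Undistort one fisheye frame with OpenCV before stitching.
# K and D below are placeholder values, not a real calibration.
import cv2
import numpy as np

frame = cv2.imread("camera_03_frame_0001.png")   # hypothetical input frame
h, w = frame.shape[:2]

K = np.array([[w / 2.0, 0.0,     w / 2.0],       # placeholder intrinsics
              [0.0,     w / 2.0, h / 2.0],
              [0.0,     0.0,     1.0]])
D = np.array([[-0.05], [0.01], [0.0], [0.0]])    # placeholder fisheye k1..k4

# Build the undistortion maps once, then remap every frame of the clip with them.
map1, map2 = cv2.fisheye.initUndistortRectifyMap(
    K, D, np.eye(3), K, (w, h), cv2.CV_16SC2)
undistorted = cv2.remap(frame, map1, map2, interpolation=cv2.INTER_LINEAR)

cv2.imwrite("camera_03_frame_0001_undistorted.png", undistorted)
```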
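Then the storage point (“up to 11”): a back-of-envelope calculation of how quickly a multi-camera rig fills drives. The camera count, resolution, frame rate, bit depth, and compression ratio are all assumptions; plug in your own rig's numbers.

```python
# Back-of-envelope storage math for a multi-camera VR rig.
# Every number here is an illustrative assumption, not a spec.
cameras        = 8            # cameras in the array
width, height  = 2704, 1520   # per-camera resolution
fps            = 90           # matching the frame rate discussed above
bits_per_pixel = 30           # 10-bit x 3 channels, before subsampling
compression    = 10           # assumed on-camera codec compression ratio

bytes_per_second = cameras * width * height * fps * bits_per_pixel / 8 / compression
print(f"~{bytes_per_second * 60 / 1e9:.0f} GB per minute of footage")
print(f"~{bytes_per_second * 3600 / 1e12:.1f} TB per hour of footage")
```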
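Finally, spatial audio. The list doesn't name a format, but one common delivery format for 360 video is first-order ambisonics (B-format), which stores the direction of sound in four channels that can be rotated with the viewer's head at playback; that head-tracked rotation is exactly why it is not the same as surround sound. Below is a minimal encoding sketch, assuming the AmbiX convention (ACN channel order, SN3D normalization).

```python
# Encode a mono source into first-order ambisonics (AmbiX: ACN order, SN3D).
# The four channels describe the sound field's direction, not speaker feeds,
# so the mix can be rotated to follow the viewer's head at playback time.
import numpy as np

def encode_foa(mono, azimuth_deg, elevation_deg):
    """Return a (4, n_samples) W/Y/Z/X signal for a mono source."""
    az, el = np.radians(azimuth_deg), np.radians(elevation_deg)
    w = mono                               # omnidirectional component
    y = mono * np.sin(az) * np.cos(el)     # left-right
    z = mono * np.sin(el)                  # up-down
    x = mono * np.cos(az) * np.cos(el)     # front-back
    return np.stack([w, y, z, x])

# Example: a 1 kHz tone placed 90 degrees to the listener's left, at ear level.
sr = 48000
t = np.arange(sr) / sr
tone = 0.5 * np.sin(2 * np.pi * 1000 * t)
bformat = encode_foa(tone, azimuth_deg=90, elevation_deg=0)
print(bformat.shape)   # (4, 48000)
```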
None of the above is an extensive or exhaustive list, but I hope it’s enough to get you started. Good luck, remember to experiment always in this medium (where time and budgets permit), and have fun!