How we made the Audi AI:ME’s flying VR experience — Part 1

Planning and executing a large-scale aerial Photogrammetry project

Azad Balabanian
Realities.io
Jan 23, 2020 · 9 min read


Screencap of our data in Reality Capture

This is part one of a three-part series explaining some of the biggest challenges we faced while making the photogrammetry VR experience for Audi’s CES 2020 demo.


Background

This year at CES 2020, Audi showed off their latest autonomous vehicle (AV) concept, the Audi AI:ME equipped with a VR headset as part of the in-car entertainment system for the passengers.

While wearing the headset, the passengers were transported to soar over the iconic karst mountains of Southeast Asia while the AI:ME autonomously drove them to their destination.

In contrast to last year’s CES demo, an action-packed Avengers space shooter developed by Schell Games, this year Audi wanted to explore how relaxing a ride in an autonomous vehicle could become if you could swap a loud, hectic city for a serene, beautiful landscape.

Audi brought us in as experts in VR experiences using Photogrammetry. Our objective was to find and capture beautiful landscapes to create a relaxing VR experience with enough captured material for an autonomous vehicle to be able to move in any direction.

This presented us with an immense challenge: capturing environments big enough to cover any direction the car might be heading, while still being interesting and beautiful to look at.

After 12 months of development, we’re happy to be able to finally talk about how we captured the locations and built this experience!

Flythrough trailer of the VR experience

Pre-Production and Production Organization

With the team on-site for 2.5 weeks and lots of unknown and uncontrollable variables, we needed to be super organized and efficient with our time.

While we did our best to scout potential locations beforehand using maps and satellite imagery, we couldn’t know anything for sure until we were there on foot, with a drone in the air, to determine whether the sites would work for our needs.

Daily call sheet preparations

Organization!

Since we were working with a local aerial video production team to operate the drones, it was essential to have a call sheet for every working day: teams arranged, locations marked, drivers booked, equipment charged and ready, drone flight plans drawn and saved, and emergency contact information and hospital locations accessible.

We started the day with a short meeting over breakfast going over the call sheet and ended the day with a post-mortem meeting going through notes from the day. This also acted as a good record for us to refer to in post-production if there were any questions about what was captured when and by whom.

Data management

The immense amount of data we were capturing made backing up, labeling, and organizing it a challenge. I set up a system to ensure that no capture team was missing any equipment and that no data was lost or mislabeled during transfer and backup.

Every night, we backed up and organized all the captured data (particularly the RAW images) and used the drone-generated JPEGs in Reality Capture to double-check the alignment of datasets that were tricky. For locations where we were unsure about certain elements in the scene (water, thin trees, very hazy images, etc.), we also ran the mesh reconstruction overnight to see how well the surface geometry turned out.
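Our actual backup tooling isn’t part of this article, but to give an idea of the kind of check involved, here’s a minimal Python sketch of a checksum-verified copy from an SD card to the primary SSD and the redundancy drive. The paths are placeholders, and the .DNG extension assumes the Mavic 2 Pro’s RAW format.

```python
import hashlib
import shutil
from pathlib import Path

def sha256(path: Path, chunk_size: int = 1 << 20) -> str:
    """Hash a file in chunks so large RAW images don't fill memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def backup_card(card_dir: Path, ssd_dir: Path, hdd_dir: Path) -> None:
    """Copy every RAW image to the primary SSD and the redundancy drive,
    verifying checksums so a silent transfer error can't slip through."""
    for src in sorted(card_dir.rglob("*.DNG")):       # assumed RAW extension
        rel = src.relative_to(card_dir)
        src_hash = sha256(src)
        for dest_root in (ssd_dir, hdd_dir):
            dest = dest_root / rel
            dest.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(src, dest)
            assert sha256(dest) == src_hash, f"corrupt copy: {dest}"

# Hypothetical paths, e.g. one folder per day and location:
# backup_card(Path("/mnt/sd"), Path("/mnt/t5_1/day04_loc02"), Path("/mnt/hdd/day04_loc02"))
```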

Equipment

During pre-production, we considered using a DJI Matrice 600 Pro with the Hasselblad A6D-100c for its ridiculous 100 MP medium-format sensor. However, after looking into the details, we realized how much more complicated things would get: six batteries for a 20-minute flight time, weighing 3.5 kg altogether. Not ideal for hiking through mountains.

Instead, we decided to use multiple Mavic 2 Pros with their 20 MP cameras. Flying closer to the subject can achieve similar if not better results (less atmospheric haze to deal with), at a lower cost, with more safety features (like side collision detection), and with overall simpler equipment to travel with.

For storage, we brought along eight SanDisk 256GB U3 V30 A2 microSD cards to minimize the buffer time between consecutive shots on the Mavic 2. We brought six 1TB Samsung T5 portable SSDs to back up the captured data, as well as a 5TB hard drive as a secondary redundancy drive in case any of the SSDs malfunctioned.

Every capture team had one Android phone with the DJI Go 4 app (necessary for operating DJI products) and Drone Harmony (used for automated flight path generation).

Each team was also equipped with a 10,000 mAh USB battery bank to charge the phone as well as the Mavic 2’s controller (which runs out of battery way quicker than expected, so this was super crucial).

Photogrammetry Capturing Challenges

There were a few major challenges from a photogrammetry perspective, which we’ll explain here.

Capturing locations that are detailed yet large enough for a VR experience

It’s quite easy to capture large locations if you fly high enough… but the higher you fly, the fewer pixels per meter you capture.

Given that this would be a 1:1 scale VR experience, we needed the amount of detail that we’re known for providing (as seen in our free VR app, Realities). This meant we needed to fly low to capture a ton of detail, and also fly high to capture a large area and its wider surroundings (to have parallax while moving through the scene, instead of just a flat skybox).
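To make the trade-off concrete: per-pixel detail is usually expressed as ground sample distance (GSD), which grows linearly with altitude. Here’s a quick sketch using approximate Mavic 2 Pro numbers; the 1-inch sensor and ~28 mm equivalent lens figures are assumptions for illustration.

```python
def gsd_cm_per_px(altitude_m: float,
                  sensor_width_mm: float = 13.2,   # ~1" sensor (assumed spec)
                  focal_length_mm: float = 10.3,   # ~28 mm full-frame equivalent
                  image_width_px: int = 5472) -> float:
    """Ground sample distance: how many centimeters one pixel covers."""
    return (altitude_m * 100 * sensor_width_mm) / (focal_length_mm * image_width_px)

for alt in (30, 60, 120):
    print(f"{alt:>4} m -> {gsd_cm_per_px(alt):.1f} cm/px")
# 30 m -> 0.7 cm/px, 60 m -> 1.4 cm/px, 120 m -> 2.8 cm/px:
# doubling the altitude halves the detail per pixel.
```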

Screencap from within Reality Capture

Automated flight patterns and capturing dense vegetation

Vegetation is typically the opposite of an ideal subject for photogrammetry: its branches and leaves produce high-frequency detail, which requires far more images to reconstruct cleanly than we could dedicate given the scale of the environments we wanted to capture.

We knew the locations we wanted to capture were full of dense vegetation, including tons of bamboo.

In our pre-production research, we tested different flight patterns to see how dense and sparse vegetation would reconstruct.

The typical “lawn-mower” pattern that’s common for drone photogrammetry, with the camera pointing straight down, works fine for producing orthographic photographs for map-making, but it does not produce great results for a VR experience where you face the 1:1 environment head-on.

poor reconstruction of the vegetation with orthogonal images

We discovered that overlapping, inward-facing orbits with multiple camera pitch angles and large enough radii produced great results: the images aligned easily and formed good stereo pairs for depth map generation.

larger orbits produced much better vegetation reconstruction
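To illustrate what such a flight plan boils down to, here’s a rough Python sketch that generates one inward-facing orbit as a list of waypoints. This isn’t Drone Harmony’s actual format; the flat-earth coordinate conversion and the example coordinates are simplifying assumptions.

```python
import math

def orbit_waypoints(center_lat, center_lon, radius_m, altitude_m,
                    gimbal_pitch_deg, n_shots=36):
    """One inward-facing orbit: the drone circles the subject, always
    yawing toward the center, and fires the shutter n_shots times."""
    m_to_deg_lat = 1.0 / 111_320.0                      # crude meters -> degrees
    m_to_deg_lon = m_to_deg_lat / math.cos(math.radians(center_lat))
    for i in range(n_shots):
        theta = 2.0 * math.pi * i / n_shots
        north, east = radius_m * math.sin(theta), radius_m * math.cos(theta)
        yield {
            "lat": center_lat + north * m_to_deg_lat,
            "lon": center_lon + east * m_to_deg_lon,
            "alt_m": altitude_m,
            # bearing back toward the center, so the camera faces the subject
            "heading_deg": math.degrees(math.atan2(-east, -north)) % 360.0,
            "gimbal_pitch_deg": gimbal_pitch_deg,
        }

# Stack overlapping orbits with growing radius and steeper camera pitch:
plan = [wp for r, alt, pitch in [(60, 40, -30), (90, 60, -45), (120, 80, -60)]
        for wp in orbit_waypoints(21.05, 105.80, r, alt, pitch)]
```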

With enough iteration, we established a general capturing protocol to build all our flight plans from, one that worked predictably even when we handed the capturing off to a drone operator without any photogrammetry experience. Basic drone and camera operation knowledge was still necessary, but not having to teach the operators the basics of photogrammetry gave us more time to capture on-site.

screenshot from our pre-production research log

What’s funny is that these concentric orbits ended up looking quite similar to the Audi logo, which made for nice screenshots of our dataset (like this article’s header image).

Screenshot from Drone Harmony

What was great about these automated drone flight plans was their repeatability and predictability, which led to more consistent data collection.

Although the flying and capture are automated, there is still a ton to manage as a drone operator, including:

  • adjusting exposure to reduce blown-out highlights and dark shadows
  • adjusting gimbal pitch to maximize the captured field of view from a single orbit
  • checking battery levels and calculating when to fly the drone back (DJI’s estimates don’t account for flying upwind, which matters when the drone’s mission is a kilometer or two away; see the sketch after this list)
  • checking the autofocus (which tended to get stuck at certain focus distances)
  • managing and charging drone batteries
  • managing and charging the DJI controller and phone batteries
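Since DJI’s remaining-flight-time estimate ignores wind, we kept a rough mental model of the return leg. Something like this back-of-the-envelope check captures the idea; the speed and drain constants are made-up illustrative values, not DJI’s figures.

```python
def return_margin_pct(distance_home_m: float, battery_pct: float,
                      airspeed_ms: float = 15.0,      # assumed cruise speed
                      headwind_ms: float = 0.0,       # wind opposing the return leg
                      drain_pct_per_min: float = 4.0, # assumed ~4% battery per minute
                      reserve_pct: float = 15.0) -> float:
    """Battery expected to remain after flying home against a headwind.
    Negative means: turn back now."""
    groundspeed = max(airspeed_ms - headwind_ms, 0.1)  # headwind slows the return
    minutes_home = distance_home_m / groundspeed / 60.0
    return battery_pct - reserve_pct - minutes_home * drain_pct_per_min

# Two kilometers out with 40% battery and a 6 m/s headwind:
print(return_margin_pct(2000, 40, headwind_ms=6.0))   # ~10% left: cutting it close
```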

Automating multiple drones and syncing

using the same drone plan for two drones flying in opposite directions

One solution we found for automating drone flight paths is Drone Harmony, a little-known aerial surveying tool with a ton of features for 3D scanning. With it, you can define the areas that need to be captured, and it generates flight plans with multiple overlapping orbits, as seen in the screenshot.

Having predefined flight paths meant we could use multiple drones sharing the same plans, thereby cutting the total capture time considerably (2 drones = 2x quicker, 3 drones = 3x quicker).

Although Drone Harmony doesn’t support syncing multiple drones as a feature, by transferring the flight plan files between the Android phones we could import the plans, reverse them (so the second drone starts from the end), or split and reverse them (when a third and fourth drone were capturing as well).
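The trick itself is simple enough that transferring the plan files between phones was all the syncing we needed. Here’s a sketch of the split-and-reverse idea, reusing the hypothetical waypoint format from the orbit sketch above.

```python
def plans_for_drones(plan, n_drones):
    """Split one flight plan into contiguous chunks, one per drone,
    reversing every second chunk so neighbors fly toward each other
    from opposite ends instead of chasing one another."""
    chunk = len(plan) // n_drones
    parts = [plan[i * chunk:(i + 1) * chunk] if i < n_drones - 1
             else plan[i * chunk:] for i in range(n_drones)]
    return [part[::-1] if i % 2 else part for i, part in enumerate(parts)]

# Two drones: A flies the first half forward, B flies the second half backward.
plan_a, plan_b = plans_for_drones(plan, 2)   # `plan` from the orbit sketch above
```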

Massive amounts of atmospheric haze and fog limiting visibility

This area is known for its gorgeous rolling morning fog, which is heavenly to wake up to but not-so-great when capturing data for photogrammetry.

After the fog, we also encountered quite a bit of atmospheric haze, which really limited the drones’ visibility range. This was an issue because we needed to capture the surrounding mountains from the planned “VR flythrough” zone without having to fly all the way to them.

We did our best to maximize our time out in the field. In the mornings, we planned around the haze by capturing smaller-scale environments where it was less of a problem.

Using DxO PhotoLab’s dehazing feature cleared up some of the haze as well, which improved our texturing image dataset.

Thankfully, once we applied real-time fog in the final experience, the hard-to-fix haze became less of an issue and things looked great.

before/after using DxO PhotoLab’s dehazing feature on RAW images

We’ll explore more about our post processing methods and the real-time fog shader in the next blog posts.

Resource allocation (how many drones per location and for how long) and charging batteries in-field

Resource management on a capture project of this scale requires constant iteration, since which location needs more drones or batteries than another changes day to day. Flexibility and the daily post-mortem meetings were crucial for adjusting allocations based on how each team felt about their resources.

Charging drained drone batteries in-field maximized the amount of data we could capture in a day. Each team had 4–5 batteries, and depending on the capture, we’d find a nearby store, cafe, or restaurant to set up a battery charging station. This way, each team could keep capturing while one person ran the drained batteries over and set them to charge.

There are better solutions (like a generator or a large portable power station) that we would rather use for future projects.

Finding areas for best line-of-sight

Sometimes it’s even hard to plan where you’ll launch the drones from. When the flight path is a few kilometers long and batteries are limited, it’s important to minimize the battery spent flying the drone to its initial waypoint.

It’s also important to have proper line-of-sight to ensure connectivity to the drone. If the connection is lost, return-to-home is triggered until the connection is re-established.
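Picking among candidate launch points mostly comes down to minimizing the ferry flight to the mission’s first waypoint. A small sketch using great-circle distance follows; the coordinates are hypothetical, and line-of-sight is still left to human judgment.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two lat/lon points."""
    r = 6_371_000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = p2 - p1, math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def best_launch_site(candidates, first_waypoint):
    """Pick the candidate launch point closest to the mission's first
    waypoint, i.e. the one wasting the least battery on the ferry flight."""
    return min(candidates, key=lambda c: haversine_m(*c, *first_waypoint))

# Hypothetical riverbank candidates vs. the mission's first waypoint:
sites = [(21.051, 105.801), (21.048, 105.806), (21.055, 105.799)]
print(best_launch_site(sites, (21.060, 105.810)))
```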

Sometimes, to find the best launching positions, we had to bargain our way onto little cow ferries to cross the river and set up launch sites in a farm. Whatever it takes!

different perspective of the header image

That’s all for now, folks! The three parts of this series:

Part 1: Planning and executing a large-scale aerial Photogrammetry project (this article)

Part 2: Photogrammetry processing and optimization

Part 3: VR performance challenges and creating realistic fog in Unity

Download the free Realities app on Steam

Explore the results of our photogrammetry workflow in beautiful and detailed environments from around the world in VR.

Download from Steam for Oculus Rift / HTC Vive.

Follow the Realities.io Team

The Realities.io team works on the cutting edge of VR + photogrammetry. Follow us on Twitter.
