Making Optimization for 3D Content Possible

Umbra 3D
3 min read · Mar 15, 2018


The Umbra Workflow

The first step for any new Umbra user is signing up for an account at umbra.io, the web front end for managing users, their content, and licenses. At umbra.io, the user can also download Umbra plugins for various software, such as Unity and Autodesk Revit. Once the plugin is installed in a Unity project (or its equivalent elsewhere), the user is ready to start actually working with Umbra. The workflow consists of three main parts: export, optimization, and streaming.

Figure 1 — The Umbra workflow

In the first stage, 3D content is exported into Umbra’s cloud-based 3D optimization and hosting platform, Composit. For instance, Umbra offers a Unity add-in, which can be used to export a Unity scene into Composit.

As Umbra offers APIs for various languages and environments, the Umbra export can be integrated into any source of 3D data, be it a 3D engine toolchain such as Unity or Unreal Engine, a 3D modeling software such as Autodesk Revit or Graphisoft Archicad, or any other source such as file format exporters or point cloud streams. Supporting virtually any kind of 3D, regardless of format, topology, complexity, and size, has been a key design principle. The export stage is fairly quick and simple, as it consists only of gathering the 3D shapes and materials (or in some cases, point clouds) and transmitting them securely into the cloud.
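To make the export stage concrete, here is a minimal C++ sketch of the kind of data an exporter plugin gathers and transmits to the cloud. All of the types and the `uploadToComposit` function are hypothetical illustrations of the data flow, not Umbra's actual API.

```cpp
#include <cstdint>
#include <string>
#include <vector>

// Geometry and material data an exporter plugin would gather from the
// host application's scene graph. These types are illustrative only.
struct Mesh {
    std::vector<float>    positions;   // xyz triplets
    std::vector<uint32_t> indices;     // triangle list
    int                   materialId;  // index into `materials`
};

struct Material {
    std::string name;
    std::string albedoTexturePath;
};

struct SceneExport {
    std::vector<Mesh>     meshes;
    std::vector<Material> materials;
};

// Stand-in for the secure upload (e.g. over HTTPS) to the cloud service;
// a real exporter would serialize and transmit the scene here.
bool uploadToComposit(const SceneExport& scene, const std::string& apiKey) {
    (void)scene; (void)apiKey;
    return true;
}

int main() {
    SceneExport scene;
    // ... populate `scene` by walking the host application's scene graph ...
    return uploadToComposit(scene, "YOUR_API_KEY") ? 0 : 1;
}
```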

After exporting the 3D assets into Umbra, the 3D optimization process follows. This process runs in the cloud, where computation resources can be allocated according to the expected computational load and the complexity of the input. Umbra generates a reconstruction of the input 3D model that lends itself to streaming with adaptive levels of detail. The reconstruction is hierarchical, meaning the model can be of any size, even an entire planet, and Umbra processes each node in the hierarchy individually and in parallel. The optimization phase typically takes some minutes to complete, depending on the complexity and the desired output resolution of the 3D model.
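The hierarchical, parallel nature of this processing can be sketched as follows. Umbra's actual reconstruction algorithm is not public, so this only illustrates the divide-and-conquer structure, with the stubbed `simplifyNode` standing in for whatever per-node simplification takes place.

```cpp
#include <functional>
#include <future>
#include <memory>
#include <vector>

// One node in an octree-like spatial hierarchy; every node carries its
// own simplified representation of the geometry it covers.
struct LodNode {
    int level = 0;                                   // 0 = coarsest
    std::vector<std::unique_ptr<LodNode>> children;  // spatial subdivision
};

// Stub: build a simplified version of the geometry inside this node.
void simplifyNode(LodNode& node) { (void)node; }

void processHierarchy(LodNode& node) {
    // Each subtree is independent, so siblings can be processed in parallel.
    std::vector<std::future<void>> jobs;
    for (auto& child : node.children)
        jobs.push_back(std::async(std::launch::async,
                                  processHierarchy, std::ref(*child)));
    simplifyNode(node);            // this node's own level of detail
    for (auto& j : jobs) j.get();  // wait for the subtrees
}

int main() {
    LodNode root;
    root.children.push_back(std::make_unique<LodNode>());
    root.children[0]->level = 1;
    processHierarchy(root);
}
```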

Figure 2 — The level of detail spectrum as shown on a part of a model

Once the 3D data has been Umbrafied, it can be streamed into the application for rendering. The Umbra runtime selects the appropriate levels of detail in the hierarchical representation and streams them in priority order, according to where the camera or headset is located and what it is looking at. The data arrives at the desired fidelity, depending on network capacity and device capabilities.
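A rough sketch of how such a runtime might prioritize streaming: estimate each node's projected size on screen, then fetch the largest, finest-detail chunks first. The heuristic and the thresholds below are illustrative assumptions, not Umbra's published selection logic.

```cpp
#include <algorithm>
#include <cmath>
#include <queue>
#include <vector>

struct Camera { float x, y, z; float fovY; int screenHeightPx; };
struct Node   { int id; float cx, cy, cz; float radius; };

// Rough projected size of a node's bounding sphere, in pixels.
float screenSizePx(const Node& n, const Camera& cam) {
    float dx = n.cx - cam.x, dy = n.cy - cam.y, dz = n.cz - cam.z;
    float dist = std::max(std::sqrt(dx*dx + dy*dy + dz*dz), 1e-3f);
    return (n.radius / (dist * std::tan(cam.fovY * 0.5f))) * cam.screenHeightPx;
}

struct Request { float priority; int nodeId; int lodLevel; };
bool operator<(const Request& a, const Request& b) { return a.priority < b.priority; }

// Queue streaming requests: bigger on screen means fetch first, and
// nodes covering more pixels get a finer level of detail.
std::priority_queue<Request> planStreaming(const std::vector<Node>& nodes,
                                           const Camera& cam) {
    std::priority_queue<Request> q;
    for (const Node& n : nodes) {
        float px  = screenSizePx(n, cam);
        int   lod = px > 512 ? 2 : px > 128 ? 1 : 0;  // illustrative thresholds
        q.push({px, n.id, lod});
    }
    return q;
}

int main() {
    Camera cam{0, 0, 0, 1.0f, 1080};
    std::vector<Node> nodes = {{1, 0, 0, 10, 2}, {2, 0, 0, 100, 2}};
    auto q = planStreaming(nodes, cam);
    // ... pop requests and issue network fetches in priority order ...
    (void)q;
}
```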

Of course, the Umbra runtime can be integrated into any rendering application; it is in no way Unity-specific. While the Umbrafied 3D data is already heavily optimized and compressed, the Umbra runtime also caches data, so that 3D assets do not need to be re-transmitted constantly. Entire sets of Umbrafied assets can also be synced for offline use in situations where network connections are unreliable.
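The caching idea can be sketched with a small LRU cache keyed by chunk id, so revisiting an area does not trigger a re-download. `fetchFromCloud` is a stand-in for the actual network fetch; Umbra's real cache design is not public.

```cpp
#include <list>
#include <unordered_map>
#include <utility>
#include <vector>

using Chunk = std::vector<unsigned char>;

// Stub: in a real runtime this would issue a network request.
Chunk fetchFromCloud(int chunkId) { (void)chunkId; return Chunk(); }

class ChunkCache {
public:
    explicit ChunkCache(size_t capacity) : capacity_(capacity) {}

    const Chunk& get(int id) {
        auto it = index_.find(id);
        if (it != index_.end()) {            // cache hit: mark most recent
            lru_.splice(lru_.begin(), lru_, it->second);
            return it->second->second;
        }
        if (lru_.size() == capacity_) {      // evict the least recent chunk
            index_.erase(lru_.back().first);
            lru_.pop_back();
        }
        lru_.emplace_front(id, fetchFromCloud(id));
        index_[id] = lru_.begin();
        return lru_.front().second;
    }

private:
    size_t capacity_;
    std::list<std::pair<int, Chunk>> lru_;   // most recent at the front
    std::unordered_map<int, std::list<std::pair<int, Chunk>>::iterator> index_;
};

int main() {
    ChunkCache cache(256);
    cache.get(42);  // first access downloads
    cache.get(42);  // second access is served from the cache
}
```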

Figure 3 — Level of detail in the hierarchy determined by the camera position and orientation in the runtime

The result is a pipeline that lets the user quickly and easily deploy any kind of 3D data, regardless of size and complexity, to any kind of device at a chosen fidelity. We’ve used it to build experiences such as one-click deployment of complex construction models onto the iPad or HoloLens for on-site AR 1:1 scale inspection. Here’s a video showing how you might actually use it:

Do you want to get your hands dirty and try it? Go to umbra.io and check it out!
