Theta EdgeCloud: Redefining 3D Rendering for GenAI Applications

Theta Labs
Theta Network
Mar 10, 2024


3D rendering is a massive industry, surpassing $3 billion in 2022 and projected to grow 20% annually to more than $35 billion by 2032, according to Global Market Insights. This growth is driven by marketing professionals across multiple industries looking to demonstrate product features in simulated 3D environments. NVIDIA launched a platform addressing this need in January 2023, offering new ways to visualize and simulate industrial discoveries and designs in 3D spaces. More importantly, next-generation text-to-3D and sketch-to-3D generative AI applications will further drive the need for 3D rendering capabilities and require more GPU resources than ever.

These two emerging generative AI categories use deep learning techniques, such as Diffusion Models, Neural Radiance Fields (NeRF), and Gaussian Splatting, to generate 3D objects and scenes from textual input or hand-drawn sketches. The AI model requires multiple 3D rendering cycles during generation, refinement, and optimization as it creates and assembles textured 3D meshes from geometric shapes and enhances their realism, accuracy, and detail. NVIDIA recently presented a research paper at the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) that captures these concepts.

Only Possible on Theta EdgeCloud

The hybrid Theta EdgeCloud architecture, when fully released this year, can uniquely support these 3D rendering tasks in a manner not possible today by harnessing the high-end GPU power of cloud-hosted nodes as well as Theta’s globally distributed edge nodes. As described in the white paper, EdgeCloud will implement innovative compute task scheduling and assignment technologies to intelligently assign and route compute tasks in real time to the most optimal cluster node. In particular, the task assignment algorithm tries to maximize the efficiency score of each on-demand job, however small or large:

efficiency_score = f(job_type, price, latency, availability, computing_capacity, fairness, privacy)
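The white paper does not publicly specify the form of f, but a minimal sketch can make the idea concrete. In the toy scoring function below, all node fields, weights, and normalizations are illustrative assumptions, not Theta's actual algorithm; the privacy factor is omitted for brevity:

```python
from dataclasses import dataclass

@dataclass
class Node:
    """Candidate EdgeCloud node (all fields are illustrative assumptions)."""
    price_per_hour: float   # cost of running the job on this node
    latency_ms: float       # network latency to the job submitter
    availability: float     # fraction of time the node is reachable, 0..1
    tflops: float           # raw computing capacity
    recent_jobs: int        # jobs assigned recently, used as a fairness proxy

def efficiency_score(node: Node, job_type: str) -> float:
    """Toy weighted score: higher is better. The weights are made up
    for illustration only."""
    # Heavier jobs weight capacity more; lighter jobs weight price more.
    w_capacity = 0.6 if job_type == "high_res_render" else 0.2
    w_price = 1.0 - w_capacity
    capacity_term = node.tflops / 100.0            # normalize to roughly 0..1
    price_term = 1.0 / (1.0 + node.price_per_hour)
    latency_term = 1.0 / (1.0 + node.latency_ms / 100.0)
    fairness_term = 1.0 / (1.0 + node.recent_jobs)
    return (w_capacity * capacity_term + w_price * price_term) \
        * node.availability * latency_term * (0.5 + 0.5 * fairness_term)
```

Under these assumed weights, a high-capacity cloud GPU scores highest for heavy rendering work, while a cheap, lightly loaded edge node scores highest for low-resolution work, which is the behavior the scheduler is described as targeting.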

The technique described in the NVIDIA research paper above illustrates how text-to-3D generative rendering tasks could be run efficiently on Theta EdgeCloud. The proposed text-to-3D pipeline can essentially be divided into two stages:

  • Stage 1 performs low-resolution 3D mesh optimization, where repeated sampling and rendering are required. The computational demand of this stage is relatively low, so it can potentially be performed by community shards or Theta edge nodes at a lower cost.
  • Stage 2 involves high-resolution rendering and optimization into the final 3D mesh model with detailed textures, which requires more intensive processing. Thus, this stage can be most effectively executed on high-powered EdgeCloud GPUs.

The EdgeCloud task scheduling and assignment algorithm treats the two stages as different tasks, and uses the equation above to calculate the efficiency score of mapping these tasks to different nodes in the EdgeCloud. Based on the scores calculated, the algorithm determines the optimal assignments of the tasks to optimize both efficiency and overall cost.

Backed by Team Experience and Patents

The Theta team has an extensive history in 3D rendering in the gaming industry. As early as September 2016, Theta Labs pioneered the first-ever VR 360 livestream of a major esports tournament at ESL One, in partnership with NVIDIA, live with 10,000 spectators at the Barclays Center in New York. Running on a cluster of NVIDIA GTX 1080 GPUs, the fastest at the time, the team was able to tap into the Counter-Strike 3D game engine to render the game world in 360 VR in real time, at 4K-per-eye resolution, and live stream it to over 50,000 viewers around the world. This was truly groundbreaking.

[Photo: CEO Mitch Liu, CTO Jieyi Long and the Theta team behind the scenes at ESL One New York, 2016]
“The Future of Watching esports is Live VR. And It’s Here! ESL One New York”, NVIDIA GeForce YouTube

Much of the technology necessary for real-time 360 VR rendering simply didn’t exist at the time, so Mitch, Jieyi, and the Theta team had to invent it. These innovations led to Theta Labs receiving four U.S. patents for 360 rendering and related technologies:

  • Methods and systems for game video recording and virtual reality replay (US9473758B1)
  • Methods and systems for non-concentric spherical projection for multi-resolution view (US9998664B1)
  • Methods and systems for virtual reality streaming and replay of computer video games (US9573062B1)
  • Methods and systems for computer video game streaming, highlight, and replay (US9782678B2)

Together, these patents describe how Theta rendered multiple in-game 2D video streams from a central viewpoint and digitally stitched them into a 360 spherical video that the user can control. The portion of the sphere currently in the viewer’s sight is rendered at high resolution, while areas outside the current view are rendered at lower resolution, optimizing for bandwidth constraints. As the user moves their field of vision, the newly visible regions are rendered in high resolution in real time.
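The view-dependent resolution selection described above can be sketched as a simple tiling decision: each tile of the sphere gets a resolution tier based on its angular distance from the viewer's gaze. The thresholds and tier names below are illustrative assumptions, not values taken from the patents (a real implementation would also consider pitch, not just yaw):

```python
def tile_resolution(view_yaw_deg: float, tile_yaw_deg: float,
                    fov_deg: float = 90.0) -> str:
    """Pick a rendering resolution for a 360-sphere tile based on how
    far it is (in yaw) from the viewer's current gaze direction."""
    # Wrap the angular difference into [0, 180] degrees.
    diff = abs((tile_yaw_deg - view_yaw_deg + 180.0) % 360.0 - 180.0)
    if diff <= fov_deg / 2:
        return "high"      # inside the current field of view
    elif diff <= fov_deg:
        return "medium"    # just outside: likely to come into view next
    return "low"           # behind the viewer: save bandwidth
```

As the viewer turns, re-running this decision per tile promotes newly visible tiles to high resolution, matching the bandwidth-optimizing behavior the patents describe.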

Theta Edge Nodes to Launch 3D Rendering Test Tasks

Fast forward to today: the Theta team is leveraging its experience in 3D 360 rendering as it moves toward the EdgeCloud launch on May 1, with a focus on AI compute jobs. In the next few weeks, Theta edge nodes will be upgraded to support 3D rendering test jobs, rewarded in TFUEL, in preparation for integration into the EdgeCloud platform. In later releases this year, EdgeCloud will enable full 3D rendering and AI applications utilizing 3D pipelines, for example, to process complex 3D visual effects for movies and animations, with the scalability and reliability required by the largest media companies.


