7 Generative AI Tools for 3D Asset Creation

echo3D
Apr 17, 2023 · 7 min read


Credit: freepik / macrovector

Artificial intelligence is showing up across industries, and we are excited about the prospect of 3D asset generation. For game developers and studios, 3D assets are often one of the biggest bottlenecks in development. A single model can cost anywhere from $60 to $1,500 and take 2 to 10 weeks of back and forth to produce. High-fidelity 3D models are notoriously expensive, both to commission and to work with technically.

With the assistance of generative AI and platforms like echo3D, both costs can be brought down. Because AI can generate 3D assets at an astoundingly rapid pace, storing and delivering those assets becomes a growing need. 3D asset management platforms like echo3D can help with 3D asset warehousing and content delivery.
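To make that workflow concrete, here is a minimal sketch of a generate-then-store pipeline: an AI-generated model file is pushed into an echo3D project so it can be streamed to apps. The endpoint and form-field names below are assumptions for illustration only; check the echo3D API documentation for the exact upload parameters your account expects.

```python
import requests

# Hypothetical example: push an AI-generated model into an echo3D project.
# NOTE: the endpoint and form-field names are assumptions for illustration;
# consult the echo3D API docs for the exact upload parameters.
ECHO3D_API_KEY = "your-project-api-key"
UPLOAD_URL = "https://api.echo3D.co/upload"  # assumed endpoint

def upload_generated_model(path: str) -> dict:
    """Upload a locally generated .glb/.obj file so it can be streamed to apps."""
    with open(path, "rb") as model_file:
        response = requests.post(
            UPLOAD_URL,
            data={"key": ECHO3D_API_KEY, "type": "upload"},
            files={"file_model": model_file},
            timeout=60,
        )
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    result = upload_generated_model("ai_generated_chair.glb")  # hypothetical file
    print(result)
```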

New AI tools for 3D are popping up every day, so here are some of the ones on our radar right now.

Try out Luma AI to create a ping pong table like this one from an iPhone scan. See it in AR!

1. GET3D by NVIDIA

This generative AI was trained using only simple 2D images, yet it can generate 3D shapes with high-fidelity textures and robust geometric detail. Shapes are generated in common formats, so models can be exported and used immediately. GET3D can generate a wide range of 3D objects, whether it's a building, a vehicle, or a character.

Their stance is that generative AI can compensate for the lack of detail in 3D environments that takes away from a scene's believability. With AI, small details that would take a team extensive resources to create can be produced in minutes, for example spawning non-repetitive vehicles or filling large crowds with varied characters that behave believably. A toy illustration of that kind of randomized spawning is sketched below.
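As a purely hypothetical, engine-agnostic illustration of that idea (the asset file names and the Transform class are invented for this example), the snippet below draws from a pool of generated assets and places each instance with a randomized transform so no two spawns look identical.

```python
import random
from dataclasses import dataclass

# Hypothetical pool of AI-generated assets (e.g. meshes exported by GET3D).
GENERATED_VEHICLES = ["car_001.glb", "car_002.glb", "truck_001.glb", "van_003.glb"]

@dataclass
class Transform:
    x: float
    y: float
    rotation_deg: float
    scale: float

def spawn_crowd(count: int, area: float = 100.0) -> list[tuple[str, Transform]]:
    """Pick varied assets and randomize their placement so repeats are less obvious."""
    spawns = []
    for _ in range(count):
        asset = random.choice(GENERATED_VEHICLES)
        transform = Transform(
            x=random.uniform(-area, area),
            y=random.uniform(-area, area),
            rotation_deg=random.uniform(0.0, 360.0),
            scale=random.uniform(0.9, 1.1),
        )
        spawns.append((asset, transform))
    return spawns

print(spawn_crowd(5))
```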

“GET3D brings us a step closer to democratizing AI-powered 3D content creation,” said Sanja Fidler, VP of AI research at NVIDIA and manager of the Toronto-based AI lab that created the tool. “Its ability to instantly generate textured 3D shapes could be a game-changer for developers, helping them rapidly populate virtual worlds with varied and interesting objects.”

GET3D is open source and available on GitHub here.

2. 3DFY.ai

3DFY is a text-to-3D generator that can produce high-quality 3D models. There's also an option to use an image-to-3D generator in their interface (once it's available). Assets are produced at multiple levels of detail (LOD) with high-quality UV mapping to suit whatever your project scope is.

Their process is pre-processing, then analysis, then synthesis. On the backend, the text or image input is standardized and cleaned up. For text, definition-less tokens are removed and the input is converted into more machine-readable language. Images are cropped and the object of interest is isolated by removing the backdrop. The data is then converted into object code, and finally a 3D asset is generated from that object code.
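As a rough, hypothetical illustration of the text pre-processing step described above (not 3DFY's actual code; the stop-word list and normalization rules are invented), the snippet below strips low-information tokens and normalizes a prompt before it would be handed to an analysis stage.

```python
import re

# Hypothetical illustration of prompt clean-up before analysis/synthesis.
LOW_INFORMATION_TOKENS = {"a", "an", "the", "of", "please", "some", "very"}

def preprocess_prompt(prompt: str) -> list[str]:
    """Lowercase, strip punctuation, and drop tokens that add no shape information."""
    cleaned = re.sub(r"[^a-z0-9\s]", " ", prompt.lower())
    tokens = cleaned.split()
    return [tok for tok in tokens if tok not in LOW_INFORMATION_TOKENS]

print(preprocess_prompt("Please make a very ornate wooden chair!"))
# ['make', 'ornate', 'wooden', 'chair']
```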

You can view some of their 3D models here. Sign up for the waitlist here.

3. Sloyd.ai

Sloyd is made specifically for gaming. It's a quick online tool for automatic 3D asset creation. Each model is UV-unwrapped and optimized for real-time use, so it can integrate directly into your project. The Sloyd SDK can be used for real-time 3D asset creation in different environments, and users can generate procedural games or simulations based on their own rule sets. Sloyd has a library of generators that can be customized to fit your specific project, giving developers flexibility over when and how to generate 3D assets at runtime.

The Sloyd engine can generate millions of vertices in under 33 ms, whether it runs server-side or client-side. Each asset's level of detail matches its vertex count, and because assets are generated on demand rather than stored, they can help save storage.
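To illustrate what runtime parametric generation looks like in general (this is not the Sloyd SDK; the class and parameter names are invented for the example), a generator takes a template plus a few numeric parameters and returns a mesh on demand instead of loading one from disk.

```python
import math
import random

# Hypothetical illustration of runtime parametric generation; not the Sloyd SDK.
class BarrelGenerator:
    """Builds a simple mesh description from a handful of parameters."""

    def generate(self, height: float, radius: float, segments: int = 16) -> dict:
        vertices = []
        for level in (0.0, height):
            for i in range(segments):
                angle = 2.0 * math.pi * i / segments
                vertices.append((radius * math.cos(angle), level, radius * math.sin(angle)))
        return {"vertices": vertices, "uv_unwrapped": True}

# Generate a slightly different barrel each time one is needed at runtime.
generator = BarrelGenerator()
barrel = generator.generate(height=random.uniform(0.8, 1.2), radius=random.uniform(0.3, 0.5))
print(len(barrel["vertices"]), "vertices")
```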

Join the Discord to learn more, or try their web app (in beta) or their SDK.

Credit: Sloyd

4. Gepetto.ai

Their philosophy is a little different: they're more interested in building a new genre of AI-centered games and movies than in optimizing standard production of 3D assets. Their signature feature, Collodi, can “build a holodeck-like experience for gamers.”

They're also creating filmmaking tools that can automate animation, deepfakes, rotoscoping, VFX, and special effects. For gaming, they're developing AI tools for infinite sandbox gameplay, AI RPGs, complex dialogue mechanics, and game worlds generated from gameplay decisions. The effort a team would otherwise need to build each level by hand is dramatically reduced by generative AI.

There’s a waitlist available for the Collodi 3D AI Model and their AI here.

Credit: Gepetto

5. Luma AI

Luma AI's Imagine 3D tool lets you enter a text prompt to generate a fully solid 3D model with a full-color texture. Luma AI is said to produce higher-quality 3D assets than some of its competitors because it uses real-time imaging for reference. What's unique about Luma AI is that it works on iOS devices, so users can generate 3D assets from real-world environments they already know.

Users can use their own 2D images to generate 3D models and can edit animations and other details in the web app. This provides a quick pathway to creative uses for AR. Check out the video below to see how they created a portal through a real door using a hardware trigger.

Their app is free on the App Store.

6. Masterpiece Studio

Masterpiece Studio has 3 simple steps: generate, edit and share. They’ve built the “first generative AI” (their words, not ours) to create game-ready 3D assets. The goal is to reach 1 BILLION 3D creatives. That’s a lot of 3D asset management!

Their platform has an entire toolset to help creators generate usable 3D assets. This solves the problem of digging through asset libraries or packs and then fighting with conversion or UV mapping issues. The assets produced in Masterpiece Studio are ready to go.

Sign up for the waitlist here.

7. Google DreamFusion

Google's take on generative AI for 3D models requires no training on 3D data. It also generates 3D models a little differently than the other platforms and is not a go-to tool for game development.

The system uses 2D images of an object, generated by the Imagen text-to-image diffusion model, to understand the object it is trying to build from different perspectives. Google's engineers call this process Score Distillation Sampling (SDS). SDS establishes the basic appearance, and DreamFusion then optimizes the asset, adding regularizers and improving geometry to fill in the model. Once processed, these models have high-quality normals and can be lit like regular 3D models.
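For readers who want the math, the DreamFusion paper defines the Score Distillation Sampling gradient roughly as follows, where x = g(θ) is the image rendered from the 3D (NeRF) parameters θ, x_t is that image with noise ε added at timestep t, ε̂_φ is the frozen diffusion model's noise prediction for text prompt y, and w(t) is a timestep weighting:

```latex
\nabla_{\theta} \mathcal{L}_{\mathrm{SDS}}\bigl(\phi,\, x = g(\theta)\bigr)
  \triangleq \mathbb{E}_{t,\epsilon}\!\left[
    w(t)\,\bigl(\hat{\epsilon}_{\phi}(x_t;\, y,\, t) - \epsilon\bigr)\,
    \frac{\partial x}{\partial \theta}
  \right]
```

Intuitively, the pretrained 2D model scores how plausible each rendered view looks for the prompt, and that score is pushed back through the renderer to update the 3D representation, so no 3D training data is needed.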

Check it out here.

Credit: Google

Honorable Mentions

Stable Diffusion for Blender

This one didn't make the cut because it doesn't technically generate 3D models, but it is worth mentioning: it generates 2D images that can be used for model development and reference material.

Stability AI created a suite of tools as a Blender plugin that works with existing projects and uses text prompts to create new images, textures, and animations. It works similarly to their text-to-image generator, but it's built directly into Blender to fit your existing workflow.

An impressive feature of this suite is the ability to generate animations. Their animation feature isn't perfect, but it's still quite fun, and some people prefer crude animations anyway! The plugin is entirely free and doesn't require any additional software or space to run. To set it up, you'll need an up-to-date version of Blender and an API key from Stability AI.
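The plugin handles the API calls for you, but as a rough sketch of the kind of request it makes under the hood, here is a text-to-image call against Stability AI's REST API that saves the result as a texture. The engine ID and field names follow the v1 API as we understand it and may differ in newer versions, so treat this as an assumption to verify against the current docs.

```python
import base64
import os
import requests

# Sketch of a Stability AI text-to-image request (v1-style REST API, assumed).
API_KEY = os.environ["STABILITY_API_KEY"]
ENGINE_ID = "stable-diffusion-v1-5"  # assumption: pick an engine your key can access
URL = f"https://api.stability.ai/v1/generation/{ENGINE_ID}/text-to-image"

response = requests.post(
    URL,
    headers={"Authorization": f"Bearer {API_KEY}", "Accept": "application/json"},
    json={
        "text_prompts": [{"text": "seamless rusty metal texture, top-down"}],
        "width": 512,
        "height": 512,
        "samples": 1,
    },
    timeout=120,
)
response.raise_for_status()

# Save the first returned image so it can be loaded as a texture in Blender.
artifact = response.json()["artifacts"][0]
with open("rusty_metal.png", "wb") as f:
    f.write(base64.b64decode(artifact["base64"]))
```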

They have a comprehensive list of tutorials showcasing each feature.

Point-E by OpenAI

Point-E doesn't generate 3D models in the traditional sense. Instead of producing 3D-ready meshes, it generates point clouds: sets of data points in space that represent a 3D shape. Point-E is made of two models, a text-to-image model and an image-to-3D model. To produce a 3D object from a text prompt, Point-E first samples an image using the text-to-image model, then converts that image into a point cloud.

Their team acknowledges the product's limitations but believes it has a role in generative AI: “While our method still falls short of the state-of-the-art in terms of sample quality, it is one to two orders of magnitude faster to sample from, offering a practical trade-off for some use cases,” according to this AI Business article.

The OpenAI research team released the point cloud diffusion models and evaluation code on GitHub here.
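For those who want to try it, the sketch below is condensed from the repository's text-to-point-cloud example. The module paths, checkpoint names, and sampler arguments are taken from the repo as it existed at the time of writing and may change, so treat this as a starting point rather than a verbatim recipe.

```python
import torch
from point_e.diffusion.configs import DIFFUSION_CONFIGS, diffusion_from_config
from point_e.diffusion.sampler import PointCloudSampler
from point_e.models.configs import MODEL_CONFIGS, model_from_config
from point_e.models.download import load_checkpoint

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Stage 1: text-conditioned base model that produces a coarse point cloud.
base_name = "base40M-textvec"
base_model = model_from_config(MODEL_CONFIGS[base_name], device)
base_model.eval()
base_model.load_state_dict(load_checkpoint(base_name, device))
base_diffusion = diffusion_from_config(DIFFUSION_CONFIGS[base_name])

# Stage 2: upsampler that densifies the coarse cloud.
upsampler_model = model_from_config(MODEL_CONFIGS["upsample"], device)
upsampler_model.eval()
upsampler_model.load_state_dict(load_checkpoint("upsample", device))
upsampler_diffusion = diffusion_from_config(DIFFUSION_CONFIGS["upsample"])

sampler = PointCloudSampler(
    device=device,
    models=[base_model, upsampler_model],
    diffusions=[base_diffusion, upsampler_diffusion],
    num_points=[1024, 4096 - 1024],
    aux_channels=["R", "G", "B"],
    guidance_scale=[3.0, 0.0],
    model_kwargs_key_filter=("texts", ""),  # don't pass the prompt to the upsampler
)

# Generate a colored point cloud from a text prompt.
samples = None
for x in sampler.sample_batch_progressive(batch_size=1, model_kwargs=dict(texts=["a red motorcycle"])):
    samples = x
point_cloud = sampler.output_to_point_clouds(samples)[0]
print(point_cloud.coords.shape)  # roughly (4096, 3)
```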

echo3D (www.echo3D.com; Techstars '19) is a cloud platform for 3D asset management that provides tools and cloud infrastructure to help developers quickly build and deploy 3D/AR/VR games, apps, and content.
