DeepBump for faster 3D graphics

Jacobncyr
3 min read · Jun 30, 2023

I set out to write this article with one thing in mind: make my life easier, and if I could do that for free, even better. One of my biggest challenges in my 3D journey is understanding the advancements in modern graphical systems that utilize Artificial Intelligence — or, as I simply call it, computer vision.

The advancement of chatbots to the current ChatGPT standard has some pretty interesting implications for someone involved in 3D graphics. These tools are essentially how we're going to move mountains of code one day with minimal effort. As a seasoned software engineer, I prefer to do things the copy-paste way if I'm not learning anything new; there's no sense wasting time reinventing the wheel. I selected all the text content on the GitHub repo, pasted it into ChatGPT, and it told me how to install the Blender add-on quite easily.

Setting up the tool was easy. Blender's add-on preferences let you locate the downloaded file on your computer and conveniently install the required dependencies. Keep in mind that much of Blender is scriptable in Python behind the scenes: the entirety of Blender's interface can be controlled directly through the bpy module. With the tool ready and working, I loaded in a flat 2-dimensional picture to run the model on. The initial picture for the model to generate normals from was very basic; here is a picture of it.
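For the curious, here is a minimal sketch of what that setup looks like through bpy itself, assuming you've already downloaded the add-on zip (the path and module name below are placeholders; the module name may differ between DeepBump releases, and you can always use the Preferences UI instead):

```python
# Minimal sketch: enabling an add-on through Blender's Python API (bpy).
# Run this from Blender's Python console or the Scripting workspace;
# bpy is only available inside Blender.
import bpy

addon_zip = "/path/to/DeepBump.zip"  # placeholder download location

# Install the add-on from the zip, then enable it by its module name.
bpy.ops.preferences.addon_install(filepath=addon_zip)
bpy.ops.preferences.addon_enable(module="DeepBump")  # module name may differ

# Persist the preference change so the add-on stays enabled next launch.
bpy.ops.wm.save_userpref()
```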

At the moment, the picture is just a 2-dimensional image with no depth or height information included. The deep learning models that generate this information are trained on hundreds, maybe thousands, of preprocessed 3D images. The trained model produces normal maps to help with the lighting on the 3D plane. Normal maps are essential to an artist's toolkit for convincing 3D graphics: they store a surface normal, a vector perpendicular to the surface, for every pixel, and the renderer uses those vectors in its per-pixel lighting calculations.
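To make that concrete, here's a tiny NumPy sketch (not part of DeepBump, just an illustration) of what a normal map encodes and how a renderer might use it. Each RGB texel is a unit normal remapped from [-1, 1] into [0, 1], and simple diffuse lighting is just the dot product of that normal with the light direction:

```python
# Illustration only: decode normal-map texels and compute diffuse lighting.
import numpy as np

def decode_normal_map(rgb):
    """Map RGB values in [0, 1] back to normal vectors in [-1, 1]."""
    n = rgb * 2.0 - 1.0
    # Renormalize to guard against quantization error in the image.
    return n / np.linalg.norm(n, axis=-1, keepdims=True)

def lambert_shading(normals, light_dir):
    """Per-pixel diffuse term: dot product of normal and light direction."""
    l = np.asarray(light_dir, dtype=np.float64)
    l = l / np.linalg.norm(l)
    return np.clip(normals @ l, 0.0, 1.0)

# Tiny 1x2 "normal map": one flat texel, one texel tilted toward +x.
rgb = np.array([[[0.5, 0.5, 1.0],     # straight-up normal (0, 0, 1)
                 [0.7, 0.5, 0.9]]])   # tilted normal
normals = decode_normal_map(rgb)
print(lambert_shading(normals, light_dir=(0.0, 0.0, 1.0)))
# The tilted texel receives less light than the flat one,
# which is exactly how a flat plane fakes bumpy surface detail.
```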

Using the intuitive interface, I imported the picture, highlighted in green. The interface provides a single button to generate its normal map, also highlighted in green. In the node editor, I connected the generated normal map to the plane's material, and with minimal effort the shader nodes were ready.
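If you'd rather do that wiring in Python than by dragging noodles around, here is a hedged sketch of the same node setup through bpy. The material name and image path are placeholders, and DeepBump's own operator may already create some of these links for you:

```python
# Sketch: wire a generated normal map into a material via shader nodes.
import bpy

mat = bpy.data.materials.new("BrickMaterial")  # placeholder name
mat.use_nodes = True
nodes = mat.node_tree.nodes
links = mat.node_tree.links

# Image Texture node holding the generated normal map.
tex = nodes.new("ShaderNodeTexImage")
tex.image = bpy.data.images.load("/path/to/bricks_normal.png")  # placeholder
tex.image.colorspace_settings.name = "Non-Color"  # it's data, not color

# Normal Map node converts the texture into shading normals.
normal_map = nodes.new("ShaderNodeNormalMap")
links.new(tex.outputs["Color"], normal_map.inputs["Color"])

# Feed the result into the Principled BSDF's Normal input.
bsdf = nodes["Principled BSDF"]
links.new(normal_map.outputs["Normal"], bsdf.inputs["Normal"])
```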

Lo and behold! The model had seen enough normal maps to generate plausible lighting detail for each individual brick. Normal mapping itself isn't new; it's what's under the hood in modern game engines such as Unreal Engine and Unity 3D. But without this kind of automation, detailing every individual brick in a normal map by hand would take quite a bit of effort. With the help of ChatGPT, I was able to get this software running and save a lot of time in my workflow.

2D image on the left; the same image with the generated normal map applied on the right. Here is an example of the generated normal map in action: notice the individual lighting detail on each brick. The map was generated using the DeepBump add-on from GitHub (https://github.com/HugoTini/DeepBump.git) with a random brick image downloaded from Google Images. Image source: blocks-bricks-brickwall-761142.jpg
