Lightmapping on the server

Lightmapping in the SceneVR editor

There is a great subreddit called lowpoly, where people post amazing images that look very simple, but somehow realistic. There’s also the wonderful art style of Timothy J. Reynolds, which is low in polygon count, but feels like a real place that you could actually visit.

I believe that this kind of art lends itself well to creating the metaverse. Some advantages:

  • Fewer triangles to render
  • Fewer triangles to test physics against (no need for a decimated collision mesh, just reuse the visual geometry)
  • Fewer textures to load into memory (if you only use one colour per face, you can get by with the lightmap alone, which is one big 2k by 2k texture)
  • The lighting can be calculated by a global illumination renderer with little user direction
  • It doesn’t require extensive texturing from the artist
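The lightmap point is worth spelling out. With one flat colour per face, shading collapses to multiplying that colour by the baked light intensity sampled from the lightmap. A minimal sketch of that idea (the function name and values here are illustrative, not SceneVR's actual code):

```python
def shade(face_color, lightmap_sample):
    """Modulate a flat face colour (0-1 RGB) by the baked light sample."""
    return tuple(c * l for c, l in zip(face_color, lightmap_sample))

# A mid-grey wall lit at half intensity comes out at quarter brightness:
print(shade((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)))  # (0.25, 0.25, 0.25)
```

Because the face colour is a single constant, the lightmap is the only texture the renderer ever has to sample.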

The last point is an interesting one. Using SketchUp, Maya or MagicaVoxel to construct geometry is a skill that many people can pick up. I mean, even Minecraft lets people construct geometry. But fewer people learn how to texture effectively. Creating efficient geometry is difficult, but using textures accurately is a whole other area of study. If people just need to apply flat colours to their models, it reduces the amount of work needed to get from an idea in their mind to a virtual space they can walk around.

Job Simulator: fewer textures, lots of colour-per-face materials

The problem with scenes that have per-face lighting is that they look atrocious without some nicely calculated global illumination. For example, take this SketchUp model I made in a few minutes. You can't even really tell what you are looking at.

This is a prototype of a model of a mausoleum

That is the simple Collada (.dae) model loaded into the SceneVR editor. Here is what it looks like after you position the model where you want it and press the bake button.

Bake a cake! (Actually — just calculate the lighting for the scene)
The same model about 30 seconds later

Instantly you can tell the inside from the outside of the model, see the sunlight casting rays in past the pillars, and get a sense of how tall and deep the model is.

You also get a rendering that looks a bit like an architectural mockup: one of those little paper models that architecture interns build to show what a new subdivision is going to look like.

Anyway, lightmap baking in the SceneVR editor isn't 100% finished yet, but I'm hoping this feature will be ready in a week or two, so everyone can start playing with it and building little scenes that look realistic with less work.

ps: This feature came out because I was going to write a Medium post about how to use lightmap baking in Blender, but to be honest, teaching someone to use Blender is such a complex topic that it was easier just to write a script that controlled Blender and baked scenes directly in the SceneVR editor. 😂
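For the curious, driving Blender from a script like that generally means running it headless with a Python script, passing the model and output paths after the `--` separator. This is only a sketch of the shape of such an invocation; the script name and paths are assumptions, not SceneVR's actual code:

```python
import subprocess

def bake_command(model_path, output_path, script="bake_lightmap.py"):
    """Build a headless Blender invocation; arguments after '--' are
    passed through to the Python script rather than parsed by Blender."""
    return [
        "blender", "--background", "--python", script,
        "--", model_path, output_path,
    ]

# e.g. subprocess.run(bake_command("mausoleum.dae", "lightmap.png"), check=True)
```

The hypothetical `bake_lightmap.py` would then use Blender's `bpy` API to import the .dae, unwrap it, bake the global illumination to a texture, and save the result.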