by CreamVR and Sheridan College’s Screen Industries Research and Training Centre (SIRT).
They are developing an avatar creation process and technology that allows real people to be inserted into a VR environment with high fidelity and minimal technical overhead. While CreamVR and its research partner SIRT are still fine-tuning this, we are very encouraged by the following early results.
One of the great challenges in presenting photorealistic 3D models in a game engine environment is the limitation of hardware rendering. When we talk about 3D models, we are talking about objects: an object represented by surfaces, vertices, and polygons, known as a mesh.
The simplest 3D model, a single plane (a square), is made up of a single polygon. A cube is made up of six polygons. As objects become more and more detailed, they require more polygons. The typical technique used by game production studios is to create or sculpt a ‘high poly’ model of an object and then output high-detail textures from it. Once the ‘high poly’ detail has been turned into textures, the size and computational demand of rendering are reduced, and the game can run on consumer-level gaming consoles.
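To make the trade-off concrete, here is a small illustrative sketch (our own, not from the article): each level of subdivision quadruples a plane’s polygon count, which is why detail is baked into fixed-size textures instead of kept as geometry. The function name and subdivision levels are hypothetical, chosen for the example.

```python
# Illustrative sketch: polygon counts grow quadratically with subdivision,
# which motivates baking high-poly detail into textures.

def quad_count(subdivisions: int) -> int:
    """Number of quad polygons in a plane subdivided into an n x n grid."""
    return subdivisions * subdivisions

# The simplest cases from the text: one polygon per plane, six per cube.
plane_polys = quad_count(1)      # 1
cube_polys = 6 * quad_count(1)   # 6

# Each doubling of subdivision quadruples the polygon count.
for n in (1, 2, 4, 8, 16):
    print(f"{n}x{n} plane: {quad_count(n)} polygons")
```

A baked texture, by contrast, has a fixed memory and rendering cost regardless of how much sculpted detail it encodes, which is what makes the high-poly-to-texture workflow viable on consoles.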
Things get a little trickier when we’re talking about a moving character with facial animation. When you start reducing polygons for efficient playback, you begin to slip deeper into the uncanny valley. The novel approach we are exploring is a method of applying a video-captured facial performance directly to the model’s face.
The processing and performance demands in VR are much higher than in regular single-screen 2D games. It is necessary to find ways of conveying the impression of great amounts of detail (visual information) while actually keeping the work on the CPU and GPU to a minimum.
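Some rough arithmetic (our own illustration, using typical refresh rates rather than figures from the article) shows why the VR budget is so tight: a 90 Hz headset leaves far less time per frame than a 60 fps flat-screen game, and the scene must be rendered once per eye.

```python
# Rough frame-budget arithmetic: time available per frame at a given
# refresh rate, and the effective budget per eye in stereo VR.

def frame_budget_ms(refresh_hz: float) -> float:
    """Milliseconds available to render one frame at the given refresh rate."""
    return 1000.0 / refresh_hz

flat_60 = frame_budget_ms(60)   # ~16.7 ms per frame for a 60 fps flat game
vr_90 = frame_budget_ms(90)     # ~11.1 ms per frame for a 90 Hz headset

# VR renders the scene twice, once per eye, so the budget per rendered
# view is roughly halved again (ignoring engine optimizations such as
# single-pass stereo rendering).
per_eye = vr_90 / 2             # ~5.6 ms per eye

print(f"60 Hz flat game: {flat_60:.1f} ms/frame")
print(f"90 Hz VR: {vr_90:.1f} ms/frame, ~{per_eye:.1f} ms per eye")
```

Under these assumptions, a VR title has roughly a third of the per-view rendering time of a 60 fps flat game, which is why faking detail cheaply matters so much.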