Incubator Research Labs: Autonomous Vehicle Avatar
What becomes possible in a world of autonomous cars, where, sans drivers, everyone is a passenger? Where attention can be cleaved from task-based driving and roads take on the mechanics of a hive mind? This question has driven the car industry to research new concepts for vehicle interior experiences and displays. One company* approached the Gray Area team, hoping to bring an artistic perspective to their design process. They were looking to develop an avatar that lives on the car dashboard and complements this new type of automobile experience. The avatar they envisioned would transform and animate in response to the passengers’ moods, music playlists, and driving data.
Gray Area staff members Josette Melchor and Chelley Sherman offered to work on the project along with 3D artist and developer, Ray McClure. Ray’s interactive studio, Dreamboat, studies multi-user VR meditation and generative soundscapes. Ray is also a VR mentor for the Gray Area Cultural Incubator program.
Gray Area is using the development of this project as a model for collaboration between our digital arts community and corporations seeking more creative solutions in their product development. We wanted to share our process as a resource for other artists and companies looking to embark on a similar partnership.
Our collaborator desired an organic look and supplied examples of an artist mixing paint as a form of performance or video art. Over the course of 3 months our teams met weekly to develop a digital experience that blends the artistic references with the shape-shifting requirements.
The primary challenge of the project was to develop a user-driven system for changing the avatar’s shape and animation style. We tried different approaches, including a blobby liquid form, before settling on a solution involving skinned blend shapes, which enabled a wide variety of interesting possibilities. We set up a series of components that could individually change form and arrange themselves in different configurations. This allowed the avatar to move from an organic animated 3D form to an abstract terrain that displays information or moving images. In 3D modeling software we created basic strips of geometry that could blend into pipes, cones, and cloth. These objects were bound to a simple skeleton for articulation.
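The core of a blend-shape (morph-target) system like the one described above is simple vertex math: each target stores per-vertex offsets from the base mesh, and the displayed mesh is the base plus a weighted sum of those offsets. The project itself was built in Unreal Engine, but the idea can be sketched in plain C++ (struct names and the `BlendShapes` helper below are illustrative, not from the actual project):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// One vertex position.
struct Vec3 { float x, y, z; };

// A morph target (blend shape) stores a per-vertex offset from the base mesh.
struct MorphTarget {
    std::vector<Vec3> deltas; // same length as the base mesh
};

// Blend the base mesh with weighted morph targets:
//   result[i] = base[i] + sum over k of weights[k] * targets[k].deltas[i]
// A weight of 0 leaves the base shape untouched; 1 applies the full target.
std::vector<Vec3> BlendShapes(const std::vector<Vec3>& base,
                              const std::vector<MorphTarget>& targets,
                              const std::vector<float>& weights) {
    std::vector<Vec3> out = base;
    for (size_t k = 0; k < targets.size(); ++k) {
        const float w = weights[k];
        for (size_t i = 0; i < out.size(); ++i) {
            out[i].x += w * targets[k].deltas[i].x;
            out[i].y += w * targets[k].deltas[i].y;
            out[i].z += w * targets[k].deltas[i].z;
        }
    }
    return out;
}
```

Because the result is a continuous function of the weights, intermediate weight values give the in-between forms (part strip, part pipe, part cloth) that made the avatar feel like one shape-shifting object rather than a set of swapped models.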
Unreal Engine was selected as the platform to develop the experience. Ray has been using Unreal Engine for the past year to create audio-reactive projection art and VR experiences. Ray felt Unreal’s 3D editor and visual programming language were ideal for rapid prototyping and transfer of knowledge upon completion, though it was chosen primarily for its beautiful lighting and environmental effects. Unreal’s virtual reality features allowed our collaborator to experience and review the ongoing progress in an exciting and immersive way.
Once the assets were imported and set up in the Unreal editor, we instanced 30 of the shape-shifting objects and explored different arrangements and mixes between the blend shapes, from 2D planes to undulating tentacles to balls of crumpled paper and cloth. Using Unreal’s layered materials, we could combine different looks and material properties, including video projected onto the surface of the avatar. The audio and driving data were represented in different ways for each form, such as glowing emissive tentacles that danced to the music while swaying with the driving direction, and wireframe relief maps that bounce. Finally, each form was mapped to a keyboard key, allowing viewers to morph between states. A GUI overlay was set up for controlling the influence of driving inertia and mixing textures and materials.
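Morphing between keyed states like this usually comes down to easing the current blend weights toward the weights that define the selected form, a little each frame. The project used Unreal's editor and visual scripting for this, but the underlying idea can be sketched in a few lines of plain C++ (the `StepTowardForm` helper and the fixed easing rate are assumptions for illustration):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Ease the current blend weights toward the weights of a target form.
// `rate` (0..1) is the fraction of the remaining distance covered per frame;
// higher values snap faster, lower values give a slower, smoother morph.
void StepTowardForm(std::vector<float>& current,
                    const std::vector<float>& target,
                    float rate) {
    for (size_t i = 0; i < current.size(); ++i) {
        current[i] += (target[i] - current[i]) * rate;
    }
}
```

Pressing a key simply swaps in a new target weight vector; repeated per-frame calls to `StepTowardForm` then carry the avatar smoothly from its current shape to the selected one, and a GUI slider can scale the same weights to mix in driving inertia or material influence.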
The blend shape solution worked out well both in capability and presentation. We continued to create new forms until the day the project was presented, including a standout form in which the avatar resembles a dancing ring of fire.
The result of 3 months of collaboration and development was an interactive experience that allows passengers to select from 8 distinct avatar forms that transition between each other and animate in response to music, driving data, and the color of the car environment.
The future automobile experience that our client is iterating towards can be most effectively translated through VR visualizations. With more time we would like to develop a minimal representation of an autonomous car in 3D so the avatar can be displayed in relation to its intended environment. Therefore, our next step is to prototype this augmented driving experience in a virtual simulation.
*Gray Area is not authorized to disclose our collaborator at this time.