The Metaverse and Artificial Intelligence
The metaverse will be enabled, populated, and supported by artificial intelligence (AI). AI will drive all seven technology layers of the metaverse: powering spatial computing, providing scaffolding for creators, and supplying new and sophisticated forms of storytelling. This article will give you a taste of some of these applications, and where we’ll see them soonest.
Few people realize how quickly AI is advancing. Let’s take a look at the exponential growth of transformers, a type of deep-learning neural network that allows machines to work with natural language:
The original Generative Pre-trained Transformer (GPT) worked with 110 million parameters; the newest Google Brain transformer exceeds 1 trillion parameters. GPT-4 is expected to have even more. This is a staggering increase in the size of these neural networks over a relatively brief period of time.
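To put that growth in perspective, here is a quick back-of-the-envelope calculation. The 110 million figure is from the article; the GPT-3 count (175 billion) and the 1.6 trillion figure for Google Brain’s Switch Transformer are published numbers I’m adding for scale, not claims made in the article itself:

```python
# Rough scale of the parameter-count jump described above.
gpt_1_params = 110e6       # original GPT, per the article
gpt_3_params = 175e9       # GPT-3 (2020), published figure
switch_params = 1.6e12     # Switch Transformer (2021), published figure

print(f"GPT -> GPT-3:  {gpt_3_params / gpt_1_params:,.0f}x")
print(f"GPT -> Switch: {switch_params / gpt_1_params:,.0f}x")
```

A roughly 14,000-fold increase in three years is what “exponential growth” looks like in practice.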
Before the creation of these advanced neural networks, AI had already made impressive strides: the voice recognition behind Alexa, machine vision (such as the autonomous driving systems in Tesla vehicles, or Google image recognition), and the social media algorithms that seem to surface whatever provokes a reaction from us. We’ve been amused and alarmed by deepfakes… And all of these applications will seem very basic compared to the future of AI.
Let’s start by exploring some of the more interesting applications of AI to the metaverse.
If you do nothing else after reading this article, try out AI Dungeon. It shows you what happens when you make an automated Dungeon Master with GPT-3:
An entire category of games — the roguelike — is built around computer-generated dungeons to explore. What happens when neural networks like the one powering AI Dungeon make it possible to have an endless supply of richer quests, characters and storylines?
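For readers unfamiliar with the genre, classic roguelikes generate dungeons with simple procedural rules rather than neural networks. Here is a minimal, illustrative sketch of that traditional baseline — random rooms connected by L-shaped corridors — the kind of content generation that language models would layer quests and characters on top of:

```python
import random

def generate_dungeon(width=40, height=20, rooms=5, seed=None):
    """Classic roguelike approach: carve random rooms into a wall grid,
    then connect successive room centers with L-shaped corridors."""
    rng = random.Random(seed)
    grid = [["#"] * width for _ in range(height)]
    centers = []
    for _ in range(rooms):
        w, h = rng.randint(4, 8), rng.randint(3, 5)
        x, y = rng.randint(1, width - w - 1), rng.randint(1, height - h - 1)
        for r in range(y, y + h):          # carve the room interior
            for c in range(x, x + w):
                grid[r][c] = "."
        centers.append((x + w // 2, y + h // 2))
    for (x1, y1), (x2, y2) in zip(centers, centers[1:]):
        for c in range(min(x1, x2), max(x1, x2) + 1):  # horizontal leg
            grid[y1][c] = "."
        for r in range(min(y1, y2), max(y1, y2) + 1):  # vertical leg
            grid[r][x2] = "."
    return "\n".join("".join(row) for row in grid)

print(generate_dungeon(seed=42))
```

Every run with a new seed yields a fresh, playable layout — but the rooms have no history, the corridors no meaning. That gap between structure and story is exactly what GPT-style models promise to fill.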
AI as Creative Partner
When will an AI match the nuanced, textured storytelling of a human being?
Some people say it will never happen. I think that’s wishful thinking — there’s no reason to believe machines won’t be able to match or even surpass us. But in the meantime, AI can be a powerful creative partner.
This is already happening. For example, Promethean AI is a speech-activated creative partner for 3D spaces:
This leads to an iterative cycle in which the designer describes what they want, selects the best “ideas” and edits down to the experience they desire. You can imagine the same thing applied to music, art, storytelling and worldbuilding.
Next-generation AI Characters
Epic’s MetaHumans project, which just entered early access in April 2021, aims to reduce the time to create photorealistic characters from months down to minutes. Beyond the shape of the character, it also brings them to life with realistic movements and acting:
Wizard Engine, a product created by Fable, focuses more on the interactive storytelling, emotional, and communication aspects of the experience. After watching this GPT-3-powered conversation with a virtual being named Lucy, you can imagine how this will change the future of non-player characters (NPCs):
There’s even a company (Authentic Artists) that creates virtual musicians that perform generative music.
Computers are getting better at gesture recognition, which will enable us to interact with them more naturally — and will eventually allow machines to interpret and understand emotion and body language.
Eye tracking is another important aspect of an immersive virtual reality interface. The photoreceptors in your eye are densest in a region called the fovea: that’s where your highest-resolution perception exists, and everything else is peripheral vision. Virtual reality therefore needs to render the most detail exactly where your eye is focused. AI is being used to predict where your eye will look next, even through blinks, so the best rendering can be prepared in advance. This is important for delivering the most immersive experience, and it will be essential for next-generation technologies such as holographic light fields, which require an exponential increase in graphics processing.
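The core idea of foveated rendering can be sketched in a few lines. This is a toy illustration only — the falloff curve, field-of-view constant, and parameter values are my own assumptions for demonstration, not any headset vendor’s actual perceptual model:

```python
import math

def render_scale(pixel, gaze, fovea_deg=5.0, px_per_deg=20.0, floor=0.1):
    """Toy foveated-rendering falloff: full resolution inside the foveal
    region around the gaze point, then quality drops with angular
    distance, never falling below a minimum floor in the far periphery."""
    dx, dy = pixel[0] - gaze[0], pixel[1] - gaze[1]
    eccentricity_deg = math.hypot(dx, dy) / px_per_deg
    if eccentricity_deg <= fovea_deg:
        return 1.0
    # Inverse falloff beyond the fovea (illustrative, not perceptual science).
    return max(floor, fovea_deg / eccentricity_deg)

gaze = (960, 540)  # assumed gaze point on a 1920x1080 frame
print(render_scale((960, 540), gaze))   # inside the fovea: full quality
print(render_scale((1920, 540), gaze))  # far periphery: much lower
```

The payoff is that only a small patch of the frame needs full-resolution shading — which is why predicting the gaze point ahead of time, as described above, is so valuable.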
The ultimate interface may be a neural one: if so, AI will make it possible. Everyone’s brain is different, so the role of AI is to learn from and adapt to the uniqueness of each individual. Researchers have trained the Neuralink device to read a monkey’s mind, using an AI to learn and interpret the signals received from hardware implanted in the monkey’s brain:
AI for Chip Design
Designing chips with the compute power needed to support the exponentially increasing demands of the metaverse is getting harder and harder. Artificial intelligence will help solve that problem, too.
Similarly, AI will help optimize manufacturing, network routing, security, materials science and a multitude of other domains necessary to build the future.
Next: the Fusion of Language and Vision
Ilya Sutskever, the chief scientist for OpenAI, had this to say about the next stage of GPT:
In 2021, language models will start to become aware of the visual world. Text alone can express a great deal of information about the world, but it is incomplete, because we live in a visual world as well. The next generation of models will be capable of editing and generating images in response to text input, and hopefully they’ll understand text better because of the many images they’ve seen.
Visual space contains complex information that can inform the way words and narratives ought to be created — this will lead to even more realistic AI storytelling, creative partnership and machine understanding. The potential for application to the metaverse is mind-blowing.
As we move towards a future with further exponential increases in the number of machine-learning parameters, what additional applications excite you? Share your thoughts in the comments!
If you enjoyed this article, you might want to read a bit more about the technologies and layers of the metaverse: