Augmented Reality Games Need Artificial Intelligence
Augmented Reality (AR) environments are those in which digital and physical objects co-exist and can be interacted with in real time. Augmented reality is achieved via headsets or glasses that render digital graphics overlaid on what the user can already see. With the highly anticipated Microsoft HoloLens and Magic Leap headsets, augmented reality and mixed reality are poised to enter the mainstream for the first time. Google Cardboard is already available.
Augmented reality — or more accurately Mixed Reality (MR) — is already being eyed as the next innovative platform for computer games. Mixed reality games differ from traditional games in that digital game content is intermixed with the real-world physical environment.
Microsoft has demoed games in which monsters burst out of walls, Minecraft on coffee tables, and platform games (Young Conker). Magic Leap promises to turn your office or living room into a first person shooter arena.
For Mixed Reality games to work, they need artificial intelligence. Each user will be accessing the game from a unique physical environment with unique physical affordances — the configuration of walls, furniture, coffee cups, etc. Machine vision algorithms are needed to make sense, at least in part, of the player’s physical environment. For example, fiducial markers or objects with recognizable surface patterns can be used to position virtual objects in the physical world. Horizontal and vertical surfaces can be identified so that virtual characters can stand on furniture instead of falling to the floor, or can appear to burst out of walls.
However, what this means is that the game will be different every time it is played in a new place. It is thus incumbent on the player to choose the physical environment that makes for a more enjoyable gameplay experience. A room with more surfaces or different arrangements of furniture could result in more or less enjoyable — or more or less challenging — gameplay.
More and Different Artificial Intelligence
Artificial intelligence can be used to enhance mixed reality games beyond what is offered by machine vision surface detection. In particular, a class of artificial intelligence algorithms called Procedural Content Generation (PCG) can automatically optimize players’ gameplay experience by accounting for the specific configuration of players’ unique physical environments. Procedural content generation is the use of algorithms to automate the production of various aspects of computer games, such as terrain, levels, missions, weapons, and monsters.
Procedural content generation has been used to create maps and levels for computer games such as Super Mario Bros. Maps and levels constrain what the player can do and has to do to progress. A good design makes for an engrossing time. A bad design, well… odds are good that your living room was not designed to be a good Mario game level or first person shooter level. But maybe parts of it are.
Procedural content generation algorithms ask the question: what should be in a game? Simple algorithms randomly produce content. For example, Minecraft randomly creates landscapes. More sophisticated artificial intelligence algorithms factor player preferences and knowledge about aesthetics into the generation process, which matters in games where the player cares whether the content is good or bad.
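The generate-and-test loop described above can be sketched in a few lines of Python. Everything here is illustrative: the tile set, the toy evaluation function, and the assumption that three consecutive gaps are unjumpable are all made up for the example.

```python
import random

# A minimal sketch of search-based procedural level generation:
# generate candidate platformer levels as tile sequences, then keep
# the one an evaluation function scores highest.

TILES = ["ground", "gap", "platform", "enemy", "coin"]

def random_level(length=20, rng=random):
    """Simplest generator: sample tiles independently at random."""
    return [rng.choice(TILES) for _ in range(length)]

def max_run(level, tile):
    """Length of the longest run of a given tile."""
    longest = current = 0
    for t in level:
        current = current + 1 if t == tile else 0
        longest = max(longest, current)
    return longest

def evaluate(level):
    """Toy evaluation: reward variety, penalize unplayable gap runs."""
    variety = len(set(level)) / len(TILES)
    # Assumption: three or more consecutive gaps are unjumpable.
    penalty = 1.0 if max_run(level, "gap") >= 3 else 0.0
    return variety - penalty

def generate_best(candidates=100):
    """Generate-and-test: the simplest search-based PCG loop."""
    return max((random_level() for _ in range(candidates)), key=evaluate)

level = generate_best()
print(level)
```

Real systems replace the random generator with something smarter (evolutionary search, constraint solving, learned models), but almost all of them keep this generator-plus-evaluator shape.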
Procedural content generation algorithms for mixed reality games must analyze the player’s physical environment for aspects that can contribute to or hinder an enjoyable experience and then augment those aspects with digital graphical assets. The flip side to asking what should be in a game is asking what should be left out. Does using constraints to improve an experience seem weird? In fact it is common. We all want James Bond to survive, but we must put him in danger first to feel suspense and subsequent catharsis. We all want to win the game, but the game has to challenge us first.
For example, consider a mixed reality version of Super Mario Bros. Whereas in Microsoft’s Young Conker game, the player can move in any direction and thus select the order that furniture is visited (to better or worse effect), a mixed reality platformer could constrain the specific order the player visits each piece of furniture to control for the length of the gameplay experience, create rhythms (an important part of the platformer genre), or create challenge progressions. See the mockup to the left.
Above is a screenshot of an actual mixed reality game currently under development in the Entertainment Intelligence Lab, loosely based on the popular Lemmings game. The screenshot is from a virtual reality play session, which illustrates how the system scanned the room. The user must escort a line of lemming-like creatures from one surface to a goal on another surface without falling to the floor (the floor is lava). The user can place virtual boxes and jump pads to redirect the line of lemmings. The AI determines the best surface to start on, the best surface to finish the game on, and the best sequence of surfaces to visit in between. Virtual walls are created to prevent the player from visiting surfaces that are not part of the AI-chosen sequence, or from visiting surfaces out of turn.
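One way such an AI director could pick the sequence of surfaces is a small search over the detected surfaces, scored by an evaluation function. The sketch below is a guess at the idea, not the lab's actual system: the surfaces, their coordinates, the scoring criterion, and the maximum bridgeable hop distance are all hypothetical.

```python
import itertools
import math

# Each detected surface: (name, x, y, z) center point in room coordinates.
surfaces = [
    ("couch", 0.0, 0.0, 0.5),
    ("coffee_table", 1.2, 0.3, 0.4),
    ("desk", 3.0, 1.0, 0.75),
    ("bookshelf", 2.0, 2.5, 1.2),
]

MAX_HOP = 2.0  # assumed max distance bridgeable with boxes and jump pads

def hop(a, b):
    """Distance between two surface centers."""
    return math.dist(a[1:], b[1:])

def route_score(route):
    """Toy evaluation: reward longer routes, reject unbridgeable hops."""
    hops = [hop(a, b) for a, b in zip(route, route[1:])]
    if any(h > MAX_HOP for h in hops):
        return float("-inf")
    return sum(hops)

def best_route(surfaces):
    """Brute force over orderings -- fine for a handful of surfaces."""
    return max(itertools.permutations(surfaces), key=route_score)

route = best_route(surfaces)
print(" -> ".join(s[0] for s in route))
```

With more surfaces, brute force would give way to heuristic search, but the structure is the same: enumerate candidate sequences, score each, and build virtual walls around the winner.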
What makes a good MR Lemmings level? This is still a topic of research. Procedural content generation requires an evaluation function, which basically boils a design down to a single number indicating its “goodness”. The higher the number the better the AI thinks the level will be for the player. Mixed reality games provide a new dimension for evaluation functions to consider: the way that the user moves through the physical environment when interacting with virtual assets. Consider the way the player must move through a room to play MR Lemmings… A route can require more or less movement from the player to view the lemmings from the best angle. The figure below shows two possible routes for the lemmings and the likely corresponding movements of the player. An evaluation function can reward the AI for producing levels that require more or less physical movement from the player. This can be very important if the user has limited mobility or if physical fitness is a motivation.
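A movement-aware evaluation function along these lines might estimate the walking path the player needs in order to follow a candidate route, then score the route by how close that distance is to a target. This is a sketch only: the viewing distance and movement target are invented parameters that a real system would tune per player.

```python
import math

VIEW_DIST = 1.5  # assumed comfortable viewing distance (meters)

def player_path(route):
    """Assume the player stands VIEW_DIST back from each surface center.

    route is a list of (x, y, height) surface centers; the player's
    path is approximated on the floor plane.
    """
    return [(x - VIEW_DIST, y) for x, y, _ in route]

def walk_length(points):
    """Total walking distance along a sequence of floor positions."""
    return sum(math.dist(a, b) for a, b in zip(points, points[1:]))

def movement_score(route, target=6.0):
    """Peak score when required walking matches the target distance.

    Lower the target for players with limited mobility; raise it
    when physical fitness is the motivation.
    """
    return -abs(walk_length(player_path(route)) - target)

# Two candidate routes: (x, y, height) centers of surfaces to visit.
short_route = [(0, 0, 0.5), (1, 0, 0.4), (2, 0, 0.75)]
long_route = [(0, 0, 0.5), (3, 0, 0.4), (0, 3, 0.75)]

print(movement_score(short_route), movement_score(long_route))
```

Plugged into the generator's evaluation function, a term like this lets the same AI produce sedentary levels for one player and workout levels for another from the same room scan.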
Artificial intelligence can also be used to recognize objects in the player’s physical environment. Recognizing couches and chairs and inferring that they can be bouncy can provide novel gameplay opportunities. Recognizing that the player is in a bedroom vs. a living room vs. an office can enable an intelligent PCG system to create thematically appropriate enemies (e.g., pillow monsters vs. killer paperclips). Maybe the AI could decide that coffee cups provide power-ups and route the player toward them, or limit access to them via a virtual wall.
Because the game plays out with virtual assets overlaid on the real world, artificial intelligence is needed to recognize and respond to changes in the real world. One cannot assume a static environment that can only be changed by the in-game actions of the player. Can the player cheat by moving a piece of furniture? Can a young child knock something over, blocking the player avatar from progressing?
A Dialogue between Player and Furniture?
It is well understood that one of the advantages of procedural content generation is unlimited replay value. In many games, the replay value decreases after each play because the game does not change and players become bored. Procedural design of new levels can provide new experiences to the player. Those new experiences can be tailored to the user’s skill level to create more or less difficult challenges. It would be trivial to design an AI procedural content generation system for mixed reality games that tried to avoid repeating the exact same level.
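Such a repetition-avoidance mechanism really is simple to sketch: keep a history of generated levels and reject candidates that look too much like a recent one. The overlap-based similarity measure below is purely illustrative.

```python
def similarity(a, b):
    """Fraction of positions where two level descriptions agree."""
    return sum(x == y for x, y in zip(a, b)) / max(len(a), len(b))

def is_fresh(candidate, history, threshold=0.8):
    """Accept a candidate only if it differs enough from every past level."""
    return all(similarity(candidate, past) < threshold for past in history)

history = [["couch", "coffee_table", "desk"]]
print(is_fresh(["couch", "coffee_table", "desk"], history))  # prints False
```

In practice the generator would simply keep drawing candidates until one passes the freshness check, then append it to the history.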
In virtual games, the player is entirely at the whim of the AI procedural content generator. In mixed reality games, the procedural content generator must factor in the furniture configuration in the player’s room. The player thus has the opportunity to influence the AI by reconfiguring the furniture in his or her environment. Does the player start to think about how the furniture creates different game play experiences? Does the player start to reconfigure furniture to influence the AI? To make it easier or harder for the AI to generate levels?
Summing Up: Game AI is Game Design
Artificial intelligence is not foreign to computer games. Even the earliest computer games such as Pac-Man used simple forms of artificial intelligence. Those ghosts were intelligent? I’m using the broadest possible definition of “artificial intelligence”, which includes the creation of the illusion of intelligence by whatever means suffice. From that point on, almost all games have used some form of artificial intelligence to control opponents and non-player characters. In most cases the best solution to creating the illusion of intelligence is not the most sophisticated algorithm, but the one that runs in real time and provides the designer a modicum of control over what the opponents and non-player characters do in response to player actions. Sometimes a finite state machine is all that is needed; after all, a good finite state machine can make a great game work. And the end goal, above all else, is creating a “fun” experience for the player.
Mixed reality games will use the more traditional forms of illusion-creating artificial intelligence algorithms, but will also require cutting-edge algorithms for machine vision and for reasoning about the affordances of the real world. Without AI procedural content generation, mixed reality games may never progress beyond the stage of demonstrations into a genre that people want to bring into their homes.