AI Basics for Game Designers: From Philosophy to Markov Chains. Modeling probability and randomness.

Mateusz Dzikowski
6 min read · Jan 19, 2024


Game designers continually seek tools and methods that can enhance their games’ mechanics, story progression, and decision-making. While neural networks and AI have gained significant traction, mathematical models like graphs and Markov Chains offer unique advantages for designing dynamic, responsive, and unpredictable game environments.

In this article, we’ll journey from understanding a couple of fundamental concepts to implementing Markov Chains, allowing game designers to harness the power of probability and statistics in game design.

Ancient Philosophical Puzzles and Modern Stochastics

Andrey Markov, a Russian mathematician, introduced Markov chains in the early 20th century. Before Markov's work, the explicit concept of a Markov chain as we understand it today did not exist. Even so, antiquity holds numerous examples of preliminary reflections on randomness, laying some groundwork for future explorations in the realm of probability.

“The Thinker,” symbolizing profound introspection, can represent Dante contemplating the infernal journey.

The concept of probability can be traced back to ancient civilizations’ games of chance. Ancient Greeks, including philosophers like Plato and Aristotle, discussed randomness and determinism. However, their discussions were philosophical rather than mathematical in the way modern probability theory is. Plato believed in an inherently ordered universe where randomness was but an illusion. In contrast, Aristotle argued for the existence of genuinely random events in the universe.

The word “stochastic”, rooted in the Greek term “στόχος” for “aim”, brings forth an intriguing connection, especially in the world of precision and accuracy.

In any endeavor, the distinction between the two is paramount. Let’s consider the mechanics of aiming and shooting. Every competitive player knows that when your shots are both accurate and precise, you’re not just playing the game — you’re dominating it.

To put it in game terms, imagine every shot you take in a first-person shooter game as a data point. There are four targets, each representing a different combination of accuracy and precision. Accuracy refers to how close your shots are to the target’s center, while precision refers to how close your shots are to each other.

  • Low Accuracy, Low Precision: Your shots are scattered, indicating inconsistency and unpredictability in gameplay.
  • Low Accuracy, High Precision: Your shots consistently hit the same wrong spot, suggesting a systematic error or bias in the game’s aiming mechanics.
  • High Accuracy, Low Precision: Your shots hit around the target center but are widely spaced, implying a correct aim but high variability in execution or outcome.
  • High Accuracy, High Precision: Your shots are tightly grouped at the target’s center, the ideal state for a player seeking to master the game.
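The four cases above can be quantified from raw shot data. The sketch below is illustrative: it defines accuracy as the mean distance of shots from the target center and precision as the mean distance from the shots’ own centroid (one common convention; lower is better for both).

```python
import math

def accuracy_and_precision(shots, center=(0.0, 0.0)):
    """Accuracy: mean distance from the target center.
    Precision: mean distance from the shots' own centroid."""
    n = len(shots)
    # Accuracy: how close the shots are to the intended target on average.
    accuracy = sum(math.dist(s, center) for s in shots) / n
    # Precision: how tightly the shots cluster, regardless of where they land.
    cx = sum(x for x, _ in shots) / n
    cy = sum(y for _, y in shots) / n
    precision = sum(math.dist(s, (cx, cy)) for s in shots) / n
    return accuracy, precision

# Tight cluster far from the center: high precision, low accuracy (systematic bias).
biased = [(5.0, 5.0), (5.1, 5.0), (5.0, 5.1), (4.9, 5.0)]
# Loose cluster around the center: high accuracy, low precision (high variability).
spread = [(1.0, 0.0), (-1.0, 0.0), (0.0, 1.0), (0.0, -1.0)]

acc_b, prec_b = accuracy_and_precision(biased)
acc_s, prec_s = accuracy_and_precision(spread)
```

Separating the two metrics like this lets a designer detect whether a player’s problem is aim bias (fixable by recalibrating sights) or jitter (fixable by recoil tuning).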

Each action in a game, much like the ancient games of chance, can be seen as a part of a stochastic process, a sequence of events influenced by random variables.

State machine

Aiming and shooting in a game can be modeled as a Markov chain, where each shot represents a state transition. The probabilities of hitting or missing the target could depend on in-game factors such as the player’s skill, weapon accuracy, or even the game’s inherent randomness.
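A minimal sketch of that idea, with made-up transition probabilities: the chain has two states, “hit” and “miss”, and the outcome of each shot depends on the previous one (landing a hit steadies the aim, missing induces tilt). The specific numbers are assumptions for illustration, not balancing data.

```python
import random

# Hypothetical transition probabilities: the next shot depends on the last one.
TRANSITIONS = {
    "hit":  {"hit": 0.7, "miss": 0.3},
    "miss": {"hit": 0.4, "miss": 0.6},
}

def simulate_shots(n, start="miss", rng=None):
    """Walk the two-state chain for n shots and return the outcome sequence."""
    rng = rng or random.Random()
    state, history = start, []
    for _ in range(n):
        probs = TRANSITIONS[state]
        state = rng.choices(list(probs), weights=list(probs.values()))[0]
        history.append(state)
    return history

shots = simulate_shots(1000, rng=random.Random(42))
hit_rate = shots.count("hit") / len(shots)
```

With these numbers the long-run hit rate settles near 4/7 ≈ 0.57 (the chain’s stationary distribution), regardless of whether the first shot hits or misses.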

So what is the missing link between these two graphs?

The answer is: literature!

Andrey Markov’s theory was indeed influenced by his interest in the statistical patterns of letters in literature. He sought to prove the Law of Large Numbers for dependent events, as opposed to the then-common application to independent events like coin tosses.

Markov was intrigued by the work of other mathematicians who were analyzing the frequency of vowels and consonants in Russian poetry. They treated the letters in text as a sequence of independent events, but Markov suspected that the letters were not independent — that the appearance of a certain letter could depend on the letters that came before it.

This was a departure from existing statistical methods which typically assumed that each event (or letter, in this case) occurred independently of others.

Markov’s analysis involved creating chains where each letter was a state. He then examined how the sequence of letters transitioned from one state to another. For example, if the letter ‘a’ appeared, what was the probability that it would be followed by a vowel versus a consonant?
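Markov’s counting procedure is easy to reproduce. The sketch below runs it on an English sentence rather than Russian poetry, and approximates vowels as “aeiou” (ignoring ‘y’): it tallies how often a vowel or consonant is followed by each class, then normalizes the tallies into transition probabilities.

```python
from collections import Counter, defaultdict

VOWELS = set("aeiou")  # rough English approximation; Markov worked with Russian

def vowel_consonant_transitions(text):
    """Count vowel/consonant -> vowel/consonant transitions and normalize
    each row of counts into transition probabilities."""
    letters = [c for c in text.lower() if c.isalpha()]
    counts = defaultdict(Counter)
    for prev, nxt in zip(letters, letters[1:]):
        a = "vowel" if prev in VOWELS else "consonant"
        b = "vowel" if nxt in VOWELS else "consonant"
        counts[a][b] += 1
    return {
        state: {nxt: n / sum(following.values()) for nxt, n in following.items()}
        for state, following in counts.items()
    }

probs = vowel_consonant_transitions(
    "the appearance of a certain letter depends on the letters before it"
)
```

Run on real text, the rows come out far from uniform — exactly the dependence between neighboring letters that Markov suspected.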

The culmination of Markov’s work opened the door to further advancements in understanding sequences and probabilities. One significant extension of this concept is the Hidden Markov Model (HMM).

In an HMM, we observe a series of outputs generated by a sequence of hidden states, and the model aims to infer the hidden states based on the observable outputs. They are well-suited for modeling sequential data with temporal dependencies. HMMs are trained using algorithms like the Baum-Welch algorithm for unsupervised learning. They can be used in cases where labeled data is limited or unavailable.
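To make “hidden states generating observable outputs” concrete, here is a toy forward algorithm — the standard way to compute how likely an observation sequence is under an HMM. The states (“calm”, “alert”), the observable actions, and every probability are invented for illustration only.

```python
# Toy HMM: a hidden "mood" emits observable actions each step.
states = ["calm", "alert"]
start = {"calm": 0.8, "alert": 0.2}
trans = {
    "calm":  {"calm": 0.7, "alert": 0.3},
    "alert": {"calm": 0.4, "alert": 0.6},
}
emit = {
    "calm":  {"idle": 0.6, "wander": 0.3, "growl": 0.1},
    "alert": {"idle": 0.1, "wander": 0.3, "growl": 0.6},
}

def forward(observations):
    """Return P(observations) by summing over every hidden state path."""
    # Initialize: start distribution times the first emission.
    alpha = {s: start[s] * emit[s][observations[0]] for s in states}
    for obs in observations[1:]:
        # Propagate probability mass through the transition matrix,
        # then weight by how likely each state is to emit this observation.
        alpha = {
            s: emit[s][obs] * sum(alpha[p] * trans[p][s] for p in states)
            for s in states
        }
    return sum(alpha.values())

likelihood = forward(["idle", "growl", "growl"])
```

In practice you would train such a model with Baum-Welch (as mentioned above) rather than hand-writing the tables; libraries like hmmlearn package both steps.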

How to apply Markov chains in game design?

In the realm of game design, applying Markov chains can significantly enhance the dynamism and realism of in-game AI behavior. By integrating these probabilistic models, designers can create characters that react in varied and unpredictable ways, providing a richer gaming experience.

State Machine: A Markov Chain in Action

Let’s see how Markov chains can be utilized, particularly within the context of our game Zombie Kebab VR.

Consider the behavior of a zombie in a survival horror game. The following states represent the zombie’s potential actions:

  • Idle: calm state, unaware of the player’s presence.
  • Wandering: aimlessly moving until something catches its attention.
  • Chasing: Sensing its prey, the zombie engages in pursuit upon detecting the player.
  • Attacking: In close combat with the player.
  • Dying: The final state leading to total decay.

The transitions between these states are influenced by game-world stimuli — much like the way ancient philosophers believed external forces influenced the roll of dice or the flight of an arrow. For example, the proximity of the player can affect whether the zombie starts wandering or chasing. Tweak the transition probabilities, and you alter the tension and pace of the game. Add more enemies and time pressure to increase difficulty.
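A minimal sketch of such a state machine, assuming the five states above. The transition table and the proximity rule are placeholder values to tune, not numbers from the actual game; the stimulus simply shifts probability mass toward Chasing when the player is near.

```python
import random

# Illustrative transition probabilities for the zombie's states.
BASE = {
    "Idle":      {"Idle": 0.6, "Wandering": 0.4},
    "Wandering": {"Idle": 0.2, "Wandering": 0.7, "Chasing": 0.1},
    "Chasing":   {"Wandering": 0.2, "Chasing": 0.6, "Attacking": 0.2},
    "Attacking": {"Chasing": 0.5, "Attacking": 0.4, "Dying": 0.1},
    "Dying":     {"Dying": 1.0},  # absorbing state: total decay
}

def next_state(state, player_near, rng=random):
    """Pick the zombie's next state; player proximity biases it toward Chasing."""
    probs = dict(BASE[state])
    if player_near and state in ("Idle", "Wandering"):
        # Stimulus: halve the base probabilities and give the freed mass to Chasing.
        probs = {s: p * 0.5 for s, p in probs.items()}
        probs["Chasing"] = probs.get("Chasing", 0.0) + 0.5
    options = list(probs)
    return rng.choices(options, weights=[probs[s] for s in options])[0]

rng = random.Random(7)
state = "Idle"
for _ in range(50):
    state = next_state(state, player_near=True, rng=rng)
```

Difficulty tuning then becomes a matter of editing numbers in one table rather than rewriting branching logic — exactly the “tweak the transition probabilities” lever described above.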

The result should be a gameplay mechanic that feels organic, as if having a set of natural laws, while being fueled by a probabilistic model.

If you are interested in how we implement the opponents’ behaviors in our upcoming VR title, visit www.zombiekebab.com


Mateusz Dzikowski

Game Designer | Linguist | Educator | Artist | Crafting immersive gaming experiences in VR