Facing Your F.E.A.R.

F.E.A.R.: First Encounter Assault Recon is a first-person shooter by Monolith Productions, released on PC in 2005 and later ported to Xbox 360 and PlayStation 3. The game revolves around a special forces unit sent on a mission to neutralise a target who has assumed control of an army of telepathically controlled soldiers. This already wacky premise takes a dark and violent turn, as strange paranormal phenomena, manifesting in the form of a young girl, begin to haunt both friend and foe, leading to a bloody confrontation with the player.

The game was praised on release for the intelligent and challenging AI of its enemy soldiers. In fact, despite many subsequent innovations, the AI of F.E.A.R. is still considered in some circles to be the standard for first-person shooters. Arguably what drives this is the interesting balance between the authorial intent of programmers and designers and the emergent gameplay that comes from the flexible AI of the NPCs. In this piece I’m going to give an overview of how the AI works and what design decisions help make it still feel fresh and exciting to play over 10 years later.

Building the AI of F.E.A.R.

When dealing with an AI character or agent, be it for games or any other domain, we are typically focused on three key things:

  • What information do we have about the world? Knowledge that we can encapsulate within a simple encoding, known as a state.
  • What actions does the agent have available to it? How can it transform that state into new circumstances that may ultimately prove more desirable?
  • Lastly, what are the agent’s goals? By understanding the agent’s intent, we can use a search algorithm to find the best sequence of state transitions (via actions) to reach that goal.
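
To make this concrete, below is a minimal sketch of how those three ingredients drive a search. This is purely illustrative and not F.E.A.R.'s code: states are sets of facts, actions transform them, and a breadth-first search finds a route to the goal.

```python
from collections import deque

def plan(start, actions, goal):
    """Breadth-first search for a sequence of actions that reaches the goal.

    start:   a frozenset of facts describing the current world state.
    actions: (name, preconditions, add_list, delete_list) tuples of frozensets.
    goal:    a frozenset of facts that must all hold in the final state.
    """
    frontier = deque([(start, [])])
    visited = {start}
    while frontier:
        state, steps = frontier.popleft()
        if goal <= state:                      # every goal fact holds
            return steps
        for name, pre, add, delete in actions:
            if pre <= state:                   # action is applicable here
                successor = (state - delete) | add
                if successor not in visited:
                    visited.add(successor)
                    frontier.append((successor, steps + [name]))
    return None                                # no sequence reaches the goal
```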

These principles continue to hold here, though they are adjusted ever so slightly to suit the needs of gameplay. Typically when we think of AI goals they point towards what we want to do: we want to navigate an environment and reach a destination, we want to pick up and move an object, we want to figure out how best to rescue people from a natural disaster. Each of these goals, in some sense, expresses the actions we want to take.

The overall goal of NPCs in F.E.A.R. is not to kill the player. Their goal is to eliminate ‘threat’.

This is where the AI of F.E.A.R. makes an interesting choice: the goals of the NPCs do not focus on killing the player, they focus on eliminating ‘threat’. That threat is driven predominantly by the player, and AI characters are given a range of actions to choose from in response, leaving plenty of scope for engaging behaviour.

The NPCs in F.E.A.R. are focussed on removing the threat presented by the player.

Thinking Long-Term

One key issue for these NPCs is the need to build some kind of long-term strategy. Traditionally, and even in recent titles, enemy NPCs in shooters are rather reactive in nature, meaning they focus on acting with immediacy, without consideration of what happened before or what could happen in the future. The problem that emerges is that their behaviour is neither deliberative nor emergent: it is tied to specific behaviours in defined contexts. For the more interesting NPCs in F.E.A.R. to work, they need to think long-term. Rather, they need to plan.

Planning (and scheduling) is a substantial area of Artificial Intelligence research, with work dating back as far as the 1960s. It is an abstract approach to problem solving, reliant upon symbolic representations of state conditions, actions and their effects. Planning problems (and the systems that solve them) often distance themselves from the logistics of how an action is performed; instead they focus on what needs to be done and when. For those interested, (Ghallab et al., 2004) provides a strong introduction to planning and scheduling systems.

NPCs in F.E.A.R. adopt planning techniques to think long-term about what they want to achieve.

One benefit of a planning system is that we can build a number of actions that show how to achieve certain effects in the world state. These actions can dictate who can make these effects come to pass, as well as what facts must be true before they can execute, often known as preconditions. In essence, we decouple the goals of an AI from any one specific solution: we can provide a variety of means by which goals can be achieved and allow the planner to search for the best one.
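
As a sketch of that decoupling, using the same tuple format as the planner above and entirely hypothetical action names: two different actions can deliver the same effect, and which characters can achieve it which way is controlled by the action sets (and preconditions) they are given.

```python
# Two hypothetical ways of achieving the same effect, ("target_dead",).
# A goal only asks for the effect; the planner chooses the action.
ATTACK_RANGED = (
    "attack_ranged",
    frozenset({("weapon_loaded",), ("target_in_sight",)}),  # preconditions
    frozenset({("target_dead",)}),                          # add list
    frozenset(),                                            # delete list
)
ATTACK_MELEE = (
    "attack_melee",
    frozenset({("next_to_target",)}),
    frozenset({("target_dead",)}),
    frozenset(),
)

# Different character types simply receive different action sets.
SOLDIER_ACTIONS = [ATTACK_RANGED, ATTACK_MELEE]
ASSASSIN_ACTIONS = [ATTACK_MELEE]
```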

However, at the time F.E.A.R. was developed, planning was not common in commercial games (Orkin, 2004). Planning systems were typically applied to real-world problems such as power station management and the control of autonomous vehicles underwater or in space. As such, the team at Monolith needed to build their own implementation from scratch to work within a game engine.

G.O.A.P.: Goal Oriented Action Planning

The approach taken by Monolith, as detailed in (Orkin, 2005), is known as Goal Oriented Action Planning (GOAP). The implementation attempted to reduce the number of unique states that the planning system would need to manage by generalising the state space using a Finite State Machine (FSM).

As shown in the figure above, the behaviour of the agent is distilled into three core states within the FSM:

  • Goto — it is assumed that the bot is moving towards a particular physical location. These locations are often nodes that have been annotated to permit certain actions to take place when near them.
  • Animate — each character in the game has some basic animations that need to be executed for it to remain believable to the player. This state ensures that bots run animations that have context within the game world, such as peeking out from cover, opening fire or throwing a grenade.
  • Use Smart Object — as explained in (Orkin, 2005), this is essentially the Animate state, except the animation happens in the context of a particular object in the world. Examples include jumping over a railing or flipping a table on its side to provide cover.

That’s it! The entire FSM for the NPCs in the game is three states. Note that these states are very abstract: we do not know which locations are being visited, which animations are played or which smart objects are used. This is because a search is conducted at runtime that determines how the FSM is navigated.
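
To illustrate just how small that machine is, here is a sketch of the three states, with entirely illustrative parameter values; it is the plan, produced by the search, that grounds them into concrete locations, animations and objects.

```python
from enum import Enum, auto

class NpcState(Enum):
    GOTO = auto()              # move to a target position in the world
    ANIMATE = auto()           # play a context animation (fire, reload, peek)
    USE_SMART_OBJECT = auto()  # animate in the context of a world node

# A plan is just a sequence of parameterised FSM steps; the search decides
# which locations, animations and smart objects actually appear here.
example_plan = [
    (NpcState.GOTO, {"target": "cover_node_12"}),
    (NpcState.USE_SMART_OBJECT, {"object": "table_04", "animation": "flip"}),
    (NpcState.ANIMATE, {"animation": "fire_from_cover"}),
]
```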

F.E.A.R. AI is robust, reactive and proactive where necessary.

Bear in mind that we traditionally use events to move from one state to another in an FSM. These events are typically the result of a sensor determining that some boolean flag is now true, or an ‘oracle’ system forcing the bot to change state, which then results in a change of behaviour. In F.E.A.R., however, each NPC uses sensor data to determine a relevant goal and then builds a plan, grounded with concrete values, that navigates the FSM to achieve that goal.

Planning in G.O.A.P.

As mentioned earlier, planning requires a symbolic representation that models the world. In the case of GOAP, it relies upon the STRIPS planning language.

STRIPS takes its name from the planning system in which it was originally used: the STanford Research Institute Problem Solver, developed in 1970, which used a simple representation of actions and goals to create a planning formalism (Fikes and Nilsson, 1972).

In the problem shown above, we express the world using two key expressions: at(x), which denotes where we are in the world, and adjacent(x,y), which denotes the relation between one location and another.

We then have an action called move(X,Y), which requires that we are in the original location (X) and that it is directly connected to another (Y). If we execute it, we express that we are now in the second location (Y), while removing the original fact that we were in location X.

The top of this STRIPS instance states that we are in location A and that our goal is to be in location B. This results in a plan with one action: move(A,B).
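
Fed through the plan function sketched earlier, that instance looks as follows (the tuple encoding of the at and adjacent facts is an illustrative choice):

```python
start = frozenset({("at", "A"), ("adjacent", "A", "B")})
goal = frozenset({("at", "B")})

# move(A,B): requires at(A) and adjacent(A,B); adds at(B), deletes at(A).
actions = [(
    "move(A,B)",
    frozenset({("at", "A"), ("adjacent", "A", "B")}),  # preconditions
    frozenset({("at", "B")}),                          # add list
    frozenset({("at", "A")}),                          # delete list
)]

print(plan(start, actions, goal))  # ['move(A,B)']
```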

In F.E.A.R., actions and information about the world are modelled in a manner akin to STRIPS, with goals determined at runtime and searches conducted to find the list of actions that determines what to do.

As mentioned before, planning allows us to decouple actions from goals, but it also means we can enforce who can execute actions using preconditions. A range of actions can be created that are unique to different types of characters in-game, meaning that more emergent behaviour can arise.

Of course, it doesn’t work quite as straightforwardly as that: design decisions were made under the hood in order for all of this to work.

Memory Management

Arguably the key issue that could inhibit performance is the need to keep memory consumption to a minimum. The team at Monolith sought to keep memory costs down with a combination of hashtables of actions indexed by their effects, alongside a blackboard that collected all data globally relevant to the NPCs. Even then, garbage collection had to be managed thoroughly to ensure it did not become too excessive (Orkin, 2005).
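
My reading of what a hashtable of actions indexed by effects buys you, sketched below with the hypothetical actions from earlier: when the planner works backwards from a goal, it can look up candidate actions for a missing fact directly rather than scanning every action.

```python
from collections import defaultdict

# Index every action under each fact it adds to the world state.
actions_by_effect = defaultdict(list)
for action in [ATTACK_RANGED, ATTACK_MELEE]:   # actions from the earlier sketch
    name, preconditions, add_list, delete_list = action
    for fact in add_list:
        actions_by_effect[fact].append(action)

# Regressing from a goal: which actions could make the target dead?
candidates = actions_by_effect[("target_dead",)]
print([name for name, *_ in candidates])  # ['attack_ranged', 'attack_melee']
```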

Another issue mentioned in (Orkin, 2005) was the need to prune the search tree to prevent bots from inadvertently searching for ‘invalid’ solutions. This was achieved by providing “context preconditions” that prevent certain actions from being explored, thus optimising the search. Once again this raises the question of whether an established planning system and formalism would have proven more cost-effective, given the design considerations needed to generate what are, in reality, very short plans.

I imagine that the need to manage the AI knowledge base of F.E.A.R. helped drive the minimum memory requirement to 512MB, which puts it in the same category as Quake 4 and Battlefield 2, despite being a game that focussed on small skirmishes and seldom moved into big combat scenes.

Dynamic Goal Assignment

A planning agent needs goals in order to plan. Goals are typically assigned directly, as seen earlier in the STRIPS example. In F.E.A.R., each character type is assigned a number of different goals it can solve, alongside the actions it can commit to.

A collection of goals and actions assigned to a specific NPC type.

Goals are assigned to characters by continually re-evaluating their priority based on information received from sensors at runtime. This can result in a quick change of priority mid-gameplay, should the player walk into a room or throw a grenade.
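
A minimal sketch of that re-evaluation loop, with hypothetical goals, priorities and sensor readings:

```python
def select_goal(goals, sensors):
    """Return the name of the highest-priority goal for the current sensors.

    goals: (name, priority_fn) pairs, where priority_fn maps sensor readings
    to a number and zero means 'not relevant right now'.
    """
    name, _ = max(goals, key=lambda goal: goal[1](sensors))
    return name

goals = [
    ("patrol",       lambda s: 0.1),                                 # default
    ("kill_enemy",   lambda s: 0.7 if s["enemy_visible"] else 0.0),
    ("take_cover",   lambda s: 0.9 if s["under_fire"] else 0.0),
    ("flee_grenade", lambda s: 1.0 if s["grenade_nearby"] else 0.0),
]

# Re-evaluated every tick: a grenade landing nearby outranks everything else.
sensors = {"enemy_visible": True, "under_fire": True, "grenade_nearby": False}
print(select_goal(goals, sensors))  # take_cover
```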

Basic and Squad Behaviour

To the credit of the developers, the behaviour of the NPCs is fast, fluid and intelligent. It is rare for an enemy to make a stupid mistake that opens them up to a quick kill. Naturally there are moments when the NPCs will expose themselves, but this is the reality of creating enemies in a first-person shooter: they’re designed to be killed, after all. The real benefit is that they are quick to react, make interesting use of the environment and are keen to counter-attack if the opportunity presents itself.

You will also note that the agents appear coordinated: they are not prone to falling over one another or getting in each other’s way. This squad behaviour is largely built upon what already works in the GOAP approach. In truth, however, it is rather deceitful, given the AI is not as coordinated as one would think.

As detailed in (Orkin, 2006), the game relies on a coordinator or manager that ensures NPCs do not cluster too close together or spread too far apart on the game map. These simple behaviours assign goals to the bots that look like coordinated squad behaviour, but rely on the established AI framework already discussed. Having found available NPCs to participate, a number of behaviours can be executed:

  • Get-To-Cover: ensures all squad members not currently in cover move to it.
  • Advance-Cover: moves squad members to cover that is closer to the threat.
  • Orderly-Advance: moves an NPC along a firing line while covered by a teammate.
  • Search: pairs of NPCs systematically search the local area.

A character may attempt to advance to cover, only to fall back after the player throws a grenade towards them.

Typically the squad coordinator does not need to provide any further information beyond the latest goal or task assigned, given that each agent has its own sensory information to help it make decisions.
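
A sketch of that division of labour, with illustrative names: the coordinator recruits available NPCs and hands each one a goal, and everything from there on is each NPC's own planner at work.

```python
def run_squad_behaviour(npcs, min_members, assign_goals):
    """Recruit available NPCs and hand each one an individual goal.

    npcs: dicts with 'available' and 'goal' keys.
    assign_goals: maps the participant list to one goal per participant.
    """
    participants = [npc for npc in npcs if npc["available"]]
    if len(participants) < min_members:
        return False                       # not enough free NPCs, abort
    for npc, goal in zip(participants, assign_goals(participants)):
        npc["goal"] = goal                 # each NPC plans for this alone
    return True

# Get-To-Cover: every participant is simply told to reach cover.
squad = [{"available": True, "goal": None} for _ in range(2)]
run_squad_behaviour(squad, 2, lambda ps: ["get_to_cover"] * len(ps))
print([npc["goal"] for npc in squad])      # ['get_to_cover', 'get_to_cover']
```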

There are no complex squad behaviours. [They are] the result of two separate NPCs committing to simple squad behaviours at the same time.

However, it’s important to note that this is not always a priority for the character, bearing in mind that goal prioritisation is still active. Should executing a squad behaviour run the risk of death, the NPC has the option to ignore it and instead prioritise another goal that eliminates immediate threats (such as being in the player’s firing line, or a grenade landing nearby).

Beyond this, there are no complex squad behaviours. Players will recognise that at times the NPCs appear to coordinate a flank or a retreat. However, this is actually the result of two separate NPCs committing to simple squad behaviours that, in combination, look far more complicated.

Squad behaviours such as flanking are coordinated by the squad manager and not the NPCs themselves. Each AI doesn’t even know that the other exists, never mind that they appear to be working together.

The other surprising squad element is the coordination implied by communication. During gameplay the soldiers can be heard shouting exchanges to one another, suggesting that their comrades should take cover or take an opportunity to flank the player.

In reality, the squad coordinator oversees the actions being committed as part of the squad behaviour. It finds the appropriate dialogue and assigns it to the correct actors (Orkin, 2006). An action being executed thus results in dialogue being assigned to nearby actors, helping simulate the idea of coordination.
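
A guess at the shape of that mechanism, with invented dialogue and actor names: the coordinator maps a squad action to a short scripted exchange and hands each line to a different nearby actor.

```python
# Hypothetical action-to-dialogue table; the real game's data will differ.
DIALOGUE_FOR_ACTION = {
    "orderly_advance": ["Moving up, cover me!", "Go! I've got you covered!"],
}

def assign_dialogue(action, nearby_actors):
    """Attach each line of the exchange to a different nearby actor."""
    for actor, line in zip(nearby_actors, DIALOGUE_FOR_ACTION.get(action, [])):
        print(f"{actor}: {line}")          # stand-in for the game's audio call

assign_dialogue("orderly_advance", ["Replica_01", "Replica_02"])
```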

Conclusion

The creation of GOAP had a tremendous impact on game AI. While the resulting implementation proved popular with game developers, permeating through the community to a range of different titles, the actual AI science behind it was rather straightforward: GOAP took existing concepts prominent in the industry and added influences from planning research to create something fresh and exciting.

This work achieved a legacy within the industry in a relatively short period of time. As summarised in (Champandard, 2013), the technique was subsequently employed in a number of combat-driven games, including Just Cause 2, S.T.A.L.K.E.R., Deus Ex: Human Revolution, Fallout 3 and Empire: Total War.

It has also given rise to the adoption of Hierarchical Task Network (HTN) planning: a paradigm driven by academic research that uses macros of behaviour for search and execution. HTN planning has since been adopted in titles such as Killzone 2, Transformers: Fall of Cybertron, Dark Souls, Dying Light and Horizon: Zero Dawn.

References

  • (Fikes and Nilsson, 1972) Fikes, R. E. and Nilsson, N. J., STRIPS: A New Approach to the Application of Theorem Proving to Problem Solving. Artificial Intelligence 2(3–4): 189–208.
  • (Fox and Long, 2003) Fox, M. and Long, D., PDDL2.1: An Extension to PDDL for Expressing Temporal Planning Domains. Journal of Artificial Intelligence Research (JAIR) 20: 61–124.
  • (Ghallab et al., 2004) Ghallab, M., Nau, D. S. and Traverso, P., Automated Planning: Theory and Practice. Morgan Kaufmann, ISBN 1-55860-856-7.
  • (Orkin, 2004) Orkin, J., Symbolic Representation of Game World State: Toward Real-Time Planning in Games. AAAI Challenges in Game AI Workshop 2004.
  • (Orkin, 2005) Orkin, J., Agent Architecture Considerations for Real-Time Planning in Games. Proceedings of the 2005 Conference on Artificial Intelligence and Interactive Digital Entertainment (AIIDE ’05).
  • (Orkin, 2006) Orkin, J., Three States and a Plan: The AI of F.E.A.R. Proceedings of the 2006 Game Developers Conference (GDC ’06).
  • (Thompson and Levine, 2009) Thompson, T. and Levine, J., Realtime Execution of Automated Plans Using Evolutionary Robotics. IEEE Symposium on Computational Intelligence and Games (CIG), September 2009.

Enjoying AI and Games? Please support my work on Patreon!


Originally published at aiandgames.com on March 2, 2014.