Developing Vehicle AI for Stingray, Part 1

On the Stingray development team, we do hack days every other Friday. Last time I decided to try to learn how to make a basic AI for vehicles in Stingray. When I started in the morning, I had spawned a car that I could just about control using my keyboard. By the end of the day, I had created a basic AI for another car that followed me around. You can see the result here:

(Consider turning down the volume first!)

I did several captures to find a clip that looked nice as the AI would frequently get stuck or not show off what I wanted.

Since then I’ve spent some time improving it, refactoring, and adding more behaviors, and today, the hack day two weeks after my first attempt, I’m writing down what I’ve found in case anyone else might find it useful. It’s not done by any means: there are lots of basic behaviors to write, lots of tweaking to do, and probably a fair amount of refactoring too. That said, I think I have a good core structure to work with, and it should be useful for anyone interested in developing vehicle AI, even if they’re not using Stingray.

Here’s what it currently looks like. Some improvements have been made, though mostly refactorings, so there aren’t that many new features.

In fact, while you can see the debug drawing for avoidance, I’ve temporarily disabled the actual AI behavior, to make sure other things work correctly. I recommend viewing the video with annotations to better understand what’s going on.

A word of caution before you read on! This is very much a learning project for me, so I may very well realize I’m wrong about certain things. I have a fair amount of experience creating games and working on AI, but I haven’t done vehicle AI before. Some things I’m downright uncertain about! :)


Implementation

I like to work with Entity Component Systems (also known as ECS). I think it’s a good way to organize your game code, so naturally that’s what I’m using for this project. I’ve written my own custom ECS inspired by what we’ve been using the last couple of years at Pixeldiet to make games.

The two cars (the one I’m controlling and the one controlled by AI) are both entities with different sets of components.

The Vehicle Component

The first component, simply named vehicle and handled by the VehicleSystem, is exactly the same for both entities. It’s a thin wrapper around the Vehicle API in Stingray. The API wants a value in [0, 1] for acceleration, another value in [0, 1] for deceleration, and a value in [-1, 1] for steering left or right. There are more things you can do with a Stingray vehicle, like hand braking, but those are the basics. The vehicle component looks at the values given to it each frame (following the same ranges) and passes them on to the vehicle.

It will also ensure that if the vehicle needs to go backwards, it applies the reverse gear.
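In code, the component’s per-frame work might look something like this minimal Lua sketch. All field names (wanted_acceleration, wants_reverse, and so on) are hypothetical, the pedal swap on reverse is one plausible way to handle it, and the actual Stingray Vehicle API calls are stubbed out as a table assignment:

```lua
-- Minimal sketch of the vehicle component update. Field names are
-- hypothetical; the real component would call into Stingray's Vehicle API.
-- Values follow the ranges described above: acceleration and deceleration
-- in [0, 1], steering in [-1, 1].
local function clamp(v, lo, hi)
  return math.max(lo, math.min(hi, v))
end

VehicleSystem = {}

function VehicleSystem.update(vehicle_ext)
  local accel = clamp(vehicle_ext.wanted_acceleration or 0, 0, 1)
  local brake = clamp(vehicle_ext.wanted_braking or 0, 0, 1)
  local steer = clamp(vehicle_ext.wanted_steering or 0, -1, 1)

  -- If the driver wants to go backwards, engage the reverse gear and
  -- swap the pedals: "accelerate" now means reversing. (An assumption,
  -- one plausible way to handle it.)
  if vehicle_ext.wants_reverse then
    accel, brake = brake, accel
    vehicle_ext.gear = -1
  else
    vehicle_ext.gear = 1
  end

  -- In the real component these would be Stingray Vehicle API calls.
  vehicle_ext.applied = { accel = accel, brake = brake, steer = steer }
end
```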

The Driver Component

The VehicleDriverSystem is the only system (aside from the vehicle) that manages a component for my own vehicle. My car has a vehicle_driver_player component whereas the AI car has a vehicle_driver_ai, and they are both managed by the same system.

vehicle_driver_player is quite straightforward. It simply looks at the keyboard input and forwards it to the vehicle component.

The code is written in Lua, Stingray’s main scripting language. My naming convention for entity components is to name them vehicle_ext, where ext stands for extension, a word I occasionally use instead of component.

With AI, it becomes trickier, of course. How do you write a driver that can turn the wheel and put the pedal to the metal?

Well, as it has turned out, the AI entity consists of quite a few components, each handling a specific part of the vehicle’s AI logic.

So here’s where we are right now:

The driver component takes a wanted direction and a wanted speed and tries to convert that to the stuff the vehicle wants. To do this, it does a few fairly simple math operations, such as the dot product of the wanted direction and the vehicle’s right vector, to figure out if it should turn left or right, and if it should accelerate or decelerate.
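As a sketch (with hypothetical names, and hand-rolled vector math standing in for Stingray’s), the direction-to-steering conversion could look like this:

```lua
-- Sketch of the driver's direction-to-steering conversion. The dot
-- product of the wanted direction with the vehicle's right vector tells
-- us how hard to turn: positive means the target is to the right.
local function dot(a, b)
  return a.x * b.x + a.y * b.y + a.z * b.z
end

function compute_steering(wanted_direction, right_vector)
  -- Both vectors are assumed normalized, so the dot product falls in
  -- [-1, 1], which is exactly the range the vehicle component wants.
  return dot(wanted_direction, right_vector)
end
```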

Currently I only have a simple P-controller for controlling the speed, meaning that if the current speed is faster than the wanted speed, the driver brakes by a certain amount, and if the speed is slower, it speeds up by a certain amount. The idea is to convert this to a proper PID controller at some point, but for now this works well enough.
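A minimal P-controller along those lines might look like this sketch, where kp is a made-up tuning constant:

```lua
-- Minimal P-controller for speed: the output is proportional to the
-- error between wanted and current speed. Positive output means
-- accelerate, negative means brake. kp is a hypothetical tuning value.
function speed_control(wanted_speed, current_speed, kp)
  kp = kp or 0.5
  local err = wanted_speed - current_speed
  local output = kp * err
  -- Clamp into the separate [0, 1] pedal ranges the vehicle wants.
  local accel = math.max(0, math.min(1, output))
  local brake = math.max(0, math.min(1, -output))
  return accel, brake
end
```

A full PID controller would add integral and derivative terms on top of the same error, which is what the article below covers.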

Here is a great article on PID controllers for vehicles, available for free on www.gameaipro.com: Racing Vehicle Control Systems using PID Controllers

By the way: I just realized the entities actually have two more components. They’re not specifically part of the AI pipeline, but for completeness’ sake:

  • A unit component that wraps a Stingray unit (the engine’s concept of a game object, soon to be fully replaced by a new entity system in the engine). This is the car, the model, the physics, etc.
  • A transform component that allows easy access to common things you might want from the entity, like position, rotation, forward vector, etc.

ANYWAY! That’s pretty much it for the driver. But what provides it with the data it wants?

The Steering Component

The steering component, managed by the VehicleSteeringSystem, is the next part of the chain. It’s so named because steering is a well-known concept in AI. There are lots of good resources on it on the web and in books. Here is one: Understanding Steering Behaviors. If you know of more, please write a comment. :)

Implementing steering for a “real”, physics-based, 3d vehicle is a bit trickier than it is for the simple 2d examples usually provided. Most importantly, you don’t have direct control over the result. You can’t simply come up with a force vector, apply it directly to your vehicle, and modify its position accordingly. Instead it has to be fed to the driver, who has to step on a pedal, which feeds an engine that drives something with a good amount of inertia.

So, the idea with Steering is that you can easily combine multiple behaviors and get a good resulting vector out. The most classic and well-known implementation is probably boids, Craig Reynolds’ bird simulation from the 1980s.

Source: http://procedural-generation.tumblr.com/post/129249685288/boids-at-siggraph-87-craig-reynolds-demonstrated

For example, you can have one steering behavior called seek that returns a vector to some target position, and one called avoid that returns a vector pointing away from the nearest obstacle (scaled after the distance to the obstacle).
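Sketches of those two behaviors, using hand-rolled vector helpers instead of Stingray’s Vector3 API (the avoid_radius falloff is my assumption, not necessarily how the real behavior scales):

```lua
-- Hand-rolled vector helpers standing in for Stingray's Vector3 API.
local function sub(a, b) return { x = a.x - b.x, y = a.y - b.y, z = a.z - b.z } end
local function length(v) return math.sqrt(v.x * v.x + v.y * v.y + v.z * v.z) end
local function scale(v, s) return { x = v.x * s, y = v.y * s, z = v.z * s } end
local function normalize(v) return scale(v, 1 / length(v)) end

-- seek: a vector toward the target, at max speed.
function seek(position, target, max_speed)
  return scale(normalize(sub(target, position)), max_speed)
end

-- avoid: a vector away from the nearest obstacle, scaled by how close
-- it is. Strongest push right on top of the obstacle, fading to zero
-- at the edge of the (hypothetical) avoid radius.
function avoid(position, obstacle, avoid_radius, max_speed)
  local away = sub(position, obstacle)
  local distance = length(away)
  if distance >= avoid_radius then
    return { x = 0, y = 0, z = 0 } -- far enough away, no contribution
  end
  local strength = (1 - distance / avoid_radius) * max_speed
  return scale(normalize(away), strength)
end
```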

Steering: Something like this.

In theory, it’s a great idea. In practice, it’s… still good! But it’s not entirely easy to get right, and it’s not just something you can add and then expect your car to easily handle getting to the correct place while also not bumping into anything along the way.

So how does it work, naively? Well, you loop over all of the current steering behaviors (I’ll go through how the current set is chosen later), each returning a vector, and sum them up. When done, you check if the sum’s magnitude is larger than the vehicle’s maximum speed, and if so, you truncate it. Then it’s fed to the driver component.
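That naive loop could be sketched like this, assuming each behavior is simply a function returning a vector:

```lua
-- Truncate a vector so its length never exceeds max_len.
function truncate(v, max_len)
  local len = math.sqrt(v.x * v.x + v.y * v.y + v.z * v.z)
  if len > max_len then
    local s = max_len / len
    return { x = v.x * s, y = v.y * s, z = v.z * s }
  end
  return v
end

-- Naive combination: sum every behavior's vector, then truncate the
-- total to the vehicle's max speed.
function combine_naive(behaviors, max_speed)
  local sum = { x = 0, y = 0, z = 0 }
  for _, behavior in ipairs(behaviors) do
    local v = behavior() -- each behavior returns a steering vector
    sum.x, sum.y, sum.z = sum.x + v.x, sum.y + v.y, sum.z + v.z
  end
  return truncate(sum, max_speed)
end
```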

A problem with that is that, say your seek behavior always returns (direction_to_target * vehicle_ext.max_speed), because that’s its job, right? Then it’ll always be worth at least as much as the avoid behavior, unless you allow the avoid behavior to return huge vectors to compensate. That’s not a good route to go, though.

(Actually, the seek behavior is slightly more complex than that, but you get the idea.)

One thing you can do is weight the different behaviors, so that the vector returned by avoid is scaled up. That’s probably reasonable to have as a tweakable variable, but even though I haven’t had much time to play with it yet, my experiments and gut feeling tell me it’s better to keep those weights fairly close to 1.

One technique that I think makes a lot of sense is something I picked up from reading Programming Game AI By Example by Mat Buckland: prioritized steering. It’s simple enough that it should be obvious, but I’m sure it would’ve taken me a long time to figure out on my own.

You sort the behaviors in order of importance. Then you run them in that order until you’ve either run all of them, or the summed vector has grown larger than the vehicle’s max speed, in which case you stop and truncate the result before passing it on. How to decide which behaviors are more important than others is probably something you have to test, but it’s not hard to see that avoiding an obstacle is more important than getting to your target on time.

Here’s a rundown:

Of course you can both weigh and prioritize — I do. Note that the seek and avoid vectors would generally be pointing in different directions. :) And, just to be clear, the avoid behavior will return the zero vector whenever there are no obstacles, giving full power to the seek behavior, which is what you want.
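A sketch of prioritized steering under those rules, assuming the behaviors arrive pre-sorted by importance (weighting, which I also do, would just scale each vector before adding it):

```lua
-- Prioritized steering: behaviors come pre-sorted by importance, and
-- together they share a "budget" (the vehicle's max speed). Once the
-- accumulated vector exceeds the budget, it is truncated and the
-- remaining, lower-priority behaviors never run.
function prioritized_steering(behaviors, max_speed)
  local sum = { x = 0, y = 0, z = 0 }
  for _, behavior in ipairs(behaviors) do
    local v = behavior()
    sum.x, sum.y, sum.z = sum.x + v.x, sum.y + v.y, sum.z + v.z
    local len = math.sqrt(sum.x * sum.x + sum.y * sum.y + sum.z * sum.z)
    if len >= max_speed then
      -- Budget spent: truncate and stop evaluating further behaviors.
      local s = max_speed / len
      return { x = sum.x * s, y = sum.y * s, z = sum.z * s }
    end
  end
  return sum
end
```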

So, we now have a layer that provides the driver with the information on where to go. But how do we control the steering? How do we feed it data? For the answer to that, we turn to Behavior Trees (BTs).

The Behavior Component

All of the previous components have been pretty dumb. They take some input, operate on it, and pass it on, but they don’t really know why they’re doing it or in which context it’s happening.

The behavior component is in some sense the real brain of the AI. (Funny that it’s two layers away from the driver…)

There are tons of resources available on what a Behavior Tree is, but it boils down to a structured way of writing a long and complicated if-then statement. The behavior component consists of one or more BTs.

We used them successfully on Warhammer: End Times — Vermintide (which also uses Stingray!) and while they worked well, we did have some problems. I’m attempting to use BTs in a way that hopefully lets me avoid some of those issues.

Multiple behavior trees

So one of the problems we had was with a rat (Vermintide is all about the rats!) that had a behavior that took some time to complete, for example skulking around on the rooftops for a while before finding a lone player to jump on. We had another behavior in the tree that we switched to when the rat found itself next to a wall it wanted to jump up on. The problem then became: how does the rat’s behavior tree “find its way back” to the skulking behavior? We did solve it in the end, I believe by storing some state so that the next time the tree was evaluated, it ended up in the old skulk node. But then that node would get its enter() function called, and that function had to know whether it was being entered for the first time or re-entered after being temporarily cancelled. It wasn’t a super nice solution.

I realized too late that what we should have done was to separate movement from more “intelligent” behavior. We have the same problem with vehicles. Currently, my AI car wants to follow my own car in a formation. But if it happens to drive too fast and ends up driving straight into a wall, it needs to reverse, turn around, and then continue following. In this case, it makes sense that it is both in a formation behavior and in what I’ve called a reverse_from_wall_180 behavior at the same time. The formation behavior doesn’t need to know, and in fact shouldn’t care, that the vehicle had to take a small detour.

Here’s what my definition currently looks like (this is one of the most WIP areas of the code as I tend to move things around to figure out how it should work).

So what I’ve done is I have two BTs running at the same time. One for movement and one for the formation logic.

Some things to note:

  • The trees are currently simple lists, so no nested things, no selectors (except the root, I suppose), no sequences, etc. Whether or not I make it more complex/complicated later on, we’ll see, but for now I’m keeping it simple and it seems like it should work well.
  • I think that when I want the vehicle to do many different kinds of things, like combat or racing, I’ll have that as a state machine of some kind in a layer on top, and switch out the formation behavior. I think that makes more sense than having a complicated tree, but we’ll see!
  • The first part of each element is the name/type of behavior. The second is the list of conditions that must be true to switch to that behavior. The third is an optional piece of data that can be used to configure the behavior.

So the formation BT basically says:

  • Do I have a target to follow, and is it closer than 40 meters? If yes, use behavior keep_offset. If not, go on to the next one…
  • Do I not have a path? OR I do have one but the target has moved? OR I do have one but I am far from it? Then calculate the path to the target. If not…
  • Try to get close until the distance is smaller than 30 meters.
  • Idle will never trigger with the BT as shown there.
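As an illustration only (the names, distances, and structure here are hypothetical stand-ins, not my actual definition), a list-style BT like the one described could be expressed as a plain Lua table plus a top-to-bottom evaluator:

```lua
-- Hypothetical list-style BT: each entry is { name, conditions, config },
-- evaluated top to bottom; the first entry whose conditions all pass
-- becomes the current behavior. Names and values are illustrative only.
formation_bt = {
  { "keep_offset",    { "has_target", "target_within_40m" }, { offset = { -6, 0, 2 } } },
  { "calculate_path", { "needs_new_path" } },
  { "follow_path",    { "has_path" },                        { arrive_distance = 30 } },
  { "idle",           {} },
}

-- conditions is a table of condition name -> boolean, e.g. filled in
-- from the blackboard. Returns the chosen behavior's name and config.
function evaluate_bt(bt, conditions)
  for _, entry in ipairs(bt) do
    local name, wanted, config = entry[1], entry[2], entry[3]
    local ok = true
    for _, condition in ipairs(wanted) do
      if not conditions[condition] then
        ok = false
        break
      end
    end
    if ok then
      return name, config
    end
  end
end
```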

BT evaluation

Another thing I’m changing from how it worked on Vermintide is that I’m only evaluating the tree when something changes. If nothing changes, I just run the update function of the tree’s current behavior. That is mainly for performance reasons: why run through code all the time if you don’t really have to?
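That evaluate-on-change idea could be sketched as follows (the structure and field names are hypothetical):

```lua
-- Sketch of evaluate-on-change: the tree is only re-evaluated when
-- something flags that it needs to be; otherwise we just tick the
-- current behavior's update function.
function update_behavior_component(component, dt)
  if component.needs_evaluation then
    component.current = component.evaluate(component.bt)
    component.needs_evaluation = false
    if component.current.enter then
      component.current.enter()
    end
  end
  component.current.update(dt)
end
```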

The purpose of each behavior comes down to this:

  • Set up steering and push appropriate data, such as “Seek to this position”. The follow_path behavior does exactly that, and ensures that when it gets close enough to the current target position in the path, it starts seeking to the next one. It also feeds avoidance data to the avoid steering behavior.
  • Check if there’s a reason to re-evaluate the component’s BTs.

The “keep_offset” behavior.

Currently there is no inherent order between the movement and formation trees, though I’m leaning more and more toward a clear separation, ensuring that formation can affect movement but not the other way around. I think it’ll keep me saner when thinking about how things work and in which order things happen.


So is this it?

Nope. There’s also the Perception component, the Blackboard component, and the Navigation component. By then it’s starting to look a little bit more complicated… But I’m going to leave that for part 2!

Hope you made it all the way through! Feel free to ask questions.

/Anders
