Developing Vehicle AI for Stingray, Part 2
Towards the end of the previous part — found here — you may have been thinking: “Alright, so we have a bunch of layers that think and do stuff… but how does it know what to base the logic on?”
And then it ended with this graphy thing:
In this part we’ll dig into those three grey (gray? I can never remember!) boxes.
The Perception Component
The perception component is the “first” of the AI components to get an update in the frame. Its job is to analyse the situation so that the other components can make informed decisions. My perception component is currently pretty simplistic — it does a disk-cast on the navmesh in front of the car, and if it hits something within 20 or so meters, it does a bunch more raycasts to figure out where the obstacle is and, more importantly, where it isn’t.
In this example, it’s figured out that there’s something ahead of the vehicle, and that there are two spots, one on either side of it, that are “clear” in the direction it’s going. The purple X marks the vehicle’s current destination — where it’s trying to get to.
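Roughly, the update looks something like the sketch below. To be clear: disk_cast, raycast, rotate_around_up and the blackboard keys are placeholder names for this post, not the actual Navigation API, and the numbers are made up.

```lua
-- Sketch of the perception update. Nav queries, vector helpers and
-- blackboard keys are placeholders, not the real engine API.
local LOOKAHEAD = 20        -- meters to probe ahead of the vehicle
local FAN_ANGLE = math.pi / 3
local NUM_RAYS = 7

function PerceptionComponent:update(dt)
  local pos = self._vehicle:position()
  local forward = self._vehicle:forward()
  local bb = self._blackboard

  -- Disk-cast along the navmesh in the direction we're travelling.
  local hit, hit_distance = self._nav:disk_cast(pos, forward, LOOKAHEAD)
  bb.obstacle_ahead = hit
  bb.obstacle_distance = hit and hit_distance or nil

  if hit then
    -- Fan out raycasts to figure out where the obstacle is, and isn't.
    local clear_directions = {}
    for i = 1, NUM_RAYS do
      local t = (i - 1) / (NUM_RAYS - 1)       -- 0..1 across the fan
      local angle = (t - 0.5) * FAN_ANGLE      -- centered on `forward`
      local dir = rotate_around_up(forward, angle)  -- placeholder helper
      if not self._nav:raycast(pos, pos + dir * LOOKAHEAD) then
        clear_directions[#clear_directions + 1] = dir
      end
    end
    bb.clear_directions = clear_directions
  end
end
```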
Now, I won’t go into too many of the specific problems I’ve encountered in this post — I’ll save those for later. But just to give you an idea of the kind of problems that have come up for me so far:
To figure out if the car should go left or right, we need to know two things: What’s our destination, and how soon will we hit that wall?
If we’re really close to the wall, it’s time to hit the brakes and turn wildly to either side. I’m not sure if skidding and hitting the wall sideways actually makes the impact any milder, but I’m pretty sure it looks a lot cooler!
If we’re slightly further away, a second out perhaps, then it probably makes sense to turn left, towards the nearest blue arrow, so that we have a chance of missing the wall while still keeping some of the vehicle’s speed and momentum.
If we’re even further away and have just noticed that, hey, there’s a wall up ahead and we should probably correct our course, but there’s no real risk of hitting it yet, then we should probably turn to the right, because that’s the direction we need to go in to get to the large X — our destination. We just have to make sure we don’t hit the extrusion in the wall to the right; if we think we might, it’s probably better to turn left anyway and then swing back around towards the right. Maybe.
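In pseudo-Lua, that decision boils down to something like the following. The thresholds and helpers (brake, steer_towards, is_clear) are invented for illustration, not lifted from my actual code:

```lua
-- Avoid-or-steer decision, roughly. All thresholds are made-up examples.
local time_to_impact = obstacle_distance / math.max(current_speed, 0.01)

if time_to_impact < 0.5 then
  -- Too late to avoid it: hit the brakes and yank the wheel.
  brake(1.0)
  steer_towards(nearest_clear_direction)
elseif time_to_impact < 1.5 then
  -- About a second out: aim for the nearest clear gap and keep our momentum.
  steer_towards(nearest_clear_direction)
else
  -- Plenty of room: head towards the destination, unless that line clips
  -- the wall, in which case fall back to the nearest clear gap.
  if is_clear(direction_to_destination) then
    steer_towards(direction_to_destination)
  else
    steer_towards(nearest_clear_direction)
  end
end
```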
Writing the logic to do this is… you know, not SUPER hard — I think I’m nearly there. But tweaking it, calculating when you need to turn, and knowing how much you can turn before you skid… that’s, I think, the harder problem, and one I’ll hopefully revisit in another post.
Additional Perception Tasks
In addition to those obstacle raycasts, the perception component also checks whether there’s a straight line to the target, and stores where the target was the last time we did a pathfinding calculation to it.
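Again as a rough sketch, with the same placeholder names as before:

```lua
-- Line-of-sight check plus "where was the target when we last pathed to it".
function PerceptionComponent:update_target_info(path_was_recomputed)
  local target_pos = self._target:position()

  -- Is there a straight, unblocked line across the navmesh to the target?
  self._blackboard.has_line_of_sight =
    not self._nav:raycast(self._vehicle:position(), target_pos)

  -- Paths are only recomputed now and then; remembering where the target
  -- was at that point tells us how stale the current path is.
  if path_was_recomputed then
    self._blackboard.target_position_at_last_path = target_pos
  end
end
```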
In the future I imagine that the component will be less “hard coded”, so that different behaviors can specify what kind of perception checks they want done.
Hard Things in Computer Science #2: Naming Things
Sidenote: I’m having an internal debate. I’m not sure if perception is the right name for this component. I’m considering awareness as an alternative; it’s a bit more… vague? A bit more open. Strictly speaking, perception, as I see it, collects data based on whatever senses are available (for a regular AI that could be hearing and sight), whereas awareness could also include keeping track of various things over time: something that maintains knowledge rather than just reporting “this is what I’m seeing right now”. Time will tell.
The Blackboard Component
The blackboard is simply storage for the various components. Perception, behavior, and steering can all read and write from it. Perception mostly (only?) writes to it, and steering mostly (only?) reads from it. It doesn’t do anything clever — it’s a simple Lua table with no functions whatsoever.
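In spirit, it’s nothing more than this (the keys are just examples of the sort of thing that ends up in it):

```lua
-- The blackboard really is just a table. Components read and write
-- fields on it directly; there's no API in between.
local blackboard = {}

-- Perception writes...
blackboard.obstacle_ahead = true
blackboard.obstacle_distance = 14.2
blackboard.has_line_of_sight = false

-- ...behavior and steering read (and occasionally write).
if blackboard.obstacle_ahead and blackboard.obstacle_distance < 5 then
  -- e.g. switch to some kind of emergency-avoidance behavior
end
```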
You might ask, as I have done myself, why have a generic blackboard? It seems like we have this really sweet pipeline, where perception can feed into behavior, behaviors feed steering, etc.
Why doesn’t the perception component put stuff in the behavior component that only it can access?
Partly because I’m lazy. It frees me up, mentally, to not have to think about exactly where I store things and what can access them. While writing the AI, I’ve gone back and forth several times on whether steering needs access to a particular value, or a behavior does, or both. Shuffling around where things are stored and read from is currently low on the list of things I want to spend time on. I’d rather be able to refactor that kind of thing without also having to think about reorganizing my data.
It’s a similar reason to why I only have a few unit tests at the moment. The design of the components and their interactions is so much in flux that, had I started by writing tests for them, I would have had to rewrite them several times over.
Once I feel that my foundation of behaviors and steering logic is fairly solid, it’ll be quite easy to refactor it, should I want to.
The Navigation Component
Autodesk Navigation is a part of Stingray that exposes various core AI functionality. I use three things from it currently:
- Navigation mesh generation: Navigation can take basically any level and figure out where there’s ground. It creates one or more navmeshes for you, which you can then use to do…
- A-star queries: Can I go from here to there, and if so, what’s the shortest path?
- Raycasts and diskcasts: To figure out if there’s something in front of the vehicle or if there’s something between the vehicle and the target.
So the navigation component, naturally, wraps Navigation. Well, kind of naturally. I might refactor it at some point — I’m not sure it makes sense to have it as a component. But the functionality will largely stay the same.
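As a sketch, the wrapper is about as thin as it sounds. The Navigation.* calls below are stand-ins; the real API has different names and more parameters:

```lua
-- Thin wrapper around the engine's navigation queries. The Navigation.*
-- function names are stand-ins, not the actual API.
NavigationComponent = NavigationComponent or {}
NavigationComponent.__index = NavigationComponent

function NavigationComponent.new(nav_world)
  return setmetatable({ _nav_world = nav_world }, NavigationComponent)
end

function NavigationComponent:find_path(from, to)
  -- A-star query: returns a list of waypoints, or nil if unreachable.
  return Navigation.find_path(self._nav_world, from, to)
end

function NavigationComponent:raycast(from, to)
  -- Is there something on the navmesh blocking the straight line from..to?
  return Navigation.raycast(self._nav_world, from, to)
end

function NavigationComponent:disk_cast(from, direction, distance)
  -- Sweep a disk along the navmesh; returns a hit flag and hit distance.
  return Navigation.disk_cast(self._nav_world, from, direction, distance)
end
```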
Navigation has a bunch of other features as well that I definitely intend to use. For example, smart objects:
Right now, the AI does not know that it can jump over gaps like that. To the AI, a gap in the navmesh just looks like the navmesh ends. For Vermintide, I wrote a tool that analyzed the levels and figured out where the rats could jump up and down. I think I could do something similar for vehicles: calculate where there’s a jump, and calculate how fast you have to go off it in order to make it to the other side. Smart objects allow A-star searches to know that there’s a way to get from one part of the navmesh to another.
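The “how fast” part is mostly projectile math. Assuming level ground on both sides, no air resistance, and a ramp at angle θ, the range of a projectile launched at speed v is v²·sin(2θ)/g, so the minimum speed to clear a gap of width d is v = √(d·g / sin(2θ)). In Lua (numbers picked for the example, not measured from any level):

```lua
-- Minimum launch speed (m/s) to clear a gap of `gap_width` meters off a
-- ramp at `ramp_angle` radians, assuming level ground on both sides and
-- ignoring air resistance.
local function min_jump_speed(gap_width, ramp_angle, gravity)
  gravity = gravity or 9.82
  return math.sqrt(gap_width * gravity / math.sin(2 * ramp_angle))
end

-- A 10 m gap off a 20 degree ramp needs roughly 12.4 m/s (about 45 km/h).
print(min_jump_speed(10, math.rad(20)))
```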
In order to do this, I’ll need to stop using A-star searches directly and instead use Navigation’s “navbots”, because that is currently the only way to use smart objects. There is no way to ask an A-star search “Do you pass over any smart objects, and if so, where?”. I feel like their navbots aren’t perfectly suited to vehicle-style AIs, so I’m kind of holding off on this for now.
One thing that’s a bit tricky, and that I have yet to solve nicely, is making the vehicle turn around when its target is behind it.
Of course, figuring out a reasonable path isn’t always trivial: there could be obstacles in the way, or the vehicle could be in a one-way tunnel that the AI has to drive out of before turning back. That sort of situation can probably be avoided by level design, thankfully.
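Detecting the situation is at least the easy part: a dot product between the vehicle’s forward vector and the direction to the target tells you the target is behind. What to do about it is the hard part. Something like this, with placeholder vector helpers and a threshold I just made up:

```lua
-- Is the target behind us? Vector helpers and the threshold are placeholders.
local to_target = normalize(target_position - vehicle_position)
local facing = dot(vehicle_forward, to_target)

if facing < -0.5 then
  -- Target is roughly behind us: queue up some kind of turn-around
  -- maneuver (a wide U-turn, a three-point turn, or a handbrake turn).
end
```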
This concludes the second part of this introductory series. From now on, I’ll write more about specific subjects I encounter as I work on this project. :) Since I wrote the last post I’ve already realized I need to refactor my behavior trees, so…