What does an agent-based model look like?
Agent-based models are simulations of agents (individuals or collectives) in environments (real or imagined). Agents are programmed to interact autonomously with each other and with their environment. The goal of agent-based models is typically to assess how agents affect a complex system as a whole, and to detect any emergent features that result.
Agent-based models are built from the ground up, rather than top-down. Individual features must be created to represent the agents found in the real world, the properties that describe them, and the entities present in their environments. Rules are then attached describing how agents and the environment interact, based on the properties present.
An example ABM build
One’s conception of a system can be simplistic to begin with, and iteratively improved over time as more resources become available, or as understanding of the system deepens.
First Attempt
A quick first attempt at a stripped-down ‘dumb’ predator/prey model might look something like this:
Environment: a flat surface of 100x100 meters.
Agents: 10 sheep and 2 wolves are randomly placed in the environment.
Properties: these agents have four properties each.
- Alive: a boolean (yes/no) value, which simply records whether a sheep or wolf is currently alive or not. If yes, the animal appears in the environment. If no, in our simple model it disappears.
- Direction: records the direction the animal is currently facing. A random value between 0° and 360°.
- Speed: let’s say sheep are always moving forwards at 0.5 meters per second, and wolves at 1 meter per second.
- Distance: animals will move in a set Direction at the set Speed for a distance or duration set here (e.g. 10m, or 10 seconds) before readjusting their course.
Rules: in ABMs there are generally lots of rules governing the environment, and agents’ interactions in it. To keep things simple, we’ll set just a few basic rules in our initial model.
- Only agents whose Alive tag is ‘Yes’ will appear in our environment.
- When a wolf comes into contact (<2m) with a sheep, it ‘eats’ that sheep. In our very basic system, all this means is that the sheep will cease to exist. This is an example of a rule which describes what happens when two different agent-types collide.
- Animals of all types will move in their Direction at their Speed for their set Distance, after which a new Direction will be randomly chosen.
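Putting the pieces above together, here’s roughly what this first attempt could look like in code. It’s a minimal sketch, assuming a one-second timestep and a plain Python representation; names like Agent, spawn, and step are illustrative choices of ours, not part of any particular ABM framework.

```python
import math
import random
from dataclasses import dataclass

WORLD_SIZE = 100.0   # the flat 100x100 m surface
EAT_RADIUS = 2.0     # a wolf within 2 m of a sheep eats it
TIMESTEP = 1.0       # seconds per simulation tick (an assumption)

@dataclass
class Agent:
    kind: str                 # "sheep" or "wolf"
    x: float
    y: float
    alive: bool = True        # the Alive property
    direction: float = 0.0    # the Direction property, in degrees
    speed: float = 0.5        # the Speed property, metres per second
    travelled: float = 0.0    # metres covered since the last course change
    leg_length: float = 10.0  # the Distance property: re-pick Direction every 10 m

def spawn(kind: str, speed: float) -> Agent:
    """Place an agent at a random position, facing a random direction."""
    return Agent(kind=kind,
                 x=random.uniform(0, WORLD_SIZE),
                 y=random.uniform(0, WORLD_SIZE),
                 direction=random.uniform(0, 360),
                 speed=speed)

# 10 sheep and 2 wolves, randomly placed
agents = [spawn("sheep", 0.5) for _ in range(10)] + [spawn("wolf", 1.0) for _ in range(2)]

def step(agents):
    # Rule: move in Direction at Speed; after covering Distance, pick a new Direction
    for a in agents:
        if not a.alive:
            continue  # agents whose Alive tag is 'No' disappear from the environment
        rad = math.radians(a.direction)
        a.x = min(max(a.x + math.cos(rad) * a.speed * TIMESTEP, 0.0), WORLD_SIZE)
        a.y = min(max(a.y + math.sin(rad) * a.speed * TIMESTEP, 0.0), WORLD_SIZE)
        a.travelled += a.speed * TIMESTEP
        if a.travelled >= a.leg_length:
            a.direction = random.uniform(0, 360)
            a.travelled = 0.0

    # Rule: when a wolf comes within 2 m of a sheep, it 'eats' that sheep
    for wolf in [a for a in agents if a.kind == "wolf" and a.alive]:
        for sheep in [a for a in agents if a.kind == "sheep" and a.alive]:
            if math.dist((wolf.x, wolf.y), (sheep.x, sheep.y)) < EAT_RADIUS:
                sheep.alive = False

# run for a simulated hour
for _ in range(3600):
    step(agents)
```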
Improving our model
Our quick’n’dirty first attempt isn’t bad, but it’s missing some key things.
What do sheep eat? Why don’t sheep and wolves reproduce? Does anybody even care about staying alive, or eating? Spoiler alert: they don’t, they can’t, and no. Clearly our model needs some work.
As our understanding of the model’s features and dynamics improves, and further resources become available, we can update the model to better reflect the real world.
Let’s start by adding a new property, Satiation, and imagining that it is measured on a 0–50 scale, with all animals starting on 25 points.
We’ll also introduce Grass into our environment, so sheep can eat, too.
We need lots more rules:
- When a sheep is eaten by a wolf, that wolf will receive 15 points of Satiation.
- When sheep get within <2m of grass, they will consume it, and also receive 15 points of Satiation.
- Every second a wolf or sheep does not eat, their Satiation decreases by 1.
- There will be 500 units of grass in our 100x100 meter area.
- Grass should regrow over time. Grass will re-spawn in the same spot 30 seconds after consumption. Note: because grass is nonreactive (besides being depletable), and has no agency, it is considered part of the environment, and not an agent.
- Agents should die if they don’t eat. If any agent’s Satiation decreases to zero, they die.
- Agents should always seek to maximise their satiation by hunting out the nearest source of food. As such, Direction will no longer be random, but determined by the location of the nearest food source, and continuously adjusted (rather than calculated once every 10 seconds).
- Immediately after eating a sheep, let’s say a wolf feels a bit bloated. Let’s introduce another rule which places a constraint on the wolves’ behaviour, and prevents them from eating any more sheep within 5 seconds of their previous meal. This gives wolves time to digest their food — and helps the poor sheep out a bit.
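Pulling these new rules together, a rough sketch of the hunger and food logic might look something like the following. Only the numbers come from the rules above; the 50-point Satiation cap, the one-second tick, and the field and helper names are our own assumptions for illustration.

```python
import math
import random
from dataclasses import dataclass

WORLD_SIZE = 100.0
EAT_RADIUS = 2.0
SATIATION_MAX = 50       # Satiation is measured on a 0-50 scale
MEAL_VALUE = 15          # points gained from eating a sheep or a unit of grass
GRASS_REGROW_SECS = 30   # grass respawns in the same spot 30 s after consumption
WOLF_DIGEST_SECS = 5     # wolves can't eat again within 5 s of a meal

@dataclass
class Grass:
    x: float
    y: float
    regrow_timer: int = 0  # 0 means edible; counts down after being eaten

@dataclass
class Animal:
    kind: str              # "sheep" or "wolf"
    x: float
    y: float
    direction: float = 0.0 # degrees; now steered toward food rather than random
    satiation: int = 25    # everyone starts on 25 points
    alive: bool = True
    digest_timer: int = 0  # wolves only: seconds until they can eat again

def nearest(animal, targets):
    """The closest live sheep or edible grass patch, or None."""
    targets = list(targets)
    if not targets:
        return None
    return min(targets, key=lambda t: math.dist((animal.x, animal.y), (t.x, t.y)))

def tick(animals, grass):
    for a in animals:
        if not a.alive:
            continue
        if a.kind == "wolf":
            a.digest_timer = max(0, a.digest_timer - 1)
            target = nearest(a, (s for s in animals if s.kind == "sheep" and s.alive))
        else:
            target = nearest(a, (g for g in grass if g.regrow_timer == 0))

        ate = False
        if target is not None:
            # Direction is no longer random: continuously adjust toward the nearest food
            a.direction = math.degrees(math.atan2(target.y - a.y, target.x - a.x)) % 360
            if math.dist((a.x, a.y), (target.x, target.y)) < EAT_RADIUS:
                if a.kind == "wolf" and a.digest_timer == 0:
                    target.alive = False                     # the sheep is eaten...
                    a.digest_timer = WOLF_DIGEST_SECS        # ...and the wolf feels bloated
                    a.satiation = min(SATIATION_MAX, a.satiation + MEAL_VALUE)
                    ate = True
                elif a.kind == "sheep":
                    target.regrow_timer = GRASS_REGROW_SECS  # the grass is consumed
                    a.satiation = min(SATIATION_MAX, a.satiation + MEAL_VALUE)
                    ate = True

        if not ate:
            a.satiation -= 1       # every second an animal doesn't eat, Satiation drops by 1
        if a.satiation <= 0:
            a.alive = False        # agents die if they don't eat

    for g in grass:
        if g.regrow_timer > 0:
            g.regrow_timer -= 1

# 500 units of grass scattered across the 100x100 m field
grass = [Grass(random.uniform(0, WORLD_SIZE), random.uniform(0, WORLD_SIZE))
         for _ in range(500)]
```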
Things are getting pretty complicated now, but our model still looks nothing like reality.
Further modifications
We’ve solved some problems, and introduced a few new ones. Welcome to Agent-Based Modelling!
Animals now have an incentive to eat, but seem to have unlimited line-of-sight, allowing them to home in on sheep and grass way too far away. And sheep still possess no particular desire to avoid being eaten (which honestly you’d think they might). Poor little bleaters. Oh, and *still* nobody’s having sex. It’s all terrible.
Clearly in the real-world nobody has perfect vision over infinite distance, sheep would want to avoid wolves, and everybody would be having sex with agents of their own type. Let’s assume there’s also a farmer with a shotgun, patrolling the farm, who can shoot the wolves if he spots them (introducing a third agent-type).
So what new rules do we need?
- Let’s give all agents (sheep, wolves, farmer) a line-of-sight of 25m (180° in the Direction they are facing).
- Now let’s remove the borders on our model, and make the area covered unbounded.
- Let’s give the sheep a desire to stay alive, but get clever about it. Sheep will now run away from wolves when they get too close (<10m).
- In practice, running away from a wolf that gets too close should mean automatically adjusting Direction to move in the opposite direction.
- We might introduce additional properties such as Hearing, which when active allows sheep to detect wolves approaching them 180° to their rear. But let’s make it probabilistic, on the assumption that sheep might sometimes not be paying close attention. A sheep’s Hearing will only be active 50% of the time.
- We need a rule that explains how agents are able to reproduce. Let’s say that if two agents of the same type spend at least 15 seconds within 10 meters of each other, they will reproduce, creating another agent of the same type.
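As a rough illustration of how these perception and reproduction rules might be implemented: the angular arithmetic for the forward and rear arcs, and the pairwise proximity timer, are our own reading of the rules rather than a canonical recipe.

```python
import math
import random
from dataclasses import dataclass
from itertools import combinations

SIGHT_RANGE = 25.0      # metres
FLEE_RANGE = 10.0       # sheep run when a wolf is this close
MATE_RANGE = 10.0       # same-type agents must stay this close...
MATE_SECS = 15          # ...for this many seconds to reproduce
HEARING_PROB = 0.5      # sheep are only listening half the time

@dataclass
class Animal:
    kind: str
    x: float
    y: float
    direction: float = 0.0   # degrees

def bearing_to(a, other):
    return math.degrees(math.atan2(other.y - a.y, other.x - a.x)) % 360

def can_see(a, other):
    """True if 'other' is within 25 m and inside a's forward 180-degree arc."""
    if math.dist((a.x, a.y), (other.x, other.y)) > SIGHT_RANGE:
        return False
    offset = (bearing_to(a, other) - a.direction + 180) % 360 - 180
    return abs(offset) <= 90

def can_hear(a, other):
    """Probabilistic Hearing: detects something in the rear arc 50% of the time."""
    offset = (bearing_to(a, other) - a.direction + 180) % 360 - 180
    return abs(offset) > 90 and random.random() < HEARING_PROB

def sheep_flee(sheep, wolves):
    for wolf in wolves:
        close = math.dist((sheep.x, sheep.y), (wolf.x, wolf.y)) < FLEE_RANGE
        if close and (can_see(sheep, wolf) or can_hear(sheep, wolf)):
            # Run directly away: face the opposite direction to the wolf
            sheep.direction = (bearing_to(sheep, wolf) + 180) % 360
            return True
    return False

def update_mating(animals, together_secs):
    """Track how long each same-type pair has stayed within 10 m; spawn at 15 s."""
    newborns = []
    for a, b in combinations(animals, 2):
        if a.kind != b.kind:
            continue
        pair = (id(a), id(b))
        if math.dist((a.x, a.y), (b.x, b.y)) <= MATE_RANGE:
            together_secs[pair] = together_secs.get(pair, 0) + 1
            if together_secs[pair] == MATE_SECS:
                newborns.append(Animal(a.kind, (a.x + b.x) / 2, (a.y + b.y) / 2))
        else:
            together_secs[pair] = 0
    return newborns

# example: two wolves standing near each other accumulate 'together' time each tick
wolves = [Animal("wolf", 10, 10, 0), Animal("wolf", 15, 12, 90)]
together = {}
for _ in range(MATE_SECS):
    wolves += update_mating(wolves, together)
print(len(wolves))  # 3, once the pair have been close for 15 simulated seconds
```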
We might also modify earlier assumptions as models become more complex.
- Agents of the same type are not necessarily homogeneous: we have talked about ‘agent types’ as if all agents within a type share the same properties and are subject to the same rules. In practice, not all wolves and sheep move at the same pace, and reproduction requires more than one gender of sheep and wolf. Our agents need more properties, and new properties like Gender will have implications for reproduction. Gay sheep are great, and gay giraffes might be the norm, but same-sex couples can’t (yet) reproduce, so we shouldn’t assume that all agents of the same type are homogeneous and can.
- You might also want to say that newborn animals can’t reproduce, introducing a MinimumAge property governing how old they have to be before engaging in nookie. This is not a moral consideration (although one could argue it should be), but a practical one reflecting how the world really works.
- Interactions between agents may not be guaranteed: the chance of two sheep mating may not be 100%. That sheep might not actually be straight, and Ms Wolf may not think Mr Wolf is very attractive.
- Property values can be made dynamic and conditional on each other. Properties such as Speed may be made conditional on Satiation. The lower a sheep or wolf’s Satiation, the slower it might move.
- Sheep may gain an ability to ‘bleat’ (make cute but panicked sheep-noises) and warn other sheep of an approaching predator, even when those other sheep lack line-of-sight (so long as they are within sufficient distance of the bleating sheep to hear it).
- Rules of behaviour may be learned: sheep might not run away from wolves if they do not know wolves pose a threat. Sheep may only run away from wolves once they have seen a wolf attack a sheep (or after seeing it happen multiple times, in the case of a sheep with low Intelligence — a new property which could govern lots of things).
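Sketching a couple of these refinements in code, the per-agent properties might look something like the following. The specific thresholds (a breeding age, a mating probability, the speed formula) are placeholder assumptions of ours, purely for illustration.

```python
import random
from dataclasses import dataclass

MIN_BREEDING_AGE = 60      # seconds old before an animal can reproduce (assumed value)
MATING_PROBABILITY = 0.3   # interactions are no longer guaranteed (assumed value)

@dataclass
class Animal:
    kind: str
    gender: str                      # e.g. "female" / "male"
    age: float = 0.0                 # seconds since birth
    satiation: int = 25
    base_speed: float = 0.5
    intelligence: float = 1.0        # could govern how quickly fear of wolves is learned
    has_seen_attack: bool = False    # learned behaviour: fear only after witnessing one

    @property
    def speed(self) -> float:
        # Speed is now conditional on Satiation: hungrier animals move more slowly
        return self.base_speed * (0.5 + 0.5 * self.satiation / 50)

def can_mate(a: Animal, b: Animal) -> bool:
    return (a.kind == b.kind
            and a.gender != b.gender                     # same-sex pairs can't reproduce
            and a.age >= MIN_BREEDING_AGE
            and b.age >= MIN_BREEDING_AGE
            and random.random() < MATING_PROBABILITY)    # and even then it's not guaranteed

def should_flee(sheep: Animal) -> bool:
    # Flight is learned, not innate: only sheep that have seen an attack run away
    return sheep.has_seen_attack
```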
How ABMs Grow
New rules and assumptions can be combined, additional agent-types can be further introduced to the model, and environments may be made more realistic. Boundaries and borders of environments might be expanded, or eliminated entirely.
Real-world models tend to combine many more agent-types, with many more individual properties, and much more complex environments. It is important to remember that all models are abstractions; another word exists for recreating a system perfectly, in all its complexity: cloning.
As outlined above, even relatively simple problems, such as that of a two-species predator/prey model, can quickly become complicated by the sheer number of considerations involved.
How do I actually create an ABM?
A wide range of tools exist for building agent-based models. Software like AgentSheets tries to make the experience of ABMing as easy as creating an Excel spreadsheet. More powerful tools like NetLogo allow for more impressive simulations, but come with a steeper learning curve and little ability to collaborate with others. And whilst custom-coding simulations allows for the greatest flexibility and power of all, it comes with the highest barriers to entry, and often results in work being siloed and unable to interoperate with other models.
At SOHO, we’re building HASH, a new type of ABM platform that aims for the best of all worlds, and promises to allow real-time modelling and simulation of phenomena as they occur (in a way that no existing solutions can).
To register your interest, find out more, and get access to the free Beta, visit hash.ai.
We’re also growing our team developing the software, so if you’re interested, check out our current openings.