Decoding Altruism

Issac Roy
Cognitive NeuroEconomics @ UCSD
10 min read · Feb 26, 2024
Primer: Simulating the Evolution of Teamwork

Humans are strange creatures, and not just because we have language and the ability to use tools; after all, many other animals have been shown to have similar abilities. Dolphins can communicate with each other through clicks and whistles, and even crows can use rocks to get food. What makes us truly unique is altruism.

Altruism is the practice of putting the well-being of another organism above one's own. It breaks our economic models of human behavior and is something that not even our closest relatives, the chimpanzees, share with us.

Monkeys and the Economy

Most economic models of human behavior are founded on the assumption that humans are rational creatures who seek to maximize their resources. While these models are wildly successful in predicting many aspects of society, such as why we trade, they all fail to account for altruism. After all, it doesn’t make sense for an organism to prioritize another’s well-being when evolution is all about survival of the fittest. Because of this, humanity’s altruistic behavior is truly bizarre from an economic point of view.

Not even other primates, including chimpanzees, who share nearly 99% of our DNA, have this trait in common with us.

COGS 1 lecture by Federico Rossano

In a study of chimpanzees sharing vs. stealing food, chimps were found to steal food from others 88.8% of the time rather than request it. Looking at the resistance column, this is strange, because the chimps who request meet much less resistance (12%) than the ones who steal (38%). Despite the increased chance of resistance, most still resort to stealing because of the higher success rate: 76.2% vs. 46.7%. So now the question is, why don't humans do the same? After all, we are almost genetically identical to chimps.

Simulating Teamwork

My main inspiration for this article was a simulation of teamwork created by Primer on YouTube, but I felt some things were missing, so I remade the simulation from scratch and added some extra features such as mutations and “intelligence”.

Primer: Simulating the Evolution of Teamwork

In Primer’s video, he simulated blobs who go out every day, try to find food, and reproduce. A simple, yet accurate depiction of natural selection. The rules for this simulation are as follows:

The Rules:

If a blob arrives at a tree by itself, it gets to eat both fruits on the bottom layer of the tree. If two team blobs (colored blue) arrive at a tree, they each eat one fruit from the bottom layer, but they also work together to shake the tree and share the two fruits at the top layer. However, shaking the tree costs them some energy, so each blob gets a net of only 7/4 fruits.

If two solo blobs (colored red) arrive at a tree, they each eat one fruit from the bottom layer, but they also lose some energy fighting to steal each other's food, so in the end each blob gets a net of only 3/4 fruits.

Finally, if a solo blob and a team blob arrive at a tree, the solo blob quickly eats its share from the bottom layer, but also steals half of the team blob's share, so the solo blob leaves with a net of 6/4 fruits while the team blob gets only 2/4.

As for reproduction, each blob will produce one child for each whole fruit it ate, and the remainder is converted into a probability of one more child. For example, if a blob ate 7/4 fruits, it has a 75% chance of having two children and a 25% chance of having one child. For now, each blob's child will be of the same type as its parent. After reproducing, the blob dies.
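This reproduction rule is easy to sketch in code. A minimal Python illustration (not necessarily how my notebook implements it):

```python
import random

def num_children(fruits_eaten: float) -> int:
    """Whole fruits become guaranteed children; the fractional
    remainder becomes the probability of one extra child."""
    guaranteed = int(fruits_eaten)           # e.g. 7/4 -> 1 child
    extra_prob = fruits_eaten - guaranteed   # e.g. 7/4 -> 0.75
    return guaranteed + (1 if random.random() < extra_prob else 0)
```

A blob that ate 7/4 fruits gets one guaranteed child plus a 75% chance of a second, matching the example above.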

Game Theory:

These rules and parameters can be changed in the future, but what matters is the table that we get from this. In Game Theory, this type of table is called a reward matrix. Our rules for this simulation can be summarized into this reward matrix:
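Written out from the rules above (our strategy on the left, our opponent's on top), the payoffs in fruits can be expressed as a small lookup table:

```python
# Payoff (in fruits) for the row player against the column player,
# reconstructed from the rules above.
REWARD = {
    ("team", "team"): 7/4,  # share both layers, minus the shaking cost
    ("team", "solo"): 2/4,  # half our bottom-layer share gets stolen
    ("solo", "team"): 6/4,  # our fruit plus half the team blob's share
    ("solo", "solo"): 3/4,  # fighting costs energy
}
# A blob alone at a tree simply eats both bottom-layer fruits (2.0).
```

Note that each strategy is the best response to itself: team beats solo against a team blob (7/4 > 6/4), and solo beats team against a solo blob (3/4 > 2/4).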

Let's say that our strategy is on the left of the matrix and our opponent's is on the top. Then the numbers in the matrix represent our reward for playing a certain strategy against our opponent. In addition, the arrows indicate the better strategy, given that we know our opponent's strategy. In Game Theory, we can choose our strategy based on our opponent's, but in Evolutionary Game Theory, we cannot, as our actions are dictated by our genes.

Simulations:

Let’s start with one simulation for 50 days to see how these blobs evolve.

In this simulation, we start with 5 team blobs and 5 solo blobs in a world with 50 trees. As the simulation runs, we can see how there are about 50% team blobs and 50% solo blobs for about 10 days, but after that, the solo blobs start to take over and the team blobs go extinct after only 30 days.
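The whole daily loop can be sketched compactly. This is a simplified reconstruction (random tree choice, at most two blobs feeding per tree), not the exact code from my repo:

```python
import random

# Payoffs in fruits, reconstructed from the rules above.
REWARD = {("team", "team"): 7/4, ("team", "solo"): 2/4,
          ("solo", "team"): 6/4, ("solo", "solo"): 3/4}
ALONE = 2.0  # a lone blob eats both bottom-layer fruits

def one_day(population, num_trees=50):
    """Blobs pick random trees, eat, reproduce, and die."""
    trees = {}
    for blob in population:
        trees.setdefault(random.randrange(num_trees), []).append(blob)
    next_gen = []
    for blobs in trees.values():
        pair = blobs[:2]  # simplification: at most two blobs eat per tree
        if len(pair) == 1:
            fruits = [ALONE]
        else:
            a, b = pair
            fruits = [REWARD[(a, b)], REWARD[(b, a)]]
        for blob, f in zip(pair, fruits):
            # Whole fruits are guaranteed children; the remainder is
            # the probability of one more. Children copy the parent.
            kids = int(f) + (1 if random.random() < f - int(f) else 0)
            next_gen.extend([blob] * kids)
    return next_gen

random.seed(1)  # illustrative seed; each run differs without it
pop = ["team"] * 5 + ["solo"] * 5
for day in range(50):
    pop = one_day(pop)
```

The tree count caps the population naturally: with 50 trees and at most two eaters per tree, no generation can exceed 200 blobs.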

Now let’s try running 10 simulations for 50 days and plot the data to see how the blobs evolve on average. Each line is one simulation and each simulation has a different number of team blobs and solo blobs to start with. In addition, the background colors will indicate the averages over time.

A strategy is called Evolutionarily Stable if, once it dominates the population, it cannot be invaded by the other strategy. As we see in the simulated graph below, both strategies are Evolutionarily Stable: whichever strategy gets the lead stays dominant over time. However, there is a turning point where one strategy becomes favored over the other.

An interesting observation from this graph is how much faster the solo blobs can completely dominate: when they have the advantage, it takes them at most 25 days to wipe out the team blobs. When the team blobs have the advantage, they take much longer to kill off the solo blobs. In other words, solo blobs with a numbers advantage ruthlessly wipe out the team blobs, while team blobs keep cooperating with the solo blobs even when they hold the numbers advantage.

As for tweaking the reward matrix, Primer goes into depth on the different types of equilibria that different reward matrices produce, so I recommend watching his video for more on that. This article will cover my findings from introducing new features into the simulation, such as mutations and intelligent blobs.

Mutations:

Now let’s introduce mutations to these blobs so that they have some chance of mutating into the other type of blob. For now, let’s set the mutation rate to 1% so each child will have a 1% chance of mutating to the opposite type of blob.
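Mutation slots into the reproduction step with a couple of lines. A minimal sketch (names are illustrative):

```python
import random

MUTATION_RATE = 0.01  # each child has a 1% chance of flipping type

def child_type(parent: str) -> str:
    """A child's strategy: the parent's, unless it mutates."""
    if random.random() < MUTATION_RATE:
        return "solo" if parent == "team" else "team"
    return parent
```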

While both strategies are still Evolutionarily Stable, there's much more noise. In reality, 1% is a relatively high mutation rate, but even so, each strategy can stay Evolutionarily Stable. If you want to play with different mutation rates and know Python, feel free to check out my GitHub repo.

Intelligence:

Now let's shift the simulation from Evolutionary Game Theory, where actions are dictated by genetics, toward classical Game Theory, where we get to choose our actions. To do this, I will introduce a new type of green blob which is “intelligent” because it can distinguish between team blobs and solo blobs. Perhaps there are physical differences between the team and solo blobs, or perhaps the blobs are colorblind but the intelligent blob can pick up on the mannerisms of its opponent and choose its strategy accordingly. Ultimately, what matters is that the intelligent blob can choose its actions. Through this green blob, we take one more step towards human behavior.

These green blobs will choose the strategy that is most advantageous to them. For example, if it encounters a team blob, it will choose the team strategy and if it encounters a solo blob, it will play the solo strategy. If the green blob encounters another green blob, let’s say that they’re able to communicate for now, so they’ll both choose the team strategy.
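The green blob's decision rule follows directly from the reward matrix. A sketch, with communication between greens as a flag:

```python
def choose_strategy(opponent_type: str, can_communicate: bool = True) -> str:
    """Decision rule for a green ("intelligent") blob; names illustrative.

    It best-responds to known team and solo blobs; against a fellow
    green blob it cooperates only if the two can communicate."""
    if opponent_type == "team":
        return "team"  # cooperating (7/4) beats exploiting (6/4)
    if opponent_type == "solo":
        return "solo"  # fighting back (3/4) beats being exploited (2/4)
    # Opponent is another green blob.
    return "team" if can_communicate else "solo"
```

With communication, two greens both play team and each nets 7/4; without it, they fall back to the solo-vs-solo payoff of 3/4 each.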

To run the simulation, I will remove the individual simulation lines because they're starting to clutter the graph, so we'll just look at the averages of 10 simulations over 100 days. Let's start with no green blobs and varying amounts of the other two, but set the mutation rate so that each child has a 2% chance of switching to a different strategy; since we start with no green blobs, mutation is the only way intelligent blobs can enter the population.

Interestingly, the solo blobs, which would previously ruthlessly eradicate the team blobs, are now driven nearly to extinction very quickly. One explanation is that the intelligent blobs choose to play the team strategy more often than the solo strategy, so through communication with each other and collaboration with team blobs, they are able to wipe out the solo blobs.

What’s even more interesting though is what happens when we don’t let the intelligent blobs communicate with each other. This causes the intelligent blobs to lose trust in each other and hence, they will resort to the solo strategy when they encounter another intelligent blob. This creates a situation similar to the prisoner’s dilemma and is in a sense more realistic because as humans, we can’t always predict others’ actions, and resorting to solo strategies may be more beneficial.

Even though the intelligent blobs are now playing more solo strategies than team strategies, the team blobs win out, despite nothing else changing from the original scenario.

Now let's try modifying the reward matrix so the solo blobs have a chance. We'll increase the cost of teamwork, making the teamwork strategy what's called a Weak Nash Equilibrium: against a team blob, switching strategies now yields exactly the same payoff, so there's no benefit (but also no loss) to switching. With this small change, our reward matrix looks like this and the simulation changes dramatically.

After running 10 simulations over 100 days, teamwork quickly dies out and the solo blobs and intelligent blobs are the only ones left. Now let’s make the solo strategy into a Weak Nash Equilibrium as well by increasing the cost of selfishness. Now our reward matrix looks like this:
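The exact modified payoffs aren't spelled out above, but the definitions pin them down: a weak equilibrium for teamwork means the team-vs-team payoff drops to match the solo-vs-team payoff (6/4), and a weak equilibrium for selfishness means the solo-vs-solo payoff drops to match the team-vs-solo payoff (2/4). Assuming only those two costs change, the final matrix is:

```python
# Payoffs with both strategies as Weak Nash Equilibria (reconstructed).
WEAK_REWARD = {
    ("team", "team"): 6/4,  # teamwork cost raised: ties solo-vs-team
    ("team", "solo"): 2/4,
    ("solo", "team"): 6/4,
    ("solo", "solo"): 2/4,  # fighting cost raised: ties team-vs-solo
}
```

Notice that with both changes, a blob's payoff depends only on its opponent's type, not on its own choice, which is consistent with all three strategies doing about equally well.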

After running 10 simulations over 200 days with both strategies as Weak Nash Equilibria, all three strategies do about equally well on average. This shows just how sensitive the environment is to small changes.

Conclusion

While there are many, many more variations that we can make to this blob environment, I have to wrap up this article eventually so I’ll leave the rest of the tinkering to you. Through this simulation of simple blob creatures, we’ve explored the differences between Evolutionary and Traditional Game Theory as well as the results of adding mutations into the mix.

For the most part, it seems as if both the team and solo strategies were evolutionarily stable, so it's best to play whatever everyone else is doing. The catch is that I've only been showing the proportions of the blob populations so far. If we look at the actual population counts while each strategy is dominating, we see something interesting.

Although either strategy can stay dominant over time, the y-axis reveals each strategy's carrying capacity. If every blob is selfish, the environment can only support about 80 of them, while the same environment can support twice as many team blobs. This reveals that for large, functioning societies of blobs, teamwork is the most viable strategy, as it allows for the most reproduction and the largest population.

Humans are very much like these simple blobs in this sense. It isn’t our success as a species that allows us to be altruistic, but rather our altruism is one of the reasons for the success of our species. Altruism isn’t the best choice in most situations or environments, which is why it’s so rare in the animal kingdom. But when it wins, it wins big. Just look at humanity.

Citations:

Primer. (2023, December 16). Simulating the Evolution of Teamwork [Video]. YouTube. https://www.youtube.com/watch?v=TZfh8hpJIxo&t=201s

Rossano, F. (2024, February 5). Social Cognition and Social Interaction. https://drive.google.com/file/d/1XkDelOhhKCJ2z9TU5YiHyfEMwnPGVj9J/view?usp=drive_link

Roy, I. (2024). Altruism. https://github.com/TheBoyRoy05/Jupyter/blob/main/COGS%202/Altruism.ipynb
