Multi-agent systems

Daniel Phan
2 min read · Jun 24, 2015

--

In January, I attended a workshop run by the Center for Applied Rationality. The stated goal of these workshops is to help individuals become more rational. In this sense, being more rational does not mean becoming Spock-like or emotionless, but simply getting better at achieving your goals.

To get the most out of the workshop, CFAR recommended stealing ideas. Not (quite) literal intellectual-property theft, but taking an idea, integrating it into your own network of ideas, and retelling it in your own words, making it yours.

Something I stole from the workshop was the concept of modeling my decision-making processes as the interaction of different agents. Agents, in this sense, can be thought of as players in a game, each with their own (not necessarily competing) goals, motivations, and strategies. For example, when I’m woken up by an alarm clock in the morning, there might be a few different agents in play:

  • one that wants to sleep more
  • one that wants to go climbing
  • one that wants to be the kind of person that wakes up early
  • one that doesn’t want to be guilted into becoming that kind of person
  • etc.

So the night before, I might listen to the agent that wants to go climbing and set an early alarm, but when the alarm goes off, I might listen to the one that wants to sleep more and snooze away the morning. This behavior, setting an early alarm just to snooze through it, is irrational: it optimizes toward neither goal. I didn’t go climbing, and the alarm kept me from getting high-quality sleep.
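To make the model concrete, here is a minimal sketch in Python. Everything in it is my own illustration, not anything CFAR prescribes: the `Agent` class, the winner-takes-the-decision rule, and the influence weights are all made up. The point is just that whichever agent pulls hardest at the moment of decision wins, so the winner can flip between the evening and the morning:

```python
from dataclasses import dataclass

@dataclass
class Agent:
    # One internal "player": a name plus how hard it pulls in each situation.
    name: str
    influence: dict  # situation -> pull strength (made-up units)

def winning_agent(agents, situation):
    # Whichever agent pulls hardest right now gets to make the decision.
    return max(agents, key=lambda a: a.influence.get(situation, 0))

# Entirely invented weights, chosen only to reproduce the alarm story.
agents = [
    Agent("sleep-more",  {"night-before": 1, "alarm-rings": 9}),
    Agent("go-climbing", {"night-before": 8, "alarm-rings": 3}),
]

print(winning_agent(agents, "night-before").name)  # go-climbing: set an early alarm
print(winning_agent(agents, "alarm-rings").name)   # sleep-more: hit snooze
```

Each choice is locally sensible, the loudest agent wins, yet the overall sequence serves neither goal.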

Put another way, what CFAR is trying to do through their rationality workshops is provide tools and advice for aligning these different agents so that they work together, instead of against each other. One way might be to eliminate some agents altogether (e.g. convince myself that sleeping in is not important to me). Another way might be to create an environment where agents do not need to compete (e.g. plan to climb at the end of the day).
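Continuing the sketch above (same caveats: `Agent`, `winning_agent`, and the weights are invented for illustration), those two alignment moves might look like this:

```python
# Move 1: eliminate an agent altogether, e.g. decide that sleeping in
# is simply no longer one of my goals.
without_sleep = [a for a in agents if a.name != "sleep-more"]
print(winning_agent(without_sleep, "alarm-rings").name)  # go-climbing

# Move 2: change the environment so the agents stop competing,
# e.g. plan an evening climb. The morning now belongs to "sleep-more"
# without costing "go-climbing" anything.
agents.append(Agent("climb-after-work", {"after-work": 8}))
print(winning_agent(agents, "after-work").name)  # climb-after-work
```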

And of course, it’s not always obvious, even to ourselves, what truly motivates us. For example, I might tell myself that the only reason I’m trying to lose weight is to be healthy, but I would find it much harder to stick to a diet if it didn’t also make me more attractive. The workshop also covered tools for finding and delineating these agents.

Finally, I’m left with the vague feeling that this agent-alignment model generalizes to even larger systems than just the ones encased in the skulls of individuals. Individuals might be formed from systems of motivations. Teams are formed from systems of individuals. Organizations are formed from systems of teams, economies from organizations, and so on, creating huge fractals of interacting agents.

So you might call agent-alignment inside an individual the practice of rationality. In that case, it seems appropriate to call agent-alignment inside a team or organization the practice of management.
