Thoughts on Agents

Adam Elkus
Strategies of the Artificial
Dec 12, 2015

For a while, I’ve struggled to really understand why the idea of “agent” is so prominent in computational social science, artificial intelligence, software engineering, philosophy, and cognitive science despite many of these disciplines lacking any consistent definition of it. A particular passage from a Nils Nilsson paper made me really “get it” in a way that I didn’t before:

Habile systems will be agents that have built-in high-level goals (much like the drives of animals). They will have an architecture that mediates between reasoning (using their commonsense knowledge) and reflexive reactions to urgent situations.

The payoff of the agent view is essentially the capacity to exhibit both reactive and proactive behavior, and to learn from experience. It reflects an understanding that any intelligent system is going to be embodied in the world and will function much the way an animal does, with goals/drives controlling both reasoning and state-based reactions. If anything, the most fascinating thing about this understanding is how neutral it is about how that control structure is actually implemented. The nature and organization of the components, in contrast, occupies much of researchers’ time and energy, and there is, for reasons Allen Newell stated quite brilliantly in his “20 Questions with Nature” paper, little prospect of agreement. The agent view represents the triumph of the animal-behavior perspective in computing, which I find very interesting.
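To make that neutrality concrete, here’s a minimal Python sketch of the control structure I have in mind. This is an illustration, not Nilsson’s actual design; the class, method, and drive names are all assumptions made up for the example.

```python
# A toy agent whose built-in drives mediate between deliberative
# reasoning and reflexive reactions; all names are illustrative only.

class Agent:
    def __init__(self, drives):
        self.drives = drives      # built-in high-level goals, e.g. {"find_food": 0.8}
        self.experience = []      # history of (percept, action), for learning

    def reflex(self, percept):
        # Reactive layer: hard-wired response to urgent situations.
        if percept.get("threat"):
            return "flee"
        return None

    def deliberate(self, percept):
        # Deliberative layer: slower reasoning in service of the strongest drive.
        active_drive = max(self.drives, key=self.drives.get)
        return f"plan_toward_{active_drive}"

    def act(self, percept):
        # The architecture mediates: a firing reflex preempts deliberation.
        action = self.reflex(percept) or self.deliberate(percept)
        self.experience.append((percept, action))  # learn from experience
        return action

agent = Agent(drives={"find_food": 0.8, "explore": 0.2})
print(agent.act({"threat": True}))   # reactive: "flee"
print(agent.act({}))                 # proactive: "plan_toward_find_food"
```

The point of the sketch is how little it commits to: reflexes preempt deliberation in urgent situations, drives select what deliberation works toward, and experience accumulates either way. Everything inside the layers could be swapped out without changing the agent view itself.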
