Stanford U & Google’s Generative Agents Produce Believable Proxies of Human Behaviours
The quality and fluency of AI bots’ natural language generation are now beyond question, but how well can such agents mimic other human behaviours? Researchers and practitioners have long envisioned a sort of sandbox society populated by agents with humanlike behaviours as a way to gain insights into social interactions, interpersonal relationships, social theories, and more. We may now be getting there.
In the new paper Generative Agents: Interactive Simulacra of Human Behavior, a team from Stanford University and Google Research presents agents that draw on generative models to simulate individual and emergent group behaviours that are humanlike and conditioned on the agents’ identities, changing experiences, and environment.
The team summarizes their main contributions as follows:
- Generative agents, believable simulacra of human behaviour that are dynamically conditioned on agents’ changing experiences and environment.
- A novel architecture that makes it possible for generative agents to remember, retrieve, reflect, interact with other…
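The retrieval component of this architecture ranks an agent’s memories by combining recency (exponentially decayed since last access), importance (rated by a language model), and relevance (embedding similarity to the current situation). A minimal Python sketch of that scoring idea follows; the equal weighting, the 0.995 decay factor per game hour, and the memory-record fields (`last_access`, `importance`, `embedding`) are illustrative assumptions, not the authors’ exact implementation:

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def normalize(scores):
    """Min-max scale a list of scores to [0, 1]."""
    lo, hi = min(scores), max(scores)
    if hi == lo:
        return [1.0 for _ in scores]
    return [(s - lo) / (hi - lo) for s in scores]

def retrieve(memories, query_embedding, now, top_k=3, decay=0.995):
    """Rank memories by normalized recency + importance + relevance.

    Each memory is a dict with hypothetical fields:
      last_access (game hours), importance (e.g. 1-10), embedding (vector).
    """
    recency = normalize([decay ** (now - m["last_access"]) for m in memories])
    importance = normalize([m["importance"] for m in memories])
    relevance = normalize(
        [cosine(m["embedding"], query_embedding) for m in memories]
    )
    ranked = sorted(
        range(len(memories)),
        key=lambda i: recency[i] + importance[i] + relevance[i],
        reverse=True,
    )
    return [memories[i] for i in ranked[:top_k]]
```

A memory that is recent, important, or relevant on its own can surface, but memories strong on several dimensions at once dominate the ranking, which is what keeps the agents’ behaviour coherent over time.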