Autonomous Learning Library Simplifies Intelligent Agent Creation

Synced | SyncedReview | Jan 22, 2020

Watching today’s intelligent agents crush human players in complex video games can be fun, but creating one is a different story. Building an effective intelligent agent requires tuning a mass of hyperparameters to shape the environment, define the rewards, and so on. A group of researchers from the University of Massachusetts Amherst has attempted to simplify the process with the new Autonomous Learning Library project.

The Autonomous Learning Library is a deep reinforcement learning (DRL) library for PyTorch that streamlines the building and evaluation of novel reinforcement learning agents. One of the project’s stated core philosophies is that reinforcement learning (RL) should be agent-based: the agent simply accepts a state and a reward and returns an action.
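In code, that philosophy reduces to a single method. The sketch below is illustrative rather than the library’s verified API: the class name Agent and the act(state, reward) signature are assumptions based on the article’s description.

```python
from abc import ABC, abstractmethod

class Agent(ABC):
    """Agent-based RL interface (illustrative sketch, not the library's
    documented API): the agent sees only the latest state and reward and
    returns the next action. Replay buffers, target networks, and
    exploration schedules all stay hidden inside the agent."""

    @abstractmethod
    def act(self, state, reward):
        """Observe the latest state and reward, return the next action."""
```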

Canonical agent-environment feedback loop

The Autonomous Learning Library separates the control loop from the agent logic, which simplifies both the agent implementation and the control loop itself and increases flexibility in how agents can be used. The control loop queries the agent for each action, which keeps the agent interface and implementation extremely concise.
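A minimal control loop under that design might look like the following sketch, written against the classic gym step API of the time; the agent.act(state, reward) call matches the interface sketched above and is an assumption, not the library’s verified entry point.

```python
import gym

def run_episode(agent, env_name="CartPole-v0"):
    """Drive one episode: the loop owns environment stepping and
    bookkeeping, while the agent only maps (state, reward) -> action."""
    env = gym.make(env_name)
    state = env.reset()
    reward, done, episode_return = 0.0, False, 0.0
    while not done:
        action = agent.act(state, reward)          # agent internals stay opaque
        state, reward, done, _ = env.step(action)  # classic 4-tuple gym API
        episode_return += reward
    return episode_return
```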

Autonomous learning library agent interface
DQN implementation in the Autonomous Learning Library

The Autonomous Learning Library divides RL agents into two distinct modules: “all.agents” and “all.presets”. The “all.agents” module contains implementations of common algorithms such as Rainbow, A2C, and several vanilla variants, while “all.presets” provides versions of these agents tuned for particular environments, such as Atari games and classic control tasks.
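Usage might then look like the hypothetical snippet below. The article names the all.agents and all.presets modules, but the specific preset path, the dqn name, and the call signature here are assumptions for illustration only.

```python
# Hypothetical usage sketch: the module layout follows the article's
# description of "all.agents" and "all.presets"; exact names are assumptions.
from all.presets.atari import dqn  # a DQN preset pre-tuned for Atari games

make_agent = dqn()  # the preset would return an agent factory,
                    # to be paired with a concrete environment by
                    # whatever control loop or experiment runner drives it
```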

Benchmark results for RL agents in Atari game environments

The project also highlights its function approximation module as one of its central abstractions. Because agents rely on the approximation abstraction rather than interfacing directly with PyTorch Module and Optimizer objects, users can add to or modify an agent’s functionality without altering its source code, in keeping with the “Open-Closed Principle”. This lets each agent implementation focus solely on defining the RL algorithm itself.
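The sketch below illustrates the idea behind such an abstraction; the class name Approximation and its methods are illustrative, not the library’s documented API. Because the agent talks only to this wrapper, features such as gradient clipping can be layered on without editing the agent’s source.

```python
import torch
from torch import nn

class Approximation:
    """Illustrative function-approximation wrapper (names are assumptions,
    not the library's exact API): it hides the Module/Optimizer pair so
    extras like gradient clipping can be added without touching the agent."""

    def __init__(self, model, optimizer, clip_grad=None):
        self.model = model
        self.optimizer = optimizer
        self.clip_grad = clip_grad

    def __call__(self, *inputs):
        # Forward pass; the agent treats this as a plain function.
        return self.model(*inputs)

    def reinforce(self, loss):
        # One gradient step; the agent never sees the optimizer directly.
        self.optimizer.zero_grad()
        loss.backward()
        if self.clip_grad is not None:
            nn.utils.clip_grad_norm_(self.model.parameters(), self.clip_grad)
        self.optimizer.step()
```

An agent would then call something like q.reinforce(loss) instead of manipulating the optimizer itself, so a modified wrapper can change training behavior with no edits to the agent.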

The researchers also provide a sample implementation to show how the Autonomous Learning Library can be used to develop new agents not included in the original library. Although the results do not make the agent look particularly smart, they do confirm the library’s practicality.

Result of a sample demonstration using the Autonomous Learning Library to build new RL agents.

The Autonomous Learning Library project was shared by Christopher Nota, a PhD student in reinforcement learning at the University of Massachusetts Amherst. Additional information is available on the project’s GitHub.

Author: Victor Lu | Editor: Michael Sarazen

