Autonomous Learning Library Simplifies Intelligent Agent Creation

Watching today's human-beating intelligent agents play complex video games can be fun, but creating one is a different story. Building an effective intelligent agent requires tuning a mass of hyperparameters to shape the environment, define the rewards, and so on. A group of researchers from the University of Massachusetts Amherst has attempted to simplify the process with its new Autonomous Learning Library project.

The Autonomous Learning Library is a deep reinforcement learning (DRL) library for PyTorch that streamlines the building and evaluation of novel reinforcement learning agents. One of the initiative's stated core philosophies is that reinforcement learning (RL) should be agent-based, meaning a model simply accepts a state and a reward and returns an action.
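The agent-based philosophy can be illustrated with a minimal sketch. The class and method names below are illustrative stand-ins, not the library's actual API:

```python
from abc import ABC, abstractmethod
import random

class Agent(ABC):
    """An agent maps a state and a reward to an action; everything else
    (environment stepping, logging, checkpointing) lives outside it."""

    @abstractmethod
    def act(self, state, reward):
        """Observe the latest state and reward and return the next action."""

class RandomAgent(Agent):
    """A trivial agent that ignores its observations and acts randomly."""

    def __init__(self, actions, seed=0):
        self._actions = list(actions)
        self._rng = random.Random(seed)

    def act(self, state, reward):
        return self._rng.choice(self._actions)
```

Because the entire contract is "state and reward in, action out", any learning algorithm that fits this shape can be dropped into the same surrounding machinery.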

Canonical agent-environment feedback loop

The Autonomous Learning Library separates the control loop from the agent logic, which simplifies both agent implementation and the control loop itself and increases flexibility in how agents can be used. The control loop, rather than the agent, drives the interaction with the environment and queries the agent for actions, enabling the agent interface and implementation to be extremely concise.
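This separation can be sketched with a generic control loop over a toy environment. Everything here (the environment, the placeholder agent, the `run_episode` function) is a simplified assumption for illustration, not the library's own code:

```python
class CountdownEnv:
    """Toy environment: the episode ends after n steps, reward 1.0 per step."""

    def __init__(self, n=5):
        self._n = n
        self._t = 0

    def reset(self):
        self._t = 0
        return self._t  # the "state" is just the step counter

    def step(self, action):
        self._t += 1
        return self._t, 1.0, self._t >= self._n  # state, reward, done

class ConstantAgent:
    """Placeholder agent: always picks action 0."""

    def act(self, state, reward):
        return 0

def run_episode(agent, env, max_steps=1000):
    """The control loop owns the environment interaction; the agent
    only ever sees (state, reward) and returns an action."""
    state = env.reset()
    reward = 0.0
    total = 0.0
    for _ in range(max_steps):
        action = agent.act(state, reward)
        state, reward, done = env.step(action)
        total += reward
        if done:
            agent.act(state, reward)  # show the agent the terminal transition
            break
    return total
```

Because the loop owns the environment, the same agent can be reused unchanged for training, evaluation, or visualization simply by swapping in a different loop.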

Autonomous learning library agent interface
DQN implementation in the Autonomous Learning Library

The Autonomous Learning Library divides RL agents into two distinct modules: “all.agents” and “all.presets”. The “all.agents” module contains implementations of common algorithms such as Rainbow, A2C, and vanilla agents, while “all.presets” provides versions of these agents tuned for particular environments such as Atari games and classic control tasks.
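The split between generic algorithms and environment-specific presets can be illustrated with a hypothetical sketch (the function names and hyperparameter values below are invented for illustration and do not reflect the library's actual API):

```python
import random

def epsilon_greedy_agent(q_function, epsilon):
    """'all.agents'-style: a generic algorithm parameterized by its parts."""
    def act(state, reward):
        values = q_function(state)
        if random.random() < epsilon:
            return random.randrange(len(values))          # explore
        return max(range(len(values)), key=values.__getitem__)  # exploit
    return act

def atari_style_preset():
    """'all.presets'-style: the same algorithm with environment-specific
    pieces and hyperparameters baked in."""
    q = lambda state: [0.0, 1.0, 0.5, 0.2]  # stand-in for a trained Q-network
    return epsilon_greedy_agent(q, epsilon=0.01)
```

The algorithm lives in one place; a preset merely binds it to a concrete model and hyperparameters, so adapting an agent to a new domain means writing a new preset rather than a new agent.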

Benchmark results for RL agents in Atari game environments

The project also highlights its function approximation module as a central abstraction. By building agents against the approximation abstraction rather than interfacing directly with PyTorch Module and Optimizer objects, users can add to or modify an agent’s functionality without altering its source code (the “open-closed principle”). This lets the agent implementation focus on defining the RL algorithm itself.
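A pure-Python sketch can show the idea, assuming a linear model in place of a PyTorch network (the class below is a simplified stand-in, not the library's implementation):

```python
class Approximation:
    """Wraps a model and its update rule behind one interface. Agents call
    this; extras such as gradient clipping or learning-rate schedules are
    layered on as hooks, without editing any agent source code."""

    def __init__(self, weights, lr=0.1, hooks=()):
        self.weights = list(weights)  # a linear model, for illustration
        self.lr = lr
        self.hooks = list(hooks)      # gradient transformations to apply

    def __call__(self, x):
        return sum(w * xi for w, xi in zip(self.weights, x))

    def reinforce(self, gradient):
        for hook in self.hooks:       # open for extension here...
            gradient = hook(gradient)
        self.weights = [w - self.lr * g
                        for w, g in zip(self.weights, gradient)]

def clip(limit):
    """New behavior added without touching Approximation or any agent:
    clamp each gradient component to [-limit, limit]."""
    return lambda grad: [max(-limit, min(limit, g)) for g in grad]
```

An agent that calls only `__call__` and `reinforce` never needs to change when clipping, scheduling, or logging hooks are added, which is precisely the open-closed behavior the project describes.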

The researchers also provided a sample implementation to demonstrate the library’s utility for developing new agents not included in the original release. Although the results do not make the agent look particularly smart, they do demonstrate the practicality of the library.

Result of a sample demonstration using the Autonomous Learning Library to build new RL agents.

The Autonomous Learning Library project was shared by Christopher Nota, a PhD student in reinforcement learning at the University of Massachusetts Amherst. Additional information is available on the project’s GitHub page.

Author: Victor Lu | Editor: Michael Sarazen

