Google Open Sourced this Architecture for Massively Scalable Reinforcement Learning Models

The new architecture improves upon the IMPALA model to achieve massive levels of scalability.

Jesus Rodriguez
DataSeries


Source: https://morningpicker.com/business/googles-seed-rl-achieves-80x-speedup-of-reinforcement-learning-73788/

I recently started a new newsletter focused on AI education. TheSequence is a no-BS (meaning no hype, no news, etc.) AI-focused newsletter that takes 5 minutes to read. The goal is to keep you up to date with machine learning projects, research papers, and concepts. Please give it a try by subscribing below:

Deep reinforcement learning (DRL) is one of the fastest-growing areas of research in the deep learning space. Responsible for some of the top AI milestones of recent years, such as AlphaGo, OpenAI Five (Dota 2), and AlphaStar, DRL seems to be the discipline that most closely approximates human intelligence. However, despite all this progress, real-world implementations of DRL methods remain constrained to the big artificial intelligence (AI) labs. This is partially due to the fact that DRL architectures rely on disproportionately large amounts of training, which makes…


Jesus Rodriguez
DataSeries

CEO of IntoTheBlock, President of Faktory, President of NeuralFabric and founder of The Sequence, Lecturer at Columbia University, Wharton, Angel Investor...