DisCor: Corrective Feedback in Reinforcement Learning via Distribution Correction

Aviral Kumar Abhishek Gupta Sergey Levine

Abstract

Deep reinforcement learning can learn effective policies for a wide range of tasks, but is notoriously difficult to use due to instability and sensitivity to hyperparameters. The reasons for this remain unclear. When using standard supervised methods (e.g., for bandits), on-policy data collection provides "hard negatives" that correct the model in precisely those states and actions that the policy is likely to visit. We call this phenomenon "corrective feedback." We show that bootstrapping-based Q-learning algorithms do not necessarily benefit from this corrective feedback, and training on the experience collected by the algorithm is not sufficient to correct errors in the Q-function. In fact, Q-learning and related methods can exhibit pathological interactions between the distribution of experience collected by the agent and the policy induced by training on that experience, leading to potential instability, sub-optimal convergence, and poor results when learning from noisy, sparse, or delayed rewards. We demonstrate the existence of this problem, both theoretically and empirically. We then show that a specific correction to the data distribution can mitigate this issue. Based on these observations, we propose a new algorithm, DisCor, which computes an approximation to this optimal distribution and uses it to re-weight the transitions used for training, resulting in substantial improvements in a range of challenging RL settings, such as multi-task learning and learning from noisy reward signals. A blog post summarizing this work is available at: https://bair.berkeley.edu/blog/2020/03/16/discor/.
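The re-weighting idea described above can be sketched in a few lines. The following is a minimal illustration, not the authors' implementation: it assumes some estimate of the accumulated Bellman error at each transition's bootstrap target (in the paper this is produced by a separately trained error network), and the function name, temperature parameter, and normalization choice here are hypothetical.

```python
import numpy as np

def discor_weights(target_errors, gamma=0.99, temperature=10.0):
    """Sketch of DisCor-style transition re-weighting.

    `target_errors` is assumed to approximate the accumulated error
    |Q - Q*| at the next state-action pair used for each transition's
    bootstrap target. Transitions whose targets carry large error are
    exponentially down-weighted, so the Q-function is trained mostly
    on transitions with reliable targets.
    """
    target_errors = np.asarray(target_errors, dtype=float)
    # Down-weight transitions with unreliable bootstrap targets.
    w = np.exp(-gamma * target_errors / temperature)
    # Normalize so the weights average to 1 over the batch.
    return w / w.mean()
```

In training, these weights would multiply the per-transition Bellman loss before averaging, so the update distribution shifts toward transitions whose targets are close to correct.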

Code Repositories

ku2482/rljax (JAX)
AIDefender/MyDiscor (PyTorch)
toshikwa/discor.pytorch (PyTorch)
ku2482/discor.pytorch (PyTorch)

Benchmarks

Benchmark: meta-learning-on-mt50
Methodology: DisCor
Metrics: Average Success Rate: 26%
