Mean Actor-Critic
Cameron Allen; Kavosh Asadi; Melrose Roderick; Abdel-rahman Mohamed; George Konidaris; Michael Littman

Abstract
We propose a new algorithm, Mean Actor-Critic (MAC), for discrete-action continuous-state reinforcement learning. MAC is a policy gradient algorithm that uses the agent's explicit representation of all action values to estimate the gradient of the policy, rather than using only the actions that were actually executed. We prove that this approach reduces variance in the policy gradient estimate relative to traditional actor-critic methods. We show empirical results on two control domains and on six Atari games, where MAC is competitive with state-of-the-art policy search algorithms.
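The core idea in the abstract is that MAC forms the policy gradient by averaging over all action values under the current policy, rather than using only the value of the sampled action as conventional actor-critic methods do. The sketch below is a minimal illustration of that distinction, assuming a linear-softmax policy over discrete actions and a given vector of critic estimates Q(s, ·); the setup and function names are hypothetical and not the authors' implementation (see the linked repositories for that).

```python
import numpy as np

def softmax(logits):
    z = logits - logits.max()
    e = np.exp(z)
    return e / e.sum()

def mac_policy_gradient(theta, phi, q_values):
    """MAC-style gradient estimate for a single state.

    theta:    (n_actions, n_features) weights of a linear-softmax policy
    phi:      (n_features,) state features
    q_values: (n_actions,) critic estimates Q(s, a) for every action

    Returns the gradient of sum_a pi(a|s) * Q(s, a) w.r.t. theta, i.e. the
    estimate uses ALL action values, not just the action that was executed.
    """
    n_actions = theta.shape[0]
    pi = softmax(theta @ phi)  # pi(a|s) for every action
    grad = np.zeros_like(theta)
    for b in range(n_actions):
        # d pi(a|s) / d theta[b] = pi(a) * (1[a == b] - pi(b)) * phi
        coeff = pi * ((np.arange(n_actions) == b) - pi[b])
        grad[b] = (coeff * q_values).sum() * phi
    return grad

def sampled_policy_gradient(theta, phi, q_values, rng):
    """Conventional actor-critic estimate: score function of the one
    sampled action, weighted by that action's value."""
    pi = softmax(theta @ phi)
    a = rng.choice(len(pi), p=pi)
    grad = -np.outer(pi, phi)        # -pi(b) * phi for every row b
    grad[a] += phi                   # +phi for the sampled action
    return q_values[a] * grad        # Q(s, a) * grad log pi(a|s)

rng = np.random.default_rng(0)
theta = rng.normal(size=(3, 4))
phi = rng.normal(size=4)
q = rng.normal(size=3)

# Both estimators target the same gradient; MAC averages over actions
# analytically, so the mean of many sampled estimates should approach it.
mac = mac_policy_gradient(theta, phi, q)
avg = np.mean([sampled_policy_gradient(theta, phi, q, rng)
               for _ in range(50000)], axis=0)
print("max |MAC - sampled mean| =", np.abs(mac - avg).max())
```

Because the averaging over actions is done analytically rather than by sampling, the MAC estimate removes the sampling noise over actions, which is the variance-reduction effect the abstract refers to.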
Code Repositories
- camall3n/atari-MAC (TensorFlow)
- kavosh8/MAC (TensorFlow)
Benchmarks
| Benchmark | Method | Score |
|---|---|---|
| Atari 2600 Beam Rider | MAC | 6072 |
| Atari 2600 Breakout | MAC | 372.7 |
| Atari 2600 Pong | MAC | 10.6 |
| Atari 2600 Q*bert | MAC | 243.4 |
| Atari 2600 Seaquest | MAC | 1703.4 |
| Atari 2600 Space Invaders | MAC | 1173.1 |
| Cart Pole (OpenAI Gym) | MAC | 178.3 |
| Lunar Lander (OpenAI Gym) | MAC | 163.5 |