Atari Games On Atari 2600 Pitfall

Evaluation Metrics

Score
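
The Score metric is the undiscounted cumulative in-game reward over an episode. Below is a minimal sketch of how a single episode score could be measured with the Gymnasium ALE interface; the `ALE/Pitfall-v5` environment id and the random placeholder policy are illustrative assumptions (not part of this leaderboard), and published results typically average scores over many evaluation episodes under each paper's own protocol.

```python
# Minimal sketch: measuring the "Score" metric for one Pitfall episode.
# Assumes gymnasium and ale-py are installed; the random policy is a placeholder
# for a trained agent.
import gymnasium as gym  # ale-py must be installed so the ALE/... ids are registered

env = gym.make("ALE/Pitfall-v5")
obs, info = env.reset(seed=0)

episode_score = 0.0
terminated = truncated = False
while not (terminated or truncated):
    action = env.action_space.sample()  # replace with a trained agent's action
    obs, reward, terminated, truncated, info = env.step(action)
    episode_score += reward  # Score = undiscounted sum of game rewards

env.close()
print(f"Episode score: {episode_score}")
```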

Evaluation Results

Performance of each model on this benchmark

| Model | Score | Paper Title | Repository |
|---|---|---|---|
| Go-Explore | 102571 | Go-Explore: a New Approach for Hard-Exploration Problems | - |
| Agent57 | 18756.01 | Agent57: Outperforming the Atari Human Benchmark | - |
| Go-Explore | 6954 | First return, then explore | - |
| NoisyNet-Dueling | 0 | Noisy Networks for Exploration | - |
| POP3D | 0 | Policy Optimization With Penalized Point Probability Distance: An Alternative To Proximal Policy Optimization | - |
| QR-DQN-1 | 0 | Distributional Reinforcement Learning with Quantile Regression | - |
| IQN | 0 | Implicit Quantile Networks for Distributional Reinforcement Learning | - |
| DNA | 0 | DNA: Proximal Policy Optimization with a Dual Network Architecture | - |
| Advantage Learning | 0 | Increasing the Action Gap: New Operators for Reinforcement Learning | - |
| MuZero (Res2 Adam) | 0 | Online and Offline Reinforcement Learning by Planning with a Learned Model | - |
| SND-V | 0 | Self-supervised network distillation: an effective approach to exploration in sparse reward environments | - |
| MuZero | 0.00 | Mastering Atari, Go, Chess and Shogi by Planning with a Learned Model | - |
| SND-VIC | 0 | Self-supervised network distillation: an effective approach to exploration in sparse reward environments | - |
| DreamerV2 | 0 | Mastering Atari with Discrete World Models | - |
| CGP | 0 | Evolving simple programs for playing Atari games | - |
| ASL DDQN | 0 | Train a Real-world Local Path Planner in One Hour via Partially Decoupled Reinforcement Learning and Vectorized Diversity | - |
| R2D2 | 0.0 | Recurrent Experience Replay in Distributed Reinforcement Learning | - |
| GDI-I3 | 0 | GDI: Rethinking What Makes Reinforcement Learning Different From Supervised Learning | - |
| GDI-I3 | 0 | Generalized Data Distribution Iteration | - |
| Ape-X | -0.6 | Distributed Prioritized Experience Replay | - |