Atari Games On Atari 2600 Robotank
Evaluation metric: Score

Evaluation results: the performance of each model on this benchmark.
| Model | Score | Paper Title | Repository |
|---|---|---|---|
| MuZero | 131.13 | Mastering Atari, Go, Chess and Shogi by Planning with a Learned Model | |
| Agent57 | 127.32 | Agent57: Outperforming the Atari Human Benchmark | |
| GDI-H3 | 113.4 | Generalized Data Distribution Iteration | - |
| GDI-I3 | 108.2 | Generalized Data Distribution Iteration | - |
| GDI-I3 | 108.2 | GDI: Rethinking What Makes Reinforcement Learning Different From Supervised Learning | - |
| MuZero (Res2 Adam) | 100.59 | Online and Offline Reinforcement Learning by Planning with a Learned Model | |
| R2D2 | 100.4 | Recurrent Experience Replay in Distributed Reinforcement Learning | - |
| DreamerV2 | 78 | Mastering Atari with Discrete World Models | |
| FQF | 75.7 | Fully Parameterized Quantile Function for Distributional Reinforcement Learning | |
| Ape-X | 73.8 | Distributed Prioritized Experience Replay | |
| Advantage Learning | 69.31 | Increasing the Action Gap: New Operators for Reinforcement Learning | |
| Bootstrapped DQN | 66.6 | Deep Exploration via Bootstrapped DQN | |
| ASL DDQN | 65.8 | Train a Real-world Local Path Planner in One Hour via Partially Decoupled Reinforcement Learning and Vectorized Diversity | |
| Duel noop | 65.3 | Dueling Network Architectures for Deep Reinforcement Learning | |
| DDQN (tuned) noop | 65.1 | Dueling Network Architectures for Deep Reinforcement Learning | |
| DNA | 64.8 | DNA: Proximal Policy Optimization with a Dual Network Architecture | |
| DDQN+Pop-Art noop | 64.3 | Learning values across many orders of magnitude | - |
| NoisyNet-Dueling | 64 | Noisy Networks for Exploration | |
| DQN noop | 63.9 | Deep Reinforcement Learning with Double Q-learning | |
| Prior noop | 62.6 | Prioritized Experience Replay | |
The full leaderboard contains 42 entries; the 20 highest-scoring models are shown above.
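For context on how a score on this benchmark is produced, the sketch below runs one episode of Atari 2600 Robotank and sums the per-step rewards. It assumes the `gymnasium` and `ale-py` packages are installed and uses the `ALE/Robotank-v5` environment id from the current ALE naming scheme; the random policy is only a placeholder for any of the agents in the table, and the evaluation protocols of the listed papers (e.g. no-op starts, episode caps, averaging over many episodes) differ in detail.

```python
# Minimal sketch: score one episode of Atari 2600 Robotank.
# Assumptions: gymnasium >= 1.0 and ale-py are installed; a random
# policy stands in for a trained agent.
import gymnasium as gym
import ale_py

gym.register_envs(ale_py)  # make the ALE/* environment ids visible to Gymnasium

env = gym.make("ALE/Robotank-v5")
obs, info = env.reset(seed=0)

episode_score = 0.0
terminated = truncated = False
while not (terminated or truncated):
    action = env.action_space.sample()  # placeholder for an agent's policy
    obs, reward, terminated, truncated, info = env.step(action)
    episode_score += reward  # "Score" on this leaderboard is the summed episode reward

env.close()
print(f"Episode score: {episode_score}")
```

A random policy scores far below every entry in the table; the leaderboard numbers come from trained agents evaluated over many episodes under each paper's protocol.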