HyperAI超神经
Atari Games On Atari 2600 Seaquest
Evaluation Metric
Score
Benchmark Results
Performance of each model on this benchmark
| Model Name | Score | Paper Title | Repository |
| --- | --- | --- | --- |
| GDI-H3 | 1000000 | Generalized Data Distribution Iteration | - |
| GDI-H3 (200M frames) | 1000000 | Generalized Data Distribution Iteration | - |
| Agent57 | 999997.63 | Agent57: Outperforming the Atari Human Benchmark | |
| R2D2 | 999996.7 | Recurrent Experience Replay in Distributed Reinforcement Learning | - |
| MuZero | 999976.52 | Mastering Atari, Go, Chess and Shogi by Planning with a Learned Model | |
| MuZero (Res2 Adam) | 999659.18 | Online and Offline Reinforcement Learning by Planning with a Learned Model | |
| GDI-I3 | 943910 | Generalized Data Distribution Iteration | - |
| GDI-I3 | 943910 | GDI: Rethinking What Makes Reinforcement Learning Different From Supervised Learning | - |
| Ape-X | 392952.3 | Distributed Prioritized Experience Replay | |
| C51 noop | 266434.0 | A Distributional Perspective on Reinforcement Learning | |
| Duel noop | 50254.2 | Dueling Network Architectures for Deep Reinforcement Learning | |
| Duel hs | 37361.6 | Dueling Network Architectures for Deep Reinforcement Learning | |
| IQN | 30140 | Implicit Quantile Networks for Distributional Reinforcement Learning | |
| ASL DDQN | 29278.6 | Train a Real-world Local Path Planner in One Hour via Partially Decoupled Reinforcement Learning and Vectorized Diversity | |
| Prior noop | 26357.8 | Prioritized Experience Replay | |
| Prior hs | 25463.7 | Prioritized Experience Replay | |
| NoisyNet-Dueling | 16754 | Noisy Networks for Exploration | |
| DDQN (tuned) noop | 16452.7 | Dueling Network Architectures for Deep Reinforcement Learning | |
| DDQN (tuned) hs | 14498.0 | Deep Reinforcement Learning with Double Q-learning | |
| Persistent AL | 13230.74 | Increasing the Action Gap: New Operators for Reinforcement Learning | |