Atari Games

Atari Games On Atari 2600 Bowling

Evaluation metric: Score

Benchmark results: the performance of each model on this benchmark.
| Model Name | Score | Paper Title | Repository |
| --- | --- | --- | --- |
| MuZero | 260.13 | Mastering Atari, Go, Chess and Shogi by Planning with a Learned Model | |
| Go-Explore | 260 | First return, then explore | |
| Agent57 | 251.18 | Agent57: Outperforming the Atari Human Benchmark | |
| R2D2 | 219.5 | Recurrent Experience Replay in Distributed Reinforcement Learning | - |
| GDI-H3 | 205.2 | Generalized Data Distribution Iteration | - |
| GDI-I3 | 201.9 | GDI: Rethinking What Makes Reinforcement Learning Different From Supervised Learning | - |
| GDI-I3 | 201.9 | Generalized Data Distribution Iteration | - |
| DNA | 181 | DNA: Proximal Policy Optimization with a Dual Network Architecture | |
| RUDDER | 179 | RUDDER: Return Decomposition for Delayed Rewards | |
| MuZero (Res2 Adam) | 131.65 | Online and Offline Reinforcement Learning by Planning with a Learned Model | |
| FQF | 102.3 | Fully Parameterized Quantile Function for Distributional Reinforcement Learning | |
| DDQN+Pop-Art noop | 102.1 | Learning values across many orders of magnitude | - |
| IQN | 86.5 | Implicit Quantile Networks for Distributional Reinforcement Learning | |
| CGP | 85.8 | Evolving simple programs for playing Atari games | |
| C51 noop | 81.8 | A Distributional Perspective on Reinforcement Learning | |
| Reactor 500M | 81.0 | The Reactor: A fast and sample-efficient Actor-Critic agent for Reinforcement Learning | - |
| QR-DQN-1 | 77.2 | Distributional Reinforcement Learning with Quantile Regression | |
| Persistent AL | 71.59 | Increasing the Action Gap: New Operators for Reinforcement Learning | |
| DDQN (tuned) hs | 69.6 | Deep Reinforcement Learning with Double Q-learning | |
| DDQN (tuned) noop | 68.1 | Dueling Network Architectures for Deep Reinforcement Learning | |
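The Score column holds raw per-episode game returns. Atari papers commonly also report results on the human-normalized scale, `(score - random) / (human - random)`, so that 0.0 is random play and 1.0 is human level. A minimal sketch; the Bowling baseline values below are commonly cited figures from the DQN literature and are an assumption here, not taken from this table:

```python
# Human-normalized score for Atari 2600 Bowling.
# Baselines are assumed values from the DQN literature, not from this page.
RANDOM_BASELINE = 23.1   # assumed random-policy score on Bowling
HUMAN_BASELINE = 160.7   # assumed human score on Bowling


def human_normalized(score: float) -> float:
    """Map a raw Bowling score to the human-normalized scale
    (0.0 = random play, 1.0 = human level)."""
    return (score - RANDOM_BASELINE) / (HUMAN_BASELINE - RANDOM_BASELINE)


# Under these baselines, the table's top entry (MuZero, 260.13) is
# above human level, while DDQN (68.1) is below it.
print(round(human_normalized(260.13), 2))  # > 1.0
print(round(human_normalized(68.1), 2))    # < 1.0
```

Note that the normalized values shift if different baselines are used; papers do not all agree on the human reference score for Bowling.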
Showing the first 20 of 44 entries.