Atari Games on Atari 2600: Montezuma's Revenge
Evaluation metric: Score

Results: performance of each model on this benchmark.
| Model | Score | Paper Title |
| --- | --- | --- |
| Go-Explore | 43791 | First return, then explore |
| Go-Explore | 43763 | Go-Explore: a New Approach for Hard-Exploration Problems |
| SND-V | 21565 | Self-supervised network distillation: an effective approach to exploration in sparse reward environments |
| Agent57 | 9352.01 | Agent57: Outperforming the Atari Human Benchmark |
| RND | 8152 | Exploration by Random Network Distillation |
| SND-VIC | 7838 | Self-supervised network distillation: an effective approach to exploration in sparse reward environments |
| SND-STD | 7212 | Self-supervised network distillation: an effective approach to exploration in sparse reward environments |
| A2C+CoEX | 6635 | Contingency-Aware Exploration in Reinforcement Learning |
| DQN-PixelCNN | 3705.5 | Count-Based Exploration with Neural Density Models |
| DDQN-PC | 3459 | Unifying Count-Based Exploration and Intrinsic Motivation |
| GDI-I3 | 3000 | GDI: Rethinking What Makes Reinforcement Learning Different From Supervised Learning |
| GDI-I3 | 3000 | Generalized Data Distribution Iteration |
| Sarsa-φ-EB | 2745.4 | Count-Based Exploration in Feature Space for Reinforcement Learning |
| Intrinsic Reward Agent | 2504.6 | Large-Scale Study of Curiosity-Driven Learning |
| GDI-H3 | 2500 | Generalized Data Distribution Iteration |
| Ape-X | 2500.0 | Distributed Prioritized Experience Replay |
| MuZero (Res2 Adam) | 2500 | Online and Offline Reinforcement Learning by Planning with a Learned Model |
| R2D2 | 2061.3 | Recurrent Experience Replay in Distributed Reinforcement Learning |
| DQN+SR | 1778.8 | Count-Based Exploration with the Successor Representation |
| DQNMMCe+SR | 1778.6 | Count-Based Exploration with the Successor Representation |
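Several of the top entries (RND and the SND variants) rely on prediction-error intrinsic rewards: a predictor network is trained to imitate a fixed, randomly initialized target network, and its prediction error on a state serves as a novelty bonus, since error stays high on states unlike those seen during training. A minimal NumPy sketch of the idea; the shapes, the tanh target, and the closed-form linear predictor are illustrative assumptions, not the architectures used in the papers:

```python
import numpy as np

rng = np.random.default_rng(0)
d, k = 8, 4
T = rng.normal(size=(k, d))           # frozen random target network


def target(obs):
    # random nonlinear features the predictor must imitate
    return np.tanh(obs @ T.T)


# "familiar" states the agent has visited; fit a linear predictor to
# the target's outputs on them (stand-in for SGD training in the papers)
visited = rng.normal(size=(256, d))
W, *_ = np.linalg.lstsq(visited, target(visited), rcond=None)


def intrinsic_reward(obs):
    # mean squared prediction error = novelty bonus
    return float(np.mean((target(obs) - obs @ W) ** 2))


# states far from anything in the visited set
novel = rng.normal(size=(256, d)) + 10.0

# unfamiliar states get a larger exploration bonus
print(intrinsic_reward(visited) < intrinsic_reward(novel))  # True
```

The predictor fits the target well where training data is dense, but its linear extrapolation diverges from the bounded target on out-of-distribution states, so the error, and hence the bonus, grows with novelty.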
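Several other entries (DDQN-PC, DQN-PixelCNN, Sarsa-φ-EB, DQN+SR) instead derive the bonus from visit counts or pseudo-counts, typically on the order of β/√N(s). A minimal sketch with exact tabular counts; the papers use density models to generalize counts over raw Atari frames, and the class name and β value here are illustrative:

```python
from collections import defaultdict
from math import sqrt


class CountBonus:
    """Exploration bonus that decays as a state is revisited."""

    def __init__(self, beta=1.0):
        self.beta = beta
        self.counts = defaultdict(int)

    def bonus(self, state):
        # increment the visit count, then pay beta / sqrt(N(s))
        self.counts[state] += 1
        return self.beta / sqrt(self.counts[state])


cb = CountBonus(beta=1.0)
print(cb.bonus("room1"))  # 1.0 on the first visit
print(cb.bonus("room1"))  # ≈0.707 on the second visit
print(cb.bonus("room2"))  # 1.0 again for a new state
```

The bonus is added to the environment reward, so states such as Montezuma's Revenge's unvisited rooms keep paying out until the agent has explored them.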