StarCraft II: A New Challenge for Reinforcement Learning

Oriol Vinyals; Timo Ewalds; Sergey Bartunov; Petko Georgiev; Alexander Sasha Vezhnevets; Michelle Yeo; Alireza Makhzani; Heinrich Küttler; John Agapiou; Julian Schrittwieser; John Quan; Stephen Gaffney; Stig Petersen; Karen Simonyan; Tom Schaul; Hado van Hasselt; David Silver; Timothy Lillicrap; Kevin Calderone; Paul Keet; Anthony Brunasso; David Lawrence; Anders Ekermo; Jacob Repp; Rodney Tsing

Abstract

This paper introduces SC2LE (StarCraft II Learning Environment), a reinforcement learning environment based on the StarCraft II game. This domain poses a new grand challenge for reinforcement learning, representing a more difficult class of problems than considered in most prior work. It is a multi-agent problem with multiple players interacting; there is imperfect information due to a partially observed map; it has a large action space involving the selection and control of hundreds of units; it has a large state space that must be observed solely from raw input feature planes; and it has delayed credit assignment requiring long-term strategies over thousands of steps. We describe the observation, action, and reward specification for the StarCraft II domain and provide an open source Python-based interface for communicating with the game engine. In addition to the main game maps, we provide a suite of mini-games focusing on different elements of StarCraft II gameplay. For the main game maps, we also provide an accompanying dataset of game replay data from human expert players. We give initial baseline results for neural networks trained from this data to predict game outcomes and player actions. Finally, we present initial baseline results for canonical deep reinforcement learning agents applied to the StarCraft II domain. On the mini-games, these agents learn to achieve a level of play that is comparable to a novice player. However, when trained on the main game, these agents are unable to make significant progress. Thus, SC2LE offers a new and challenging environment for exploring deep reinforcement learning algorithms and architectures.
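
The Python-based interface described in the abstract is released as pysc2 (the official repository is listed below). The following sketch shows a minimal interaction loop on one of the bundled mini-game maps; it is illustrative only, assumes a local StarCraft II installation and a recent pysc2 release, and the map name, feature-layer resolutions, and step_mul are example choices rather than values taken from this page.

```python
# Minimal sketch of driving the SC2LE Python interface (pysc2).
# Assumes StarCraft II is installed locally and a recent pysc2 release.
from absl import app
from pysc2.env import sc2_env
from pysc2.lib import actions, features


def main(unused_argv):
    with sc2_env.SC2Env(
        map_name="MoveToBeacon",                       # one of the bundled mini-games
        players=[sc2_env.Agent(sc2_env.Race.terran)],
        agent_interface_format=features.AgentInterfaceFormat(
            feature_dimensions=features.Dimensions(screen=84, minimap=64)),
        step_mul=8,                                    # act once every 8 game steps
        game_steps_per_episode=0,                      # use the map's own episode length
    ) as env:
        timesteps = env.reset()
        while not timesteps[0].last():
            # Issue no-ops; a real agent would choose from the available actions.
            timesteps = env.step([actions.FUNCTIONS.no_op()])


if __name__ == "__main__":
    app.run(main)
```

Each call to env.step advances the simulation by step_mul game steps and returns the feature-plane observations and scalar reward that the abstract describes.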

Code Repositories

nicoladainese96/SC2-RL (PyTorch)
deepmind/pysc2 (official)
Teslatic/SC2-Freiburg
ericborn/binarybot (TensorFlow)
google-deepmind/pysc2
tuomaso/SC2LE-implementation (TensorFlow)
raccoon831012/StartCraft2-RL (TensorFlow)
4rChon/NL-FuN (TensorFlow)
inoryy/reaver (TensorFlow)

Benchmarks

Benchmark                               Methodology      Metrics
starcraft-ii-on-collectmineralshards    FullyConv LSTM   Max Score: 137
starcraft-ii-on-movetobeacon            FullyConv LSTM   Max Score: 35
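
As a point of reference for the table above, the mini-games also admit simple scripted baselines. The sketch below mirrors the scripted MoveToBeacon agent distributed with pysc2; it assumes a recent pysc2 release, and the class name MoveToBeaconAgent is our own choice, not something taken from this page.

```python
# Scripted baseline for the MoveToBeacon mini-game, closely following the
# example agents shipped with pysc2; assumes a recent pysc2 release.
import numpy as np
from pysc2.agents import base_agent
from pysc2.lib import actions, features

_PLAYER_NEUTRAL = features.PlayerRelative.NEUTRAL  # the beacon is a neutral unit
FUNCTIONS = actions.FUNCTIONS


class MoveToBeaconAgent(base_agent.BaseAgent):
    """Selects the army once, then repeatedly moves it onto the beacon."""

    def step(self, obs):
        super().step(obs)
        if FUNCTIONS.Move_screen.id in obs.observation.available_actions:
            player_relative = obs.observation.feature_screen.player_relative
            beacon_cells = np.argwhere(player_relative == _PLAYER_NEUTRAL)
            if beacon_cells.size == 0:
                return FUNCTIONS.no_op()
            # argwhere yields (y, x) pairs; Move_screen expects an (x, y) point.
            target = beacon_cells.mean(axis=0).round()[::-1]
            return FUNCTIONS.Move_screen("now", target)
        return FUNCTIONS.select_army("select")
```

Such an agent can be run with pysc2's built-in runner, e.g. python -m pysc2.bin.agent --map MoveToBeacon --agent <your_module>.MoveToBeaconAgent, where <your_module> is the module containing the class above.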
