The StarCraft Multi-Agent Challenges+ : Learning of Multi-Stage Tasks and Environmental Factors without Precise Reward Functions

Mingyu Kim Jihwan Oh Yongsik Lee Joonkee Kim Seonghwan Kim Song Chong Se-Young Yun

Abstract

In this paper, we propose a novel benchmark called the StarCraft Multi-Agent Challenges+ (SMAC+), in which agents learn to perform multi-stage tasks and to use environmental factors without precise reward functions. The previous challenge (SMAC), recognized as a standard benchmark for Multi-Agent Reinforcement Learning (MARL), is mainly concerned with ensuring that all agents cooperatively eliminate approaching adversaries solely through fine manipulation guided by obvious reward functions. This challenge, in contrast, targets the exploration capability of MARL algorithms: agents must efficiently learn implicit multi-stage tasks and environmental factors in addition to micro-control. The study covers both offensive and defensive scenarios. In the offensive scenarios, agents must learn to first find opponents and then eliminate them. The defensive scenarios require agents to use topographic features; for example, agents need to position themselves behind protective structures to make it harder for enemies to attack. We investigate MARL algorithms under SMAC+ and observe that recent approaches work well in settings similar to the previous challenge but perform poorly in offensive scenarios. Additionally, we observe that an enhanced exploration approach has a positive effect on performance yet is unable to completely solve all scenarios. This study proposes new directions for future research.
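
To make the scenario setup concrete, the following is a minimal random-policy rollout sketch. It assumes the accompanying repository keeps SMAC's StarCraft2Env interface and that a map identifier such as "off_hard" matches the benchmark names listed further down; both are assumptions rather than verified details of the release.

```python
# Minimal random-policy rollout sketch for a SMAC+ offensive scenario.
# Assumptions: the smac_exp repo keeps SMAC's StarCraft2Env interface,
# and "off_hard" is a valid SMAC+ map name (hypothetical here).
import numpy as np
from smac.env import StarCraft2Env

env = StarCraft2Env(map_name="off_hard")
env_info = env.get_env_info()
n_agents = env_info["n_agents"]

for episode in range(5):
    env.reset()
    terminated = False
    episode_return = 0.0
    while not terminated:
        # Each agent samples uniformly from its currently available actions.
        actions = []
        for agent_id in range(n_agents):
            avail = env.get_avail_agent_actions(agent_id)
            actions.append(np.random.choice(np.nonzero(avail)[0]))
        reward, terminated, info = env.step(actions)
        episode_return += reward
    print(f"Episode {episode}: return={episode_return}, won={info.get('battle_won', False)}")

env.close()
```

Such a rollout only verifies that the environment loads and steps; the benchmark's point is that agents must discover the implicit find-then-eliminate structure (or the use of terrain) rather than follow a dense, hand-shaped reward.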

Code Repositories

osilab-kaist/smac_exp (Official, PyTorch)

Benchmarks

Benchmark                                  Methodology   Median Win Rate
smac-on-smac-def-armored-parallel          IQL           0.0
smac-on-smac-def-armored-sequential        IQL           9.4
smac-on-smac-def-infantry-parallel         IQL           40.0
smac-on-smac-def-infantry-sequential       IQL           93.8
smac-on-smac-def-outnumbered-parallel      IQL           0.0
smac-on-smac-def-outnumbered-sequential    IQL           0.0
smac-on-smac-off-hard-parallel             IQL           0.0
smac-on-smac-off-superhard-parallel        IQL           0.0
