baller2vec++: A Look-Ahead Multi-Entity Transformer For Modeling Coordinated Agents

Michael A. Alcorn; Anh Nguyen


Abstract

In many multi-agent spatiotemporal systems, agents operate under the influence of shared, unobserved variables (e.g., the play a team is executing in a game of basketball). As a result, the trajectories of the agents are often statistically dependent at any given time step; however, almost universally, multi-agent models implicitly assume the agents' trajectories are statistically independent at each time step. In this paper, we introduce baller2vec++, a multi-entity Transformer that can effectively model coordinated agents. Specifically, baller2vec++ applies a specially designed self-attention mask to a mixture of location and "look-ahead" trajectory sequences to learn the distributions of statistically dependent agent trajectories. We show that, unlike baller2vec (baller2vec++'s predecessor), baller2vec++ can learn to emulate the behavior of perfectly coordinated agents in a simulated toy dataset. Additionally, when modeling the trajectories of professional basketball players, baller2vec++ outperforms baller2vec by a wide margin.
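To make the masking idea concrete, below is a minimal PyTorch sketch of one way such a look-ahead self-attention mask could be constructed. The token layout (K location tokens followed by K look-ahead trajectory tokens per time step) and the fixed within-step agent ordering are illustrative assumptions, not the paper's exact scheme.

```python
import torch

def lookahead_attention_mask(n_agents: int, n_steps: int) -> torch.Tensor:
    """Boolean self-attention mask (True = may attend) for a sequence that
    interleaves, per time step, K "location" tokens followed by K
    "look-ahead trajectory" tokens.

    Illustrative assumptions (not the authors' exact layout):
      * token index 2*K*t + k      -> location of agent k at step t
      * token index 2*K*t + K + k  -> trajectory of agent k at step t
      * agents are processed in a fixed order within a step, so agent k's
        trajectory token may attend to the trajectory tokens of agents < k at
        the same step, making the within-step predictions statistically
        dependent rather than independent.
    """
    K, T = n_agents, n_steps
    L = 2 * K * T
    mask = torch.zeros(L, L, dtype=torch.bool)
    for t in range(T):
        # All tokens before this step's trajectory tokens (i.e., all earlier
        # steps plus this step's locations) are visible to every token at step t.
        visible_hist = 2 * K * t + K
        for k in range(K):
            loc = 2 * K * t + k
            traj = 2 * K * t + K + k
            mask[loc, :visible_hist] = True
            mask[traj, :visible_hist] = True
            # Trajectory tokens may additionally attend to the already-decided
            # trajectories of agents earlier in the ordering (and themselves;
            # the token's input is assumed to contain no target information).
            mask[traj, 2 * K * t + K: traj + 1] = True
    return mask
```

If used with torch.nn.MultiheadAttention, note that its boolean attn_mask marks positions that are *not* allowed to attend, so a mask built this way would be passed inverted (attn_mask=~mask).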

Code Repositories

airalcorn2/baller2vecplusplus (official, PyTorch)

Benchmarks

Benchmark: trajectory-modeling-on-nba-sportvu
Methodology: baller2vec++
Metrics: 1x1 NLL: 0.472
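The NLL reported here is the average negative log-likelihood of the ground-truth trajectories under the model's predicted distributions. As a rough illustration (assuming trajectories are discretized into bins and modeled with a categorical distribution, as in the baller2vec family; variable names are hypothetical), the metric reduces to a mean cross-entropy:

```python
import torch
import torch.nn.functional as F

def trajectory_nll(logits: torch.Tensor, targets: torch.Tensor) -> torch.Tensor:
    """Average negative log-likelihood of the true trajectory bins.

    logits:  (n_predictions, n_trajectory_bins) model outputs, one row per
             agent/time-step prediction
    targets: (n_predictions,) index of the ground-truth trajectory bin
    """
    # Cross-entropy with mean reduction equals the mean negative
    # log-probability assigned to the true bins.
    return F.cross_entropy(logits, targets, reduction="mean")
```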
