Scott Reed; Konrad Zolna; Emilio Parisotto; Sergio Gomez Colmenarejo; Alexander Novikov; Gabriel Barth-Maron; Mai Gimenez; Yury Sulsky; Jackie Kay; Jost Tobias Springenberg; Tom Eccles; Jake Bruce; Ali Razavi; Ashley Edwards; Nicolas Heess; Yutian Chen; Raia Hadsell; Oriol Vinyals; Mahyar Bordbar; Nando de Freitas

Abstract
Inspired by progress in large-scale language modeling, we apply a similar approach towards building a single generalist agent beyond the realm of text outputs. The agent, which we refer to as Gato, works as a multi-modal, multi-task, multi-embodiment generalist policy. The same network with the same weights can play Atari, caption images, chat, stack blocks with a real robot arm and much more, deciding based on its context whether to output text, joint torques, button presses, or other tokens. In this report we describe the model and the data, and document the current capabilities of Gato.
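The abstract notes that Gato decides from context whether to emit text tokens, joint torques, or button presses, which requires all modalities to share one flat token vocabulary. The paper describes mu-law companding and discretizing continuous values (such as joint torques) into 1024 uniform bins offset past the text vocabulary. The sketch below illustrates that idea; the vocabulary size, function names, and constants are illustrative, not taken from the released implementation.

```python
import numpy as np

VOCAB_TEXT = 32_000      # illustrative text vocabulary size
NUM_BINS = 1024          # bins for discretizing continuous values

def mu_law(x, mu=100.0, m=256.0):
    # mu-law companding squashes continuous values toward [-1, 1],
    # spending more resolution near zero where torques typically live
    return np.sign(x) * np.log(np.abs(x) * mu + 1.0) / np.log(m * mu + 1.0)

def tokenize_continuous(values):
    # squash, clip to [-1, 1], then discretize into NUM_BINS uniform bins;
    # offsetting by VOCAB_TEXT keeps continuous-value token ids disjoint
    # from text token ids, so one transformer vocabulary covers both
    squashed = np.clip(mu_law(np.asarray(values, dtype=np.float64)), -1.0, 1.0)
    bins = np.floor((squashed + 1.0) / 2.0 * (NUM_BINS - 1)).astype(int)
    return (VOCAB_TEXT + bins).tolist()

# e.g. a zero torque lands in the middle bin of the continuous range
tokens = tokenize_continuous([0.0, -1.0, 1.0])
```

Because the token ranges are disjoint, the policy can be trained as an ordinary autoregressive sequence model, with masking applied at sampling time so that only tokens valid for the current output type (text, torque bin, button id) are produced.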
Benchmarks
| Benchmark | Model | Average | Group 1 | Group 2 | Group 3 | Group 4 | Group 5 |
|---|---|---|---|---|---|---|---|
| skill-generalization-on-rgb-stacking | Gato | 50.2 | 24.5 | 33 | 50.5 | 76.5 | 66.5 |
| skill-mastery-on-rgb-stacking | Gato | 75.6 | 58 | 57.6 | 78.5 | 89 | 95.1 |