NEAT: Neural Attention Fields for End-to-End Autonomous Driving
Kashyap Chitta; Aditya Prakash; Andreas Geiger

Abstract
Efficient reasoning about the semantic, spatial, and temporal structure of a scene is a crucial prerequisite for autonomous driving. We present NEural ATtention fields (NEAT), a novel representation that enables such reasoning for end-to-end imitation learning models. NEAT is a continuous function which maps locations in Bird's Eye View (BEV) scene coordinates to waypoints and semantics, using intermediate attention maps to iteratively compress high-dimensional 2D image features into a compact representation. This allows our model to selectively attend to relevant regions in the input while ignoring information irrelevant to the driving task, effectively associating the images with the BEV representation. In a new evaluation setting involving adverse environmental conditions and challenging scenarios, NEAT outperforms several strong baselines and achieves driving scores on par with the privileged CARLA expert used to generate its training data. Furthermore, visualizing the attention maps for models with NEAT intermediate representations provides improved interpretability.
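The core idea in the abstract — a continuous function that, for a queried BEV location, attends over image features to produce a compact representation and then decodes waypoints and semantics — can be sketched as follows. This is a minimal, single-iteration NumPy illustration, not the paper's architecture: the dimensions, linear weights, and the `neat_field` function are illustrative placeholders, and the paper refines the attention map iteratively rather than in one step.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (placeholders, not from the paper):
# C-dim features for P image patches, N_CLASSES semantic classes.
C, P, N_CLASSES = 32, 64, 6
W_att = rng.standard_normal((C + 2, 1)) * 0.1          # scores each (patch, query) pair
W_way = rng.standard_normal((C + 2, 2)) * 0.1          # decodes a 2D waypoint offset
W_sem = rng.standard_normal((C + 2, N_CLASSES)) * 0.1  # decodes semantic logits

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def neat_field(query_xy, patch_feats):
    """Map a BEV query location to (waypoint offset, semantic logits).

    Attention over image patches compresses the high-dimensional 2D
    features into a single C-dim vector conditioned on the query, so the
    model can attend to task-relevant regions and ignore the rest.
    """
    q = np.asarray(query_xy, dtype=float)                   # (2,) BEV coordinate
    paired = np.hstack([patch_feats, np.tile(q, (P, 1))])   # (P, C+2)
    att = softmax(paired @ W_att)                           # (P, 1) attention map
    z = (att * patch_feats).sum(axis=0)                     # (C,) compressed feature
    zq = np.concatenate([z, q])                             # condition decoder on query
    return zq @ W_way, zq @ W_sem                           # waypoint offset, semantics

feats = rng.standard_normal((P, C))
offset, logits = neat_field([2.0, 5.0], feats)
```

Because the field is queried per BEV location, the same attention maps that drive the prediction can be visualized directly, which is the source of the interpretability the abstract mentions.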
Benchmarks
| Benchmark | Method | Metrics |
|---|---|---|
| autonomous-driving-on-carla-leaderboard | NEAT | Driving Score: 21.83 · Infraction Penalty: 0.65 · Route Completion: 41.71 |
| carla-longest6-on-carla | Neural Attention Fields (NEAT) | Driving Score: 24 · Infraction Score: 0.71 · Route Completion: 62 |