V2VNet: Vehicle-to-Vehicle Communication for Joint Perception and Prediction
Tsun-Hsuan Wang, Sivabalan Manivasagam, Ming Liang, Bin Yang, Wenyuan Zeng, James Tu, Raquel Urtasun

Abstract
In this paper, we explore the use of vehicle-to-vehicle (V2V) communication to improve the perception and motion forecasting performance of self-driving vehicles. By intelligently aggregating the information received from multiple nearby vehicles, we can observe the same scene from different viewpoints. This allows us to see through occlusions and detect actors at long range, where the observations are very sparse or non-existent. We also show that our approach of sending compressed deep feature map activations achieves high accuracy while satisfying communication bandwidth requirements.
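To make the idea of transmitting compressed intermediate features concrete, below is a minimal PyTorch sketch of the general pipeline the abstract describes: each vehicle computes a bird's-eye-view feature map, compresses it for transmission, and the receiver decompresses, warps the message into its own frame, and fuses it with its own features. This is a hedged illustration only, not the authors' implementation: the tiny backbone, the 1x1-conv codec, the mean fusion, and all names and shapes are placeholders (V2VNet itself uses a learned compression module and a spatially aware graph neural network for aggregation).

```python
# Minimal, self-contained sketch of intermediate-feature V2V fusion (illustrative only).
import torch
import torch.nn as nn
import torch.nn.functional as F


class TinyBEVBackbone(nn.Module):
    """Toy stand-in for a LiDAR bird's-eye-view backbone (placeholder)."""
    def __init__(self, in_ch=32, feat_ch=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, feat_ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat_ch, feat_ch, 3, padding=1), nn.ReLU(),
        )

    def forward(self, bev):
        return self.net(bev)


class FeatureCodec(nn.Module):
    """1x1-conv bottleneck standing in for a learned compressor/decompressor."""
    def __init__(self, feat_ch=64, code_ch=8):
        super().__init__()
        self.enc = nn.Conv2d(feat_ch, code_ch, 1)  # small code is what gets "transmitted"
        self.dec = nn.Conv2d(code_ch, feat_ch, 1)

    def compress(self, feat):
        return self.enc(feat)

    def decompress(self, code):
        return self.dec(code)


def warp_to_ego(feat, rel_pose_theta):
    """Warp a sender's BEV feature map into the ego frame.
    rel_pose_theta is an (N, 2, 3) affine matrix -- a hypothetical interface."""
    grid = F.affine_grid(rel_pose_theta, feat.shape, align_corners=False)
    return F.grid_sample(feat, grid, align_corners=False)


def fuse(ego_feat, received_feats):
    """Aggregate ego and received feature maps (mean fusion as a placeholder)."""
    stacked = torch.stack([ego_feat] + received_feats, dim=0)
    return stacked.mean(dim=0)


if __name__ == "__main__":
    backbone, codec = TinyBEVBackbone(), FeatureCodec()
    head = nn.Conv2d(64, 7, 1)  # toy detection head (e.g., per-cell box parameters)

    ego_bev = torch.randn(1, 32, 128, 128)    # ego vehicle's BEV input
    other_bev = torch.randn(1, 32, 128, 128)  # a nearby vehicle's BEV input

    # Each vehicle computes and compresses its own intermediate features.
    ego_feat = backbone(ego_bev)
    code = codec.compress(backbone(other_bev))  # bandwidth-friendly message

    # Ego receives the code, decompresses, warps it into its frame, and fuses.
    identity = torch.tensor([[[1., 0., 0.], [0., 1., 0.]]])  # placeholder relative pose
    received = warp_to_ego(codec.decompress(code), identity)
    fused = fuse(ego_feat, [received])

    print(head(fused).shape)  # torch.Size([1, 7, 128, 128])
```

The key design choice the sketch mirrors is fusing at the intermediate-feature level rather than exchanging raw sensor data or final detections, which is what lets the approach trade little accuracy for a much smaller message size.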
Benchmarks
| Benchmark | Methodology | Metrics |
|---|---|---|
| 3d-object-detection-on-opv2v | V2VNet (PointPillar backbone) | AP@0.7 (Culver City): 0.734; AP@0.7 (Default): 0.822 |
| 3d-object-detection-on-v2x-sim | V2VNet | mAOE: 0.349; mAP: 21.4; mASE: 0.255; mATE: 0.768 |
| 3d-object-detection-on-v2xset | V2VNet | AP@0.5 (Noisy): 0.791; AP@0.5 (Perfect): 0.845; AP@0.7 (Noisy): 0.493; AP@0.7 (Perfect): 0.677 |