Deformable DETR: Deformable Transformers for End-to-End Object Detection
Xizhou Zhu Weijie Su Lewei Lu Bin Li Xiaogang Wang Jifeng Dai

Abstract
DETR has been recently proposed to eliminate the need for many hand-designed components in object detection while demonstrating good performance. However, it suffers from slow convergence and limited feature spatial resolution, due to the limitation of Transformer attention modules in processing image feature maps. To mitigate these issues, we propose Deformable DETR, whose attention modules attend to only a small set of key sampling points around a reference point. Deformable DETR achieves better performance than DETR (especially on small objects) with 10× fewer training epochs. Extensive experiments on the COCO benchmark demonstrate the effectiveness of our approach. Code is released at https://github.com/fundamentalvision/Deformable-DETR.
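The core idea of attending to only a few sampled points around a reference can be sketched in a few lines. Below is a minimal NumPy illustration of single-query, single-head deformable sampling: in the actual model the offsets and attention weights are predicted from the query feature by linear layers, and sampling runs over multiple heads, levels, and queries; here they are passed in directly as assumed inputs to keep the sketch self-contained.

```python
import numpy as np

def bilinear_sample(feat, x, y):
    """Bilinearly sample a (H, W, C) feature map at continuous coords (x, y)."""
    H, W, _ = feat.shape
    x = min(max(x, 0.0), W - 1.0)   # clamp to valid range
    y = min(max(y, 0.0), H - 1.0)
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1, y1 = min(x0 + 1, W - 1), min(y0 + 1, H - 1)
    wx, wy = x - x0, y - y0
    return ((1 - wx) * (1 - wy) * feat[y0, x0]
            + wx * (1 - wy) * feat[y0, x1]
            + (1 - wx) * wy * feat[y1, x0]
            + wx * wy * feat[y1, x1])

def deformable_attention(feat, ref_xy, offsets, attn_weights):
    """Aggregate K bilinearly sampled values around a reference point.

    feat:         (H, W, C) feature map (values, after projection)
    ref_xy:       (2,) reference point in pixel coords (x, y)
    offsets:      (K, 2) sampling offsets relative to the reference
    attn_weights: (K,) attention weights, assumed softmax-normalized
    """
    out = np.zeros(feat.shape[-1])
    for (dx, dy), w in zip(offsets, attn_weights):
        out += w * bilinear_sample(feat, ref_xy[0] + dx, ref_xy[1] + dy)
    return out
```

Because only K points are sampled per query (instead of all H×W keys as in dense attention), the cost per query is O(K·C) rather than O(H·W·C), which is what allows attending over high-resolution feature maps.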
Benchmarks
| Benchmark | Methodology | Metrics |
|---|---|---|
| 2d-object-detection-on-sardet-100k | Deformable DETR | box mAP: 50.0 |
| object-detection-on-coco | Deformable DETR (ResNeXt-101+DCN) | box mAP: 52.3; AP50: 71.9; AP75: 58.1; APS: 34.4; APM: 54.4; APL: 65.6; Hardware Burden: 17G; Operations per network pass: 17.3G |
| object-detection-on-coco-o | Deformable-DETR (ResNet-50) | Average mAP: 18.5; Effective Robustness: -1.49 |