Tracking Meets LoRA: Faster Training, Larger Model, Stronger Performance

Liting Lin, Heng Fan, Zhipeng Zhang, Yaowei Wang, Yong Xu, Haibin Ling
Abstract

Motivated by Parameter-Efficient Fine-Tuning (PEFT) in large language models, we propose LoRAT, a method that unveils the power of large ViT models for tracking within laboratory-level resources. The essence of our work lies in adapting LoRA, a technique that fine-tunes a small subset of model parameters without adding inference latency, to the domain of visual tracking. However, unique challenges and potential domain gaps make this transfer less straightforward than it first appears. Firstly, transformer-based trackers construct unshared position embeddings for the template and search images. This poses a challenge when transferring LoRA, which usually requires design consistency with the pre-trained backbone, to downstream tasks. Secondly, the inductive bias inherent in convolutional heads diminishes the effectiveness of parameter-efficient fine-tuning in tracking models. To overcome these limitations, we first decouple the position embeddings in transformer-based trackers into shared spatial ones and independent type ones. The shared embeddings, which describe the absolute coordinates of multi-resolution images (namely, the template and search images), are inherited from the pre-trained backbones. In contrast, the independent embeddings indicate the sources of each token and are learned from scratch. Furthermore, we design an anchor-free head based solely on an MLP to suit PEFT, enabling better performance with less computational overhead. With our design, 1) it becomes practical to train trackers with the ViT-g backbone on GPUs with only 25.8 GB of memory (batch size of 16); 2) we reduce the training time of the L-224 variant from 35.0 to 10.8 GPU hours; 3) we improve the LaSOT SUC score from 0.703 to 0.742 with the L-224 variant; 4) we accelerate the inference speed of the L-224 variant from 52 to 119 FPS. Code and models are available at https://github.com/LitingLin/LoRAT.
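
For reference, the sketch below illustrates the core LoRA mechanism the paper builds on: the pre-trained linear weight stays frozen, a low-rank update B·A is trained, and the update can later be merged back into the weight so that inference adds no extra latency. This is a minimal PyTorch sketch under assumed names (LoRALinear, r, alpha); it is not the authors' implementation, which is available at the repository linked above.

    import torch
    import torch.nn as nn

    class LoRALinear(nn.Module):
        # Wraps a frozen nn.Linear with a trainable low-rank update: y = Wx + (alpha/r) * B(A(x)).
        def __init__(self, base: nn.Linear, r: int = 4, alpha: float = 8.0):
            super().__init__()
            self.base = base
            for p in self.base.parameters():
                p.requires_grad = False  # pre-trained weights stay frozen
            self.lora_a = nn.Parameter(torch.randn(r, base.in_features) * 0.01)  # down-projection
            self.lora_b = nn.Parameter(torch.zeros(base.out_features, r))        # up-projection, zero-init
            self.scaling = alpha / r

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return self.base(x) + (x @ self.lora_a.T @ self.lora_b.T) * self.scaling

        @torch.no_grad()
        def merge(self) -> None:
            # Fold the low-rank update into the frozen weight so inference adds no latency.
            self.base.weight += (self.lora_b @ self.lora_a) * self.scaling

In practice such wrappers are typically applied to the linear layers of the frozen ViT backbone, leaving only the low-rank factors (and, in this paper's case, the lightweight MLP head) trainable.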

Code Repositories

litinglin/lorat (Official, PyTorch), mentioned in GitHub

Benchmarks

Benchmark                              | Methodology | Metrics
visual-object-tracking-on-got-10k      | LoRAT-L-378 | Average Overlap: 77.5, Success Rate 0.5: 86.2, Success Rate 0.75: 78.1
visual-object-tracking-on-got-10k      | LoRAT-g-378 | Average Overlap: 78.9, Success Rate 0.5: 87.8, Success Rate 0.75: 80.7
visual-object-tracking-on-lasot        | LoRAT-L-378 | AUC: 75.1, Normalized Precision: 84.1, Precision: 82.0
visual-object-tracking-on-lasot        | LoRAT-g-378 | AUC: 76.2, Normalized Precision: 85.3, Precision: 83.5
visual-object-tracking-on-lasot-ext    | LoRAT-g-378 | AUC: 56.5, Normalized Precision: 69.0, Precision: 64.9
visual-object-tracking-on-lasot-ext    | LoRAT-L-378 | AUC: 56.6, Normalized Precision: 69.0, Precision: 65.1
visual-object-tracking-on-needforspeed | LoRAT-L-378 | AUC: 0.667
visual-object-tracking-on-needforspeed | LoRAT-g-378 | AUC: 0.681
visual-object-tracking-on-tnl2k        | LoRAT-g-378 | AUC: 62.7, Precision: 67.8
visual-object-tracking-on-tnl2k        | LoRAT-L-378 | AUC: 62.3, Precision: 67.0
visual-object-tracking-on-trackingnet  | LoRAT-g-378 | Accuracy: 86.0, Normalized Precision: 90.2, Precision: 86.1
visual-object-tracking-on-trackingnet  | LoRAT-L-378 | Accuracy: 85.6, Normalized Precision: 89.7, Precision: 85.4
visual-object-tracking-on-uav123       | LoRAT-L-378 | AUC: 0.725
visual-object-tracking-on-uav123       | LoRAT-g-378 | AUC: 0.739
