LoFT: Parameter-Efficient Fine-Tuning for Long-tailed Semi-Supervised Learning in Open-World Scenarios

Jiahao Chen, Zhiyuan Huang, Yurou Liu, Bing Su


Abstract

Long-tailed learning has garnered increasing attention due to its wide applicability in real-world scenarios. Among existing approaches, Long-Tailed Semi-Supervised Learning (LTSSL) has emerged as an effective solution by incorporating a large amount of unlabeled data into the imbalanced labeled dataset. However, most prior LTSSL methods are designed to train models from scratch, which often leads to issues such as overconfidence and low-quality pseudo-labels. To address these challenges, we extend LTSSL into the foundation-model fine-tuning paradigm and propose a novel framework: LoFT (Long-tailed semi-supervised learning via parameter-efficient Fine-Tuning). We demonstrate that fine-tuned foundation models can generate more reliable pseudo-labels, thereby benefiting imbalanced learning. Furthermore, we explore a more practical setting by investigating semi-supervised learning under open-world conditions, where the unlabeled data may include out-of-distribution (OOD) samples. To handle this problem, we propose LoFT-OW (LoFT under Open-World scenarios) to improve the discriminative ability. Experimental results on multiple benchmarks demonstrate that our method achieves superior performance compared to previous approaches, even when utilizing only 1% of the unlabeled data used by previous works.
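
The abstract does not provide implementation details, but the core mechanism it describes, generating pseudo-labels from a parameter-efficiently fine-tuned foundation model and discarding low-confidence or likely out-of-distribution samples, can be sketched roughly as follows. This is a minimal illustration only: the names (LoRALinear, pseudo_label), the low-rank adapter on a classifier head, and the confidence thresholds are assumptions for the sketch, not the paper's actual LoFT/LoFT-OW algorithm.

```python
# Illustrative sketch only; class/function names and thresholds are hypothetical,
# not taken from the LoFT paper.
import torch
import torch.nn as nn
import torch.nn.functional as F


class LoRALinear(nn.Module):
    """Frozen linear layer plus a low-rank trainable update (parameter-efficient)."""

    def __init__(self, in_dim, out_dim, rank=4, alpha=8.0):
        super().__init__()
        self.base = nn.Linear(in_dim, out_dim)
        self.base.weight.requires_grad_(False)  # pretrained weight stays frozen
        self.base.bias.requires_grad_(False)
        self.A = nn.Parameter(torch.randn(rank, in_dim) * 0.01)  # trainable low-rank factor
        self.B = nn.Parameter(torch.zeros(out_dim, rank))        # trainable low-rank factor
        self.scale = alpha / rank

    def forward(self, x):
        # Output = frozen base projection + scaled low-rank correction B(Ax)
        return self.base(x) + self.scale * F.linear(F.linear(x, self.A), self.B)


@torch.no_grad()
def pseudo_label(head, feats, tau=0.95, tau_ood=0.5):
    """Assign pseudo-labels to unlabeled features, keeping only confident predictions
    and flagging very low-confidence samples as likely out-of-distribution."""
    probs = F.softmax(head(feats), dim=-1)
    conf, labels = probs.max(dim=-1)
    keep = conf >= tau       # confident in-distribution pseudo-labels
    ood = conf < tau_ood     # treat very low confidence as OOD (simple stand-in criterion)
    return labels[keep], keep, ood


if __name__ == "__main__":
    feat_dim, num_classes = 512, 10
    head = LoRALinear(feat_dim, num_classes)      # only A and B receive gradients
    unlabeled_feats = torch.randn(256, feat_dim)  # stand-in for foundation-model features
    labels, keep, ood = pseudo_label(head, unlabeled_feats)
    print(f"kept {keep.sum().item()} pseudo-labels, flagged {ood.sum().item()} as OOD")
```

In this sketch only the low-rank matrices A and B are updated, which is what makes the fine-tuning parameter-efficient; the OOD filter is a plain max-softmax threshold standing in for whatever discriminative criterion LoFT-OW actually employs.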
