Breaking Free Transformer Models: Task-specific Context Attribution Promises Improved Generalizability Without Fine-tuning Pre-trained LLMs

Stepan Tytarenko, Mohammad Ruhul Amin

Abstract

Fine-tuning large pre-trained language models (LLMs) on particular datasets is a commonly employed strategy in Natural Language Processing (NLP) classification tasks. However, this approach usually comes at the cost of the model's generalizability. In this paper, we present a framework that maintains generalizability and enhances downstream-task performance by utilizing task-specific context attribution. We show that a linear transformation of the text representation from any transformer model, using a task-specific concept operator, results in a projection onto the latent concept space, referred to in this paper as context attribution. The concept operator is optimized during the supervised learning stage via novel loss functions. The proposed framework demonstrates that context attribution of the text representation for each task objective can improve the capacity of the discriminator function and thus achieve better performance on the classification task. Experimental results on three datasets, namely HateXplain, IMDB reviews, and Social Media Attributions, illustrate that the proposed model attains superior accuracy and generalizability. Specifically, for non-fine-tuned BERT on the HateXplain dataset, we observe an 8% improvement in accuracy and a 10% improvement in F1-score, while on the IMDB dataset the fine-tuned state-of-the-art XLNet is outperformed by 1% in both accuracy and F1-score. Furthermore, in an out-of-domain cross-dataset test, DistilBERT fine-tuned on IMDB in conjunction with the proposed model improves the F1-score on the HateXplain dataset by 7%. For the Social Media Attributions dataset of YouTube comments, we observe a 5.2% increase in F1-score. The proposed framework is implemented in PyTorch and is available open-source on GitHub.
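To make the core idea concrete, below is a minimal PyTorch sketch of the kind of task-specific concept operator the abstract describes: a frozen pre-trained encoder yields a text representation, a learned linear operator projects it onto a latent concept space (the "context attribution"), and a light classifier head discriminates in that space. The class and parameter names (ContextAttribution, concept_dim) are illustrative assumptions, and plain cross-entropy stands in for the paper's novel loss functions, which are not reproduced here; see the official stepantita/space-model repository for the actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ContextAttribution(nn.Module):
    """Illustrative sketch: linear concept operator plus classifier head.

    The pre-trained encoder is kept frozen; only the concept operator
    and the classifier head are trained, leaving the encoder's
    general-purpose representations intact.
    """

    def __init__(self, hidden_dim: int, concept_dim: int, num_classes: int):
        super().__init__()
        # Task-specific concept operator: a linear map from the encoder's
        # representation space onto the latent concept space.
        self.concept_operator = nn.Linear(hidden_dim, concept_dim, bias=False)
        # Discriminator head operating on the projected representation.
        self.classifier = nn.Linear(concept_dim, num_classes)

    def forward(self, text_repr: torch.Tensor) -> torch.Tensor:
        # text_repr: (batch, hidden_dim) pooled encoder output, e.g. the
        # [CLS] vector from a frozen BERT.
        concept_proj = self.concept_operator(text_repr)
        return self.classifier(concept_proj)

if __name__ == "__main__":
    model = ContextAttribution(hidden_dim=768, concept_dim=128, num_classes=2)
    # Stand-in for pooled BERT [CLS] embeddings of a batch of 4 texts.
    h = torch.randn(4, 768)
    logits = model(h)
    # Plain cross-entropy as a placeholder for the paper's loss functions.
    loss = F.cross_entropy(logits, torch.tensor([0, 1, 1, 0]))
    loss.backward()
```

Because gradients flow only through the concept operator and the head, the base model's weights never change, which is what lets the same frozen encoder serve multiple tasks without the loss of generalizability that full fine-tuning incurs.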

Code Repositories

stepantita/space-model (official, PyTorch)

Benchmarks

Benchmark | Methodology | Metrics
sentiment-analysis-on-imdb | Space-XLNet | Accuracy: 94.88
sentiment-analysis-on-imdb-movie-reviews-1 | Space-DistilBERT | Accuracy (2 classes): 0.8322, F1 Macro: 0.8320
sentiment-analysis-on-imdb-movie-reviews-1 | Space-XLNet | Accuracy (2 classes): 0.9488, F1 Macro: 0.9487
text-classification-on-hatexplain-1 | XLNet | Accuracy (2 classes): 0.8160, F1 Macro: 0.8156
text-classification-on-hatexplain-1 | Space-XLNet | Accuracy (2 classes): 0.8798, F1 Macro: 0.8797
text-classification-on-hatexplain-1 | BERT-base | Accuracy (2 classes): 0.6588, F1 Macro: 0.6555
text-classification-on-hatexplain-1 | Space-BERT | Accuracy (2 classes): 0.8110, F1 Macro: 0.8108
text-classification-on-imdb-movie-reviews-1 | XLNet | Accuracy (2 classes): 0.9387
text-classification-on-imdb-movie-reviews-1 | Space-XLNet | F1 Macro: 0.9487
text-classification-on-social-media | BERT-base | Accuracy (2 classes): 0.8220, F1 Macro: 0.7484
text-classification-on-social-media | Space-BERT | Accuracy (2 classes): 0.8309, F1 Macro: 0.8006
