R2 Loss: Range Restriction Loss for Model Compression and Quantization

Arnav Kundu, Chungkuk Yoo, Srijan Mishra, Minsik Cho, Saurabh Adya
Abstract

Model quantization and compression are widely used techniques to reduce computing-resource usage at inference time. While state-of-the-art works have achieved reasonable accuracy at higher bit widths such as 4-bit or 8-bit, quantizing or compressing a model further, e.g., to 1-bit or 2-bit, remains challenging. To overcome this challenge, we focus on outliers in the weights of a pre-trained model, which disrupt effective lower-bit quantization and compression. In this work, we propose Range Restriction Loss (R2-Loss) for building lower-bit quantization- and compression-friendly models by removing outliers from weights during pre-training. By effectively restricting the range of weights, we mold the overall distribution into a tight shape that ensures high quantization bit resolution, thereby allowing model compression and quantization techniques to better utilize their limited numeric representation power. We introduce three variants, L-inf R2-Loss, its extension Margin R2-Loss, and a new Soft-Min-Max R2-Loss, to be used as auxiliary losses during full-precision model training. These R2-Losses suit different use cases: L-inf and Margin R2-Loss are effective for symmetric quantization, while Soft-Min-Max R2-Loss shows better performance for model compression. In our experiments, R2-Loss improves lower-bit quantization accuracy with state-of-the-art post-training quantization (PTQ), quantization-aware training (QAT), and model compression techniques. With R2-Loss, MobileNet-V2 2-bit weight and 8-bit activation PTQ, MobileNet-V1 2-bit weight and activation QAT, and ResNet18 1-bit weight compression improve to 59.49% from 50.66%, 59.05% from 55.96%, and 52.58% from 45.54%, respectively.
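The three auxiliary losses described above can be sketched as follows. This is a minimal illustrative reading in NumPy, not the paper's exact formulation: the per-tensor reductions, the `margin` parameter, and the log-sum-exp `temperature` used to soften min/max are assumptions made for clarity.

```python
import numpy as np

def linf_r2_loss(weights):
    # L-inf reading: penalize the largest absolute weight of each tensor,
    # pulling outliers back toward the bulk of the distribution.
    return sum(np.max(np.abs(w)) for w in weights) / len(weights)

def margin_r2_loss(weights, margin=1.0):
    # Margin extension: only the portion of a weight outside the symmetric
    # band [-margin, margin] incurs a (squared) penalty.
    return sum(np.mean(np.maximum(np.abs(w) - margin, 0.0) ** 2)
               for w in weights) / len(weights)

def soft_min_max_r2_loss(weights, temperature=10.0):
    # Soft-Min-Max reading: a differentiable surrogate for the weight range
    # max(w) - min(w) via log-sum-exp, suited to asymmetric ranges such as
    # those arising in weight clustering / compression.
    total = 0.0
    for w in weights:
        flat = w.ravel()
        soft_max = np.log(np.sum(np.exp(temperature * flat))) / temperature
        soft_min = -np.log(np.sum(np.exp(-temperature * flat))) / temperature
        total += soft_max - soft_min
    return total / len(weights)
```

During full-precision training, one of these terms would be added to the task loss with a small weighting coefficient, so that range restriction shapes the weights without dominating the primary objective.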

Benchmarks

| Benchmark | Methodology | Metrics |
| --- | --- | --- |
| model-compression-on-imagenet | MobileNet-v1 + 2bit-2dim model compression using DKM | Top-1: 53.99 |
| model-compression-on-imagenet | ResNet-18 + 4bit-1dim model compression using DKM | Top-1: 70.52 |
| model-compression-on-imagenet | ResNet-18 + 2bit-1dim model compression using DKM | Top-1: 68.63 |
| model-compression-on-imagenet | MobileNet-v1 + 2bit-1dim model compression using DKM | Top-1: 67.62 |
| model-compression-on-imagenet | MobileNet-v1 + 1bit-1dim model compression using DKM | Top-1: 52.58 |
| model-compression-on-imagenet | ResNet-18 + 4bit-4dim model compression using DKM | Top-1: 66.1 |
| model-compression-on-imagenet | MobileNet-v1 + 4bit-4dim model compression using DKM | Top-1: 61.4 |
| model-compression-on-imagenet | ResNet-18 + 2bit-2dim model compression using DKM | Top-1: 64.7 |
| model-compression-on-imagenet | ResNet-18 + 1bit-1dim model compression using DKM | Top-1: 59.7 |
| model-compression-on-imagenet | MobileNet-v1 + 4bit-1dim model compression using DKM | Top-1: 69.63 |
| model-compression-on-qnli | MobileBERT + 2bit-1dim model compression using DKM | Accuracy: 82.13 |
| model-compression-on-qnli | MobileBERT + 1bit-1dim model compression using DKM | Accuracy: 63.17 |
| quantization-on-imagenet | ResNet-18 + PACT + R2Loss | Weight bits: 2, Activation bits: 4, Top-1 Accuracy (%): 68.45 |
| quantization-on-imagenet | MobileNet-v1 + EWGS + R2Loss | Weight bits: 4, Top-1 Accuracy (%): 69.79 |
| quantization-on-imagenet | MobileNet-v1 + LSQ + R2Loss | Top-1 Accuracy (%): 69.64 |