HyperAI


Learned Step Size Quantization

Steven K. Esser; Jeffrey L. McKinstry; Deepika Bablani; Rathinakumar Appuswamy; Dharmendra S. Modha

Abstract

Deep networks run with low precision operations at inference time offer power and space advantages over high precision alternatives, but need to overcome the challenge of maintaining high accuracy as precision decreases. Here, we present a method for training such networks, Learned Step Size Quantization, that achieves the highest accuracy to date on the ImageNet dataset when using models, from a variety of architectures, with weights and activations quantized to 2-, 3- or 4-bits of precision, and that can train 3-bit models that reach full precision baseline accuracy. Our approach builds upon existing methods for learning weights in quantized networks by improving how the quantizer itself is configured. Specifically, we introduce a novel means to estimate and scale the task loss gradient at each weight and activation layer's quantizer step size, such that it can be learned in conjunction with other network parameters. This approach works using different levels of precision as needed for a given system and requires only a simple modification of existing training code.
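The abstract's key idea, learning the quantizer step size jointly with the weights by passing a (scaled) task-loss gradient through the quantizer, can be sketched in a few lines of PyTorch. The function names (`grad_scale`, `round_pass`, `lsq_quantize`) and the gradient scale g = 1/sqrt(N * Q_P) follow the paper's formulation, but this is an illustrative sketch, not the authors' released code.

```python
import torch

def grad_scale(x, scale):
    # Keep the forward value of x, but scale the gradient flowing into it.
    return (x - x * scale).detach() + x * scale

def round_pass(x):
    # Round in the forward pass; pass the gradient straight through (STE).
    return (x.round() - x).detach() + x

def lsq_quantize(v, step, q_min, q_max, num_elements):
    """Quantize tensor v with a learnable step size.

    q_min/q_max are the integer quantization bounds (e.g. -4..3 for
    3-bit signed weights); num_elements is the number of values the
    step size applies to, used in the paper's gradient scale.
    """
    g = 1.0 / (num_elements * q_max) ** 0.5  # gradient scale from the paper
    s = grad_scale(step, g)
    v = torch.clamp(v / s, q_min, q_max)     # scale and clip to the quantized range
    v_bar = round_pass(v)                    # round with straight-through gradient
    return v_bar * s                         # rescale back to the original range
```

Because `step` is an ordinary `requires_grad` parameter, it is updated by the same optimizer step as the weights, which is the "simple modification of existing training code" the abstract refers to.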

Code Repositories

jiyoonkm/columnquant (PyTorch)
ZouJiu1/LSQplus (PyTorch)
Adlik/model_optimizer (PyTorch)
zhutmost/lsq-net (PyTorch)
Kelvinyu1117/LSQ-implementation (PyTorch)
Shunli-Wang/Tiny-YOLO-LSQ (PyTorch)
DeadAt0m/LSQ-PyTorch (PyTorch)
DeadAt0m/LSQFakeQuantize-PyTorch (PyTorch)

Benchmarks

Benchmark                       Methodology               Weight bits   Activation bits   Top-1 (%)
model-compression-on-imagenet   ADLIK-MO-ResNet50+W3A4    3             4                 77.34
model-compression-on-imagenet   ADLIK-MO-ResNet50+W4A4    4             4                 77.878
quantization-on-imagenet        ResNet50-W4A4 (paper)     4             4                 76.7
quantization-on-imagenet        ADLIK-MO-ResNet50-W4A4    4             4                 77.878
quantization-on-imagenet        ADLIK-MO-ResNet50-W3A4    3             4                 77.34
