Hai Victor Habi, Roy H. Jennings, Arnon Netzer

Abstract
Recent work in network quantization produced state-of-the-art results using mixed precision quantization. An imperative requirement for many efficient edge device hardware implementations is that their quantizers be uniform with power-of-two thresholds. In this work, we introduce the Hardware Friendly Mixed Precision Quantization Block (HMQ) in order to meet this requirement. The HMQ is a mixed precision quantization block that repurposes the Gumbel-Softmax estimator into a smooth estimator of a pair of quantization parameters, namely, bit-width and threshold. HMQs use this to search over a finite space of quantization schemes. Empirically, we apply HMQs to quantize classification models trained on CIFAR10 and ImageNet. For ImageNet, we quantize four different architectures and show that, in spite of the added restrictions to our quantization scheme, we achieve competitive and, in some cases, state-of-the-art results.
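To make the mechanism described in the abstract concrete, the following is a minimal PyTorch sketch of the idea: a Gumbel-Softmax softly selects one (power-of-two threshold, bit-width) pair from a finite candidate set, and the selected pair parameterizes a symmetric uniform quantizer. The class name, candidate grid, and straight-through rounding are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class HMQSketch(nn.Module):
    """Illustrative sketch (assumed names/structure) of a mixed precision
    quantization block: Gumbel-Softmax over a finite set of
    (power-of-two threshold, bit-width) candidates."""

    def __init__(self, bit_widths=(2, 4, 8), threshold_exponents=(-2, -1, 0, 1, 2)):
        super().__init__()
        # Finite search space: every (power-of-two threshold, bit-width) pair.
        pairs = [(2.0 ** e, float(b)) for e in threshold_exponents for b in bit_widths]
        self.register_buffer("thresholds", torch.tensor([t for t, _ in pairs]))
        self.register_buffer("bits", torch.tensor([b for _, b in pairs]))
        # Learnable logits over candidate pairs, trained jointly with the network weights.
        self.logits = nn.Parameter(torch.zeros(len(pairs)))

    def forward(self, x, tau=1.0):
        # Smooth (soft one-hot) selection over the candidate pairs.
        probs = F.gumbel_softmax(self.logits, tau=tau, hard=False)
        threshold = (probs * self.thresholds).sum()
        bits = (probs * self.bits).sum()
        # Symmetric uniform quantizer whose step size follows from (threshold, bits).
        n_levels = 2.0 ** (bits - 1.0)
        step = threshold / n_levels
        z = x / step
        z = torch.maximum(torch.minimum(z, n_levels - 1.0), -n_levels)
        # Straight-through estimator: round in the forward pass, identity gradient.
        z_q = z + (torch.round(z) - z).detach()
        return z_q * step
```

In a search of this kind, the temperature `tau` is typically annealed during training and the soft selection is eventually hardened to a single (threshold, bit-width) pair for deployment.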
Benchmarks
| Benchmark | Model | Weight bits | Activation bits | Top-1 Accuracy (%) |
|---|---|---|---|---|
| quantization-on-imagenet | EfficientNet-B0-W4A4 | 4 | 4 | 76 |
| quantization-on-imagenet | ResNet50-W3A4 | 3 | 4 | 75.45 |
| quantization-on-imagenet | MobileNetV2 | – | – | 70.9 |
| quantization-on-imagenet | EfficientNet-B0-W8A8 | 8 | 8 | 76.4 |