HPTQ: Hardware-Friendly Post Training Quantization

Hai Victor Habi, Reuven Peretz, Elad Cohen, Lior Dikstein, Oranit Dror, Idit Diamant, Roy H. Jennings, Arnon Netzer

Abstract
Neural network quantization enables the deployment of models on edge devices. An essential requirement for their hardware efficiency is that the quantizers be hardware-friendly: uniform, symmetric, and with power-of-two thresholds. To the best of our knowledge, current post-training quantization methods do not support all of these constraints simultaneously. In this work, we introduce a hardware-friendly post-training quantization (HPTQ) framework, which addresses this problem by synergistically combining several known quantization methods. We perform a large-scale study on four tasks: classification, object detection, semantic segmentation, and pose estimation, over a wide variety of network architectures. Our extensive experiments show that competitive results can be obtained under hardware-friendly constraints.
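For intuition about the three constraints, the sketch below shows a uniform, symmetric quantizer whose threshold is rounded up to a power of two, simulated in NumPy. It is an illustrative assumption of what such a quantizer looks like, not the paper's implementation: the function names, the epsilon guard, and the signed/unsigned step-size convention are ours.

```python
import numpy as np

def pot_threshold(x_absmax: float) -> float:
    # Round the observed dynamic range up to the nearest power of two;
    # e.g. an absolute maximum of 5.3 yields a threshold of 8.0.
    # The epsilon guards against log2(0) for an all-zero tensor.
    return float(2.0 ** np.ceil(np.log2(max(x_absmax, 1e-12))))

def quantize_uniform_symmetric(x: np.ndarray, n_bits: int = 8,
                               signed: bool = True) -> np.ndarray:
    # Uniform, symmetric quantizer with a power-of-two threshold t:
    # step size is t / 2^(b-1) for signed tensors (e.g. weights) and
    # t / 2^b for unsigned ones (e.g. post-ReLU activations).
    t = pot_threshold(float(np.max(np.abs(x))))
    if signed:
        qmin, qmax = -2 ** (n_bits - 1), 2 ** (n_bits - 1) - 1
        step = t / 2 ** (n_bits - 1)
    else:
        qmin, qmax = 0, 2 ** n_bits - 1
        step = t / 2 ** n_bits
    q = np.clip(np.round(x / step), qmin, qmax)
    return q * step  # simulated ("fake") quantized values

# Example: 8-bit quantization of a random weight tensor.
w = np.random.randn(64, 3, 3, 3).astype(np.float32)
w_q = quantize_uniform_symmetric(w, n_bits=8)
# Within the clipping range, the error is roughly bounded by step / 2.
print(np.max(np.abs(w - w_q)))
```

Because the threshold and step size are powers of two, rescaling reduces to bit shifts, which is what makes this quantizer attractive for integer-only edge hardware.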
Benchmarks
| Benchmark | Model | Precision (W/A) | Metric | Result |
|---|---|---|---|---|
| quantization-on-coco | SSD ResNet50 V1 FPN 640x640 | n/a | mAP | 34.3 |
| quantization-on-imagenet | DenseNet-121 | 8/8 | Top-1 Accuracy (%) | 73.356 |
| quantization-on-imagenet | MobileNetV2 | 8/8 | Top-1 Accuracy (%) | 71.46 |
| quantization-on-imagenet | EfficientNet-B0 | 8/8 | Top-1 Accuracy (%) | 74.216 |
| quantization-on-imagenet | EfficientNet-B0 ReLU | 8/8 | Top-1 Accuracy (%) | 77.092 |
| quantization-on-imagenet | Xception | 8/8 | Top-1 Accuracy (%) | 78.972 |