Bum Jun Kim, Hyeyeon Choi, Hyeonah Jang, Sang Woo Kim

Abstract
Recently, various normalization layers have been proposed to stabilize the training of deep neural networks. Among them, group normalization generalizes layer normalization and instance normalization by allowing a degree of freedom in the number of groups it uses. However, determining the optimal number of groups requires trial-and-error hyperparameter tuning, and such experiments are time-consuming. In this study, we discuss a reasonable method for setting the number of groups. First, we find that the number of groups influences the gradient behavior of the group normalization layer. Based on this observation, we derive the ideal number of groups, which calibrates the gradient scale to facilitate gradient descent optimization. Our proposed number of groups is theoretically grounded, architecture-aware, and can be determined in a layer-wise manner for all layers. The method achieved improved performance over existing approaches across numerous neural network architectures, tasks, and datasets.
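The degree of freedom mentioned in the abstract can be seen concretely in PyTorch's `nn.GroupNorm`: setting the number of groups to 1 recovers layer-normalization-like behavior, while setting it to the channel count recovers instance-normalization-like behavior. The sketch below only illustrates this interpolation; it does not implement the paper's derived layer-wise group count, and the channel count and group choices are hypothetical examples.

```python
import torch
import torch.nn as nn

# Illustrative sketch (not the paper's method): how the number of groups
# in nn.GroupNorm interpolates between other normalization layers.
C = 64                              # hypothetical channel count of a conv layer
x = torch.randn(8, C, 32, 32)       # (batch, channels, height, width)

layer_norm_like    = nn.GroupNorm(num_groups=1,  num_channels=C)  # 1 group   -> LayerNorm-like
instance_norm_like = nn.GroupNorm(num_groups=C,  num_channels=C)  # C groups  -> InstanceNorm-like
group_norm         = nn.GroupNorm(num_groups=16, num_channels=C)  # in between -> typical GroupNorm

for name, norm in [("LN-like", layer_norm_like),
                   ("IN-like", instance_norm_like),
                   ("GN", group_norm)]:
    y = norm(x)
    print(name, tuple(y.shape))     # normalization preserves the tensor shape
```

Note that `num_groups` must evenly divide `num_channels`, a constraint any rule for choosing the group count has to respect.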
Benchmarks
| Benchmark | Methodology | Metrics |
|---|---|---|
| fine-grained-image-classification-on-caltech | ResNet-101 (ideal number of groups) | Top-1 Error Rate: 22.247% |
| fine-grained-image-classification-on-oxford-2 | ResNet-101 (ideal number of groups) | Accuracy: 77.076 |
| image-classification-on-mnist | MLP (ideal number of groups) | Percentage error: 1.67 |
| object-detection-on-coco-2017 | Faster R-CNN (ideal number of groups) | AP: 40.7, AP50: 61.2, AP75: 44.6 |
| panoptic-segmentation-on-coco-panoptic | PFPN (ideal number of groups) | PQ: 42.147, PQst: 30.572, PQth: 49.816 |