HyperAI
Network Pruning via Transformable Architecture Search

Xuanyi Dong; Yi Yang


Abstract

Network pruning reduces the computation cost of an over-parameterized network without degrading performance. Prevailing pruning algorithms pre-define the width and depth of the pruned network and then transfer parameters from the unpruned network to it. To break this structural limitation on the pruned network, we propose applying neural architecture search to directly search for a network with flexible channel and layer sizes. The number of channels/layers is learned by minimizing the loss of the pruned network. The feature map of the pruned network is an aggregation of K feature map fragments (generated by K networks of different sizes), which are sampled based on a probability distribution. The loss can be back-propagated not only to the network weights but also to the parameterized distribution, explicitly tuning the size of the channels/layers. Specifically, we apply channel-wise interpolation to keep feature maps with different channel sizes aligned during the aggregation procedure. The size with the maximum probability in each distribution serves as the width and depth of the pruned network, whose parameters are learned by knowledge transfer, e.g., knowledge distillation, from the original network. Experiments on CIFAR-10, CIFAR-100, and ImageNet demonstrate the effectiveness of our new perspective on network pruning compared with traditional pruning algorithms. Various search and knowledge-transfer approaches are evaluated to show the effectiveness of the two components. Code is at: https://github.com/D-X-Y/NAS-Projects.
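The two mechanisms the abstract describes — aligning K feature fragments of different channel sizes via channel-wise interpolation, weighting them by an architecture probability distribution, and training the pruned network with a knowledge-distillation loss — can be illustrated with a minimal sketch. This is not the paper's implementation: it uses 1-D feature vectors as a stand-in for C×H×W feature maps, and the function names (`channel_interpolate`, `aggregate`, `distillation_loss`) are hypothetical. See the official repository for the real code.

```python
import math

def channel_interpolate(features, out_channels):
    """Linearly interpolate a per-channel feature vector to out_channels
    entries. A simplified 1-D stand-in for the channel-wise interpolation
    (CWI) described in the abstract; real feature maps are C x H x W."""
    in_channels = len(features)
    if in_channels == out_channels:
        return list(features)
    result = []
    for i in range(out_channels):
        # Map output index i to a fractional position in the input channels.
        pos = i * (in_channels - 1) / (out_channels - 1) if out_channels > 1 else 0.0
        lo = int(math.floor(pos))
        hi = min(lo + 1, in_channels - 1)
        frac = pos - lo
        result.append((1 - frac) * features[lo] + frac * features[hi])
    return result

def softmax(logits, temperature=1.0):
    """Numerically stable softmax with an optional temperature."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def aggregate(fragments, arch_logits):
    """Aggregate K feature fragments of different channel sizes:
    align each fragment to the largest size via interpolation, then take a
    probability-weighted sum, with weights given by the learnable
    architecture distribution (softmax over arch_logits)."""
    target = max(len(f) for f in fragments)
    probs = softmax(arch_logits)
    aligned = [channel_interpolate(f, target) for f in fragments]
    return [sum(p * a[c] for p, a in zip(probs, aligned))
            for c in range(target)]

def distillation_loss(student_logits, teacher_logits, temperature=4.0):
    """Soft-target cross-entropy between the teacher's (unpruned) and the
    student's (pruned) softened output distributions -- the standard
    knowledge-distillation objective the abstract points to for
    transferring parameters from the original network."""
    p_teacher = softmax(teacher_logits, temperature)
    p_student = softmax(student_logits, temperature)
    return -sum(t * math.log(s) for t, s in zip(p_teacher, p_student))
```

In this sketch the aggregation is differentiable in both the fragment values and the architecture logits, which mirrors the abstract's point that the loss back-propagates to the parameterized distribution as well as to the network weights.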

Code Repositories

xxlya/COS598D_Assignment1 (pytorch; mentioned in GitHub)
D-X-Y/AutoDL-Projects (pytorch; mentioned in GitHub)
D-X-Y/GDAS (pytorch; mentioned in GitHub)
D-X-Y/NAS-Projects (official; pytorch; mentioned in GitHub)

Benchmarks

Benchmark                      | Methodology            | Accuracy | GFLOPs
network-pruning-on-cifar-10    | TAS-pruned ResNet-110  | 94.33    | 0.119
network-pruning-on-cifar-100   | TAS-pruned ResNet-110  | 73.16    | 0.12
network-pruning-on-imagenet    | TAS-pruned ResNet-50   | 76.20    | 2.3
