Feather: An Elegant Solution to Effective DNN Sparsification

Athanasios Glentis Georgoulakis George Retsinas Petros Maragos

Abstract

Neural network pruning is an increasingly popular way of producing compact and efficient models, suitable for resource-limited environments, while preserving high performance. While pruning can be performed through a multi-cycle training and fine-tuning process, the recent trend is to fold the sparsification process into the standard course of training. To this end, we introduce Feather, an efficient sparse training module that uses the powerful Straight-Through Estimator as its core, coupled with a new thresholding operator and a gradient scaling technique, enabling robust, out-of-the-box sparsification performance. Feather's effectiveness and adaptability are demonstrated using various architectures on the CIFAR datasets, while on ImageNet it achieves state-of-the-art Top-1 validation accuracy using the ResNet-50 architecture, surpassing existing methods, including more complex and computationally heavier ones, by a considerable margin. Code is publicly available at https://github.com/athglentis/feather.
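To make the core idea concrete, the following is a minimal numpy sketch of magnitude-based sparsification with a Straight-Through Estimator backward pass. It is illustrative only and is not Feather's actual implementation: the function names, the `scale` parameter, and the simple top-k magnitude threshold are assumptions standing in for Feather's thresholding operator and gradient scaling (see the official repository for the real method).

```python
import numpy as np

def sparsify_forward(w, sparsity):
    """Forward pass: keep the largest-magnitude weights, zero the rest.

    `sparsity` is the fraction of weights to prune (e.g. 0.9 keeps 10%).
    Returns the sparsified weights and the binary keep-mask.
    """
    k = int(np.ceil((1.0 - sparsity) * w.size))  # number of weights to keep
    thresh = np.sort(np.abs(w.ravel()))[-k]      # k-th largest magnitude
    mask = (np.abs(w) >= thresh).astype(w.dtype)
    return w * mask, mask

def ste_backward(grad_out, mask, scale=1.0):
    """Backward pass in the spirit of the Straight-Through Estimator.

    Gradients flow to kept weights unchanged; gradients to pruned weights
    are multiplied by `scale` (a hypothetical knob: scale=1.0 is the plain
    STE, scale=0.0 blocks pruned weights entirely).
    """
    return grad_out * (mask + scale * (1.0 - mask))
```

Because pruned weights still receive (possibly scaled) gradient signal, they can grow back above the threshold in later steps, which is what lets the sparsity pattern adapt during standard training rather than being fixed up front.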

Code Repositories

athglentis/feather (official, PyTorch)

Benchmarks

Benchmark: network-pruning-on-imagenet-resnet-50-90
Methodology: Feather
Top-1 Accuracy: 76.93
