Sparse Networks from Scratch: Faster Training without Losing Performance

Tim Dettmers, Luke Zettlemoyer

Abstract

We demonstrate the possibility of what we call sparse learning: accelerated training of deep neural networks that maintain sparse weights throughout training while achieving dense performance levels. We accomplish this by developing sparse momentum, an algorithm which uses exponentially smoothed gradients (momentum) to identify layers and weights which reduce the error efficiently. Sparse momentum redistributes pruned weights across layers according to the mean momentum magnitude of each layer. Within a layer, sparse momentum grows weights according to the momentum magnitude of zero-valued weights. We demonstrate state-of-the-art sparse performance on MNIST, CIFAR-10, and ImageNet, decreasing the mean error by a relative 8%, 15%, and 6% compared to other sparse algorithms. Furthermore, we show that sparse momentum reliably reproduces dense performance levels while providing up to 5.61x faster training. In our analysis, ablations show that the benefits of momentum redistribution and growth increase with the depth and size of the network. Additionally, we find that sparse momentum is insensitive to the choice of its hyperparameters, suggesting that sparse momentum is robust and easy to use.
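
The abstract outlines three mechanisms: magnitude-based pruning, momentum-based redistribution of the regrowth budget across layers, and momentum-based growth within each layer. Below is a minimal PyTorch sketch of one such prune/redistribute/grow cycle; the function name, its arguments, and the prune_rate default are illustrative, not the paper's API. The official implementation is the TimDettmers/sparse_learning repository listed under Code Repositories.

```python
import torch

def sparse_momentum_step(weights, momenta, masks, prune_rate=0.2):
    """One prune/redistribute/grow cycle in the style of sparse momentum (a sketch).

    weights: list of weight tensors (zero wherever the mask is False)
    momenta: list of exponentially smoothed gradients, same shapes as weights
    masks:   list of boolean tensors, True where a weight is active
    prune_rate: fraction of active weights pruned (and regrown) this cycle
    """
    # 1. Prune: in each layer, drop the prune_rate fraction of active
    #    weights with the smallest magnitude.
    total_pruned = 0
    for w, m in zip(weights, masks):
        k = int(prune_rate * int(m.sum()))
        if k == 0:
            continue
        threshold = torch.kthvalue(w[m].abs(), k).values
        m &= w.abs() > threshold  # inactive weights are zero, so they stay False
        w *= m                    # zero out the newly pruned weights
        total_pruned += k

    # 2. Redistribute: each layer's share of the regrowth budget is
    #    proportional to the mean momentum magnitude of its active weights.
    scores = torch.tensor(
        [v[m].abs().mean().item() if m.any() else 0.0
         for v, m in zip(momenta, masks)]
    )
    shares = scores / scores.sum().clamp_min(1e-12)

    # 3. Grow: within each layer, re-enable the zero-valued weights with
    #    the largest momentum magnitude; regrown weights start at zero.
    for v, m, share in zip(momenta, masks, shares):
        k = min(int(share * total_pruned), int((~m).sum()))
        if k <= 0:
            continue
        zero_scores = (v.abs() * (~m)).flatten()
        idx = torch.topk(zero_scores, k).indices
        m.view(-1)[idx] = True
```

Because the number of weights regrown matches the number pruned, overall sparsity stays fixed throughout training, which is what lets the method maintain sparse weights from start to finish as the abstract describes.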

Code Repositories

google-research/rigl (TensorFlow)
TimDettmers/sparse_learning (official implementation, PyTorch)

Benchmarks

Benchmark | Methodology | Metric
Image classification on CIFAR-10 | WRN-22-8 (Sparse Momentum) | Percentage correct: 95.04
Image classification on MNIST | LeNet 300-100 (Sparse Momentum) | Percentage error: 1.26
