Rigging the Lottery: Making All Tickets Winners

Utku Evci, Trevor Gale, Jacob Menick, Pablo Samuel Castro, Erich Elsen

Abstract

Many applications require sparse neural networks due to space or inference time restrictions. There is a large body of work on training dense networks to yield sparse networks for inference, but this limits the size of the largest trainable sparse model to that of the largest trainable dense model. In this paper we introduce a method to train sparse neural networks with a fixed parameter count and a fixed computational cost throughout training, without sacrificing accuracy relative to existing dense-to-sparse training methods. Our method updates the topology of the sparse network during training by using parameter magnitudes and infrequent gradient calculations. We show that this approach requires fewer floating-point operations (FLOPs) to achieve a given level of accuracy compared to prior techniques. We demonstrate state-of-the-art sparse training results on a variety of networks and datasets, including ResNet-50 and MobileNets on ImageNet-2012, and RNNs on WikiText-103. Finally, we provide some insights into why allowing the topology to change during optimization can overcome local minima encountered when the topology remains static. Code used in our work can be found at github.com/google-research/rigl.
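
The topology update described in the abstract amounts to a periodic drop/grow step: prune the active weights with the smallest magnitude, then activate the same number of inactive connections where the (infrequently computed) dense gradient is largest. The sketch below is a minimal PyTorch illustration of that step under those assumptions; the function name rigl_update and the drop_fraction default are illustrative and not part of the official github.com/google-research/rigl API.

```python
# Minimal sketch of a magnitude-drop / gradient-grow update for one layer.
# Assumes: `weight` and `mask` have the same shape, inactive weights are held
# at zero during sparse training, and `dense_grad` is a dense gradient that is
# computed only occasionally.
import torch


def rigl_update(weight: torch.Tensor,
                mask: torch.Tensor,
                dense_grad: torch.Tensor,
                drop_fraction: float = 0.3) -> torch.Tensor:
    """Return a new binary mask with the same number of active connections."""
    n_active = int(mask.sum().item())
    n_swap = int(drop_fraction * n_active)
    if n_swap == 0:
        return mask

    flat_mask = mask.flatten().clone()
    w_mag = weight.detach().abs().flatten()
    g_mag = dense_grad.detach().abs().flatten()

    # Drop: deactivate the n_swap active connections with the smallest magnitude.
    drop_scores = torch.where(flat_mask.bool(), w_mag,
                              torch.full_like(w_mag, float('inf')))
    drop_idx = torch.topk(drop_scores, n_swap, largest=False).indices
    flat_mask[drop_idx] = 0.0

    # Grow: activate the n_swap connections with the largest gradient magnitude
    # among those that were inactive before the drop step (so drop and grow
    # candidates are disjoint).
    grow_scores = torch.where(mask.flatten().bool(),
                              torch.full_like(g_mag, float('-inf')), g_mag)
    grow_idx = torch.topk(grow_scores, n_swap, largest=True).indices
    flat_mask[grow_idx] = 1.0

    new_mask = flat_mask.view_as(mask)
    # Zero out dropped weights; grown weights start at zero because inactive
    # weights are kept at zero throughout sparse training.
    weight.data.mul_(new_mask)
    return new_mask
```

Because this update is applied only at widely spaced intervals, the extra dense-gradient computation stays infrequent and the per-step parameter count and FLOP cost remain fixed, which is the property the abstract highlights.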

Code Repositories

google-research/rigl (Official, TensorFlow)
nollied/rigl-torch (PyTorch)
stevenboys/moon (PyTorch)
vita-group/granet (PyTorch)
verbose-avocado/rigl-torch (PyTorch)
varun19299/rigl-reproducibility (Official, PyTorch)
calgaryml/condensed-sparsity (PyTorch)
hyeon95y/sparselinear (PyTorch)
stevenboys/agent (PyTorch)
Shiweiliuiiiiiii/GraNet (PyTorch)

Benchmarks

Benchmark | Methodology | Metric
sparse-learning-on-imagenet | MobileNet-v1: 75% Sparse | Top-1 Accuracy: 71.9
sparse-learning-on-imagenet | ResNet-50: 80% Sparse | Top-1 Accuracy: 77.1
sparse-learning-on-imagenet | MobileNet-v1: 90% Sparse | Top-1 Accuracy: 68.1
sparse-learning-on-imagenet | ResNet-50: 90% Sparse | Top-1 Accuracy: 76.4
