Deep Equilibrium Models

Shaojie Bai; J. Zico Kolter; Vladlen Koltun

Abstract

We present a new approach to modeling sequential data: the deep equilibrium model (DEQ). Motivated by an observation that the hidden layers of many existing deep sequence models converge towards some fixed point, we propose the DEQ approach that directly finds these equilibrium points via root-finding. Such a method is equivalent to running an infinite depth (weight-tied) feedforward network, but has the notable advantage that we can analytically backpropagate through the equilibrium point using implicit differentiation. Using this approach, training and prediction in these networks require only constant memory, regardless of the effective "depth" of the network. We demonstrate how DEQs can be applied to two state-of-the-art deep sequence models: self-attention transformers and trellis networks. On large-scale language modeling tasks, such as the WikiText-103 benchmark, we show that DEQs 1) often improve performance over these state-of-the-art models (for similar parameter counts); 2) have similar computational requirements to existing models; and 3) vastly reduce memory consumption (often the bottleneck for training large sequence models), demonstrating up to an 88% memory reduction in our experiments. The code is available at https://github.com/locuslab/deq.
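
The mechanism described in the abstract (solve for the fixed point in the forward pass, then backpropagate through the equilibrium via implicit differentiation) can be illustrated in a few dozen lines of PyTorch. The sketch below is not the authors' official implementation (see locuslab/deq for that); it uses plain fixed-point iteration where the paper uses Broyden's method, and the DEQFixedPoint class and the toy cell in the usage example are hypothetical names chosen for this illustration.

```python
import torch
import torch.nn as nn


class DEQFixedPoint(nn.Module):
    """Deep equilibrium layer: its output is a fixed point z* = f(z*, x)."""

    def __init__(self, f, max_iter=50, tol=1e-4):
        super().__init__()
        self.f = f                # weight-tied transformation f(z, x)
        self.max_iter = max_iter
        self.tol = tol

    def _solve(self, g, z0):
        # Naive fixed-point iteration z <- g(z); the paper uses Broyden's method,
        # but any root/fixed-point solver can be plugged in here.
        z = z0
        for _ in range(self.max_iter):
            z_next = g(z)
            if (z_next - z).norm() < self.tol * (1 + z_next.norm()):
                return z_next
            z = z_next
        return z

    def forward(self, x):
        # Forward pass: find the equilibrium without storing any intermediate
        # activations, so memory is constant in the effective "depth".
        with torch.no_grad():
            z_star = self._solve(lambda z: self.f(z, x), torch.zeros_like(x))

        # One extra application of f re-attaches the autograd graph at z*.
        z = self.f(z_star, x)

        # Backward pass: implicit differentiation. The hook replaces the incoming
        # gradient dL/dz* with g solving g = dL/dz* + g J, where J = df/dz at z*,
        # again via the fixed-point solver instead of unrolled backpropagation.
        z0 = z.clone().detach().requires_grad_()
        f0 = self.f(z0, x)

        def backward_hook(grad):
            return self._solve(
                lambda g: torch.autograd.grad(f0, z0, g, retain_graph=True)[0] + grad,
                grad,
            )

        if z.requires_grad:
            z.register_hook(backward_hook)
        return z


# Hypothetical usage: a tiny weight-tied cell standing in for a TrellisNet
# or transformer layer.
cell = nn.Linear(64, 64)
deq = DEQFixedPoint(lambda z, x: torch.tanh(cell(z) + x))
x = torch.randn(8, 64)
loss = deq(x).sum()
loss.backward()  # gradients for `cell` obtained via implicit differentiation
```

Because the forward iterations run under torch.no_grad and the backward pass solves a second fixed-point problem instead of unrolling the solver, memory use does not grow with the number of solver steps, which is the constant-memory property the abstract refers to.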

Code Repositories

locuslab/impsq (PyTorch)
cgoemaere/hamdeq (PyTorch)
martaskrt/qdeq (PyTorch)
prolearner/hypertorch (PyTorch)
reacho/deep-equilibrium-vs-bilevel (PyTorch)
lufanma/ifr (PyTorch)
locuslab/deq (official, PyTorch)
cgoemaere/hopdeq (PyTorch)
locuslab/monotone_op_net (PyTorch)
SinclairHudson/DeepEquilibrium (PyTorch)
sciml/fastdeq.jl (Julia)

Benchmarks

Benchmark                                    Method                                      Params   Test perplexity
Language Modelling on Penn Treebank (word)   DEQ-TrellisNet                              24M      57.1
Language Modelling on WikiText-103           DEQ-TrellisNet                              180M     29.0
Language Modelling on WikiText-103           DEQ-Transformer (medium, adaptive embed)   110M     23.2
Language Modelling on WikiText-103           DEQ-Transformer (small)                     138M     32.4
