Scalable Gradients for Stochastic Differential Equations

Xuechen Li Ting-Kam Leonard Wong Ricky T. Q. Chen David Duvenaud

Abstract

The adjoint sensitivity method scalably computes gradients of solutions to ordinary differential equations. We generalize this method to stochastic differential equations, allowing time-efficient and constant-memory computation of gradients with high-order adaptive solvers. Specifically, we derive a stochastic differential equation whose solution is the gradient, a memory-efficient algorithm for caching noise, and conditions under which numerical solutions converge. In addition, we combine our method with gradient-based stochastic variational inference for latent stochastic differential equations. We use our method to fit stochastic dynamics defined by neural networks, achieving competitive performance on a 50-dimensional motion capture dataset.
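
The constant-memory gradient computation described above corresponds to solving a second, backward-in-time SDE for the adjoint state rather than storing the forward trajectory. Below is a minimal sketch of how this is exposed by the official google-research/torchsde implementation listed under Code Repositories; the drift and diffusion networks, dimensions, and loss are illustrative assumptions rather than the paper's experimental setup.

```python
# A minimal sketch, assuming the interface documented in the official
# google-research/torchsde repository (pip install torchsde). Network sizes,
# noise/SDE types, and the loss are illustrative, not the paper's setup.
import torch
import torchsde


class NeuralSDE(torch.nn.Module):
    noise_type = "diagonal"    # g(t, y) returns one diffusion value per state dimension
    sde_type = "stratonovich"  # the paper derives its adjoint for Stratonovich SDEs

    def __init__(self, dim):
        super().__init__()
        self.drift_net = torch.nn.Linear(dim, dim)
        self.diffusion_net = torch.nn.Linear(dim, dim)

    def f(self, t, y):
        # Drift term of dY = f(t, Y) dt + g(t, Y) o dW
        return self.drift_net(y)

    def g(self, t, y):
        # Diagonal diffusion term, kept positive and bounded for stability
        return torch.sigmoid(self.diffusion_net(y))


sde = NeuralSDE(dim=3)
y0 = torch.zeros(16, 3)            # (batch, state dim)
ts = torch.linspace(0.0, 1.0, 20)  # solver output times

# sdeint_adjoint backpropagates by solving a second SDE backward in time,
# so memory cost stays constant in the number of solver steps.
ys = torchsde.sdeint_adjoint(sde, y0, ts)  # shape (len(ts), batch, dim)

loss = ys[-1].pow(2).mean()        # illustrative terminal-state objective
loss.backward()                    # gradients w.r.t. drift and diffusion parameters
```

The same call pattern works with the non-adjoint solver, torchsde.sdeint, which backpropagates through the solver's internal operations at higher memory cost.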

Code Repositories

google-research/torchsde (official, PyTorch)
xwinxu/bayesde (JAX)
xwinxu/bayesian-sde (JAX)
JFagin/latent_SDE (PyTorch)

Benchmarks

Benchmark                          Methodology   Metric
video-prediction-on-cmu-mocap-2    Latent ODE    Test Error: 5.98
video-prediction-on-cmu-mocap-2    Latent SDE    Test Error: 4.03
