STAEformer: Spatio-Temporal Adaptive Embedding Makes Vanilla Transformer SOTA for Traffic Forecasting

Hangchen Liu, Zheng Dong, Renhe Jiang, Jiewen Deng, Jinliang Deng, Quanjun Chen, Xuan Song

Abstract

With the rapid development of the Intelligent Transportation System (ITS), accurate traffic forecasting has emerged as a critical challenge. The key bottleneck lies in capturing the intricate spatio-temporal traffic patterns. In recent years, numerous neural networks with complicated architectures have been proposed to address this issue. However, the advancements in network architectures have encountered diminishing performance gains. In this study, we present a novel component called spatio-temporal adaptive embedding that can yield outstanding results with vanilla transformers. Our proposed Spatio-Temporal Adaptive Embedding transformer (STAEformer) achieves state-of-the-art performance on five real-world traffic forecasting datasets. Further experiments demonstrate that spatio-temporal adaptive embedding plays a crucial role in traffic forecasting by effectively capturing intrinsic spatio-temporal relations and chronological information in traffic time series.

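To make the core idea concrete, the sketch below shows a minimal PyTorch module for a spatio-temporal adaptive embedding: a learnable tensor indexed by (time step, node) that is shared across batches and concatenated with the projected traffic readings before they enter vanilla transformer layers. This is an illustrative sketch only; the class and parameter names (SpatioTemporalAdaptiveEmbedding, in_steps, num_nodes, adaptive_dim) and the dimensions used are assumptions, not the authors' official implementation, which is available in the repository linked below.

```python
# Minimal sketch (PyTorch) of the spatio-temporal adaptive embedding idea.
# All names and sizes here are illustrative assumptions, not the official code.
import torch
import torch.nn as nn


class SpatioTemporalAdaptiveEmbedding(nn.Module):
    """A learnable embedding indexed by (time step, node), shared across batches."""

    def __init__(self, in_steps: int, num_nodes: int, adaptive_dim: int):
        super().__init__()
        # One free vector per (time step, node) pair, learned end to end
        # without any predefined graph structure.
        self.embedding = nn.Parameter(torch.empty(in_steps, num_nodes, adaptive_dim))
        nn.init.xavier_uniform_(self.embedding)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, in_steps, num_nodes, feat_dim) -- projected traffic readings.
        batch_size = x.shape[0]
        adaptive = self.embedding.expand(batch_size, -1, -1, -1)
        # Concatenate along the feature axis; the result would feed the
        # temporal and spatial transformer layers.
        return torch.cat([x, adaptive], dim=-1)


if __name__ == "__main__":
    proj = nn.Linear(1, 24)                       # project raw flow to a hidden dim
    st_embed = SpatioTemporalAdaptiveEmbedding(in_steps=12, num_nodes=207, adaptive_dim=80)
    raw = torch.randn(8, 12, 207, 1)              # (batch, steps, nodes, features)
    out = st_embed(proj(raw))
    print(out.shape)                              # torch.Size([8, 12, 207, 104])
```

Because the embedding is learned per time step and per node rather than derived from a fixed adjacency matrix, it can capture chronological information and intrinsic spatio-temporal relations while keeping the backbone a plain transformer.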
Code Repositories

xdzhelheim/staeformer (official implementation, PyTorch)

Benchmarks

Benchmark                      | Methodology | Metrics
traffic-prediction-on-metr-la  | STAEformer  | MAE @ 3 steps: 2.65; MAE @ 12 steps: 3.34
traffic-prediction-on-pems-bay | STAEformer  | MAE @ 12 steps: 1.91
traffic-prediction-on-pems04   | STAEformer  | MAE @ 12 steps: 18.22
traffic-prediction-on-pems07   | STAEformer  | MAE @ 1h: 19.14
traffic-prediction-on-pems08   | STAEformer  | MAE @ 1h: 13.46
traffic-prediction-on-pemsd7   | STAEformer  | MAE @ 12 steps: 19.14; MAPE @ 12 steps: 8.01; RMSE @ 12 steps: 32.60
