Hao Yuan Bai, Xue Liu

Abstract
Spatiotemporal data is ubiquitous, and forecasting it has important applications in many domains. However, its complex cross-component dependencies and non-linear temporal dynamics can be challenging for traditional techniques. Existing methods address this by learning the two dimensions separately. Here, we introduce Temporal Graphormer (T-Graphormer), a Transformer-based approach capable of modelling spatiotemporal correlations simultaneously. By adding temporal encodings to the Graphormer architecture, each node attends to all other tokens within the graph sequence, enabling the model to learn rich spacetime patterns with minimal predefined inductive biases. We show the effectiveness of T-Graphormer on real-world traffic prediction benchmark datasets. Compared to state-of-the-art methods, T-Graphormer reduces root mean squared error (RMSE) and mean absolute percentage error (MAPE) by up to 20% and 10%, respectively.
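To illustrate the core idea described in the abstract, the sketch below flattens a graph sequence of T timesteps and N nodes into a single sequence of T·N tokens, adds learned node and temporal encodings, and lets standard self-attention mix information across space and time. This is a minimal, hypothetical sketch, not the authors' implementation: the class and parameter names are assumptions, and Graphormer's structural encodings and attention biases are omitted for brevity.

```python
# Minimal sketch (assumed names, not the paper's code): every spacetime
# token attends to every other token in the flattened graph sequence.
import torch
import torch.nn as nn

class SpacetimeAttentionSketch(nn.Module):
    def __init__(self, num_nodes, num_steps, dim=64, num_heads=4):
        super().__init__()
        self.input_proj = nn.Linear(1, dim)            # scalar signal per node -> embedding
        self.node_enc = nn.Embedding(num_nodes, dim)   # spatial (node identity) encoding
        self.time_enc = nn.Embedding(num_steps, dim)   # temporal encoding (the key addition)
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=num_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(dim, 1)                  # per-token prediction

    def forward(self, x):
        # x: (batch, T, N), e.g. traffic speed readings per sensor per timestep
        b, t, n = x.shape
        tok = self.input_proj(x.unsqueeze(-1))                # (b, T, N, dim)
        tok = tok + self.node_enc.weight.view(1, 1, n, -1)    # add spatial encoding
        tok = tok + self.time_enc.weight.view(1, t, 1, -1)    # add temporal encoding
        tok = tok.reshape(b, t * n, -1)                       # flatten into one spacetime sequence
        out = self.encoder(tok)                               # full attention over all T*N tokens
        return self.head(out).reshape(b, t, n)

# Example usage with METR-LA-like dimensions (207 sensors, 12-step windows):
model = SpacetimeAttentionSketch(num_nodes=207, num_steps=12)
pred = model(torch.randn(8, 12, 207))
```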
Benchmarks
| Benchmark | Methodology | Metrics |
|---|---|---|
| traffic-prediction-on-metr-la | T-Graphormer | MAE @ 3 steps: 2.63; MAE @ 12 steps: 3.19; MAPE @ 12 steps: 8.62; RMSE @ 12 steps: 6.12 |
| traffic-prediction-on-pems-bay | T-Graphormer | MAE @ 12 steps: 1.63; RMSE: 3.20 |