Shihao Zhang, Linlin Yang, Michael Bi Mi, Xiaoxu Zheng, Angela Yao

Abstract
In computer vision, formulating a regression problem as a classification task often yields better performance. We investigate this curious phenomenon and provide a derivation showing that classification, trained with the cross-entropy loss, outperforms regression trained with a mean squared error loss in its ability to learn high-entropy feature representations. Based on this analysis, we propose an ordinal entropy loss that encourages a higher-entropy feature space while preserving ordinal relationships, improving the performance of regression tasks. Experiments on synthetic and real-world regression tasks demonstrate the importance and benefits of increasing entropy for regression.
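The abstract does not include an implementation, so the sketch below illustrates, in PyTorch, one plausible form of such a loss: pairwise feature distances are weighted by pairwise label distances, so that minimizing the loss spreads features apart (raising entropy) in proportion to how far apart their targets are (preserving ordinality). The function name `ordinal_entropy_loss` and the specific weighting scheme are assumptions for illustration, not the authors' reference implementation.

```python
import torch
import torch.nn.functional as F

def ordinal_entropy_loss(features: torch.Tensor, targets: torch.Tensor) -> torch.Tensor:
    """Illustrative sketch (not the paper's reference code): push features
    apart in proportion to how far apart their regression targets are.

    features: (B, D) batch of feature vectors from the encoder.
    targets:  (B,)   scalar regression targets for the batch.
    """
    z = F.normalize(features, dim=1)       # L2-normalize so distances are scale-free
    y = targets.view(-1, 1).float()

    feat_dist = torch.cdist(z, z, p=2)     # (B, B) pairwise feature distances
    label_dist = torch.cdist(y, y, p=1)    # (B, B) pairwise |y_i - y_j|
    weights = label_dist / (label_dist.max() + 1e-8)  # normalize label gaps to [0, 1]

    # Minimizing the negative weighted spread maximizes feature diversity
    # (a proxy for entropy) while keeping the layout ordered by the targets.
    return -(weights * feat_dist).mean()
```

In practice a term like this would be added to the primary regression objective, e.g. `loss = F.mse_loss(pred, target) + lam * ordinal_entropy_loss(features, target)`, where `lam` is a hypothetical balancing weight.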
Benchmarks
| Benchmark | Method | Metrics |
|---|---|---|
| crowd-counting-on-shanghaitech-a | OrdinalEntropy | MAE: 65.6, MSE: 105.0 |
| crowd-counting-on-shanghaitech-b | OrdinalEntropy | MAE: 9.1, MSE: 14.5 |
| monocular-depth-estimation-on-nyu-depth-v2 | OrdinalEntropy | Delta < 1.25: 0.932, RMSE: 0.321, absolute relative error: 0.089, log10: 0.039 |