Revisiting Unreasonable Effectiveness of Data in Deep Learning Era

Chen Sun, Abhinav Shrivastava, Saurabh Singh, Abhinav Gupta

Abstract

The success of deep learning in vision can be attributed to: (a) models with high capacity; (b) increased computational power; and (c) the availability of large-scale labeled data. Since 2012, there have been significant advances in the representation capabilities of models and in the computational power of GPUs, but the size of the biggest dataset has, surprisingly, remained constant. What will happen if we increase the dataset size by 10x or 100x? This paper takes a step towards clearing the clouds of mystery surrounding the relationship between 'enormous data' and visual deep learning. By exploiting the JFT-300M dataset, which has more than 375M noisy labels for 300M images, we investigate how the performance of current vision tasks would change if this data were used for representation learning. Our paper delivers some surprising (and some expected) findings. First, we find that performance on vision tasks increases logarithmically with the volume of training data. Second, we show that representation learning (or pre-training) still holds a lot of promise: one can improve performance on many vision tasks simply by training a better base model. Finally, as expected, we present new state-of-the-art results for several vision tasks, including image classification, object detection, semantic segmentation, and human pose estimation. Our sincere hope is that this inspires the vision community not to undervalue data and to develop collective efforts in building larger datasets.
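
The headline finding lends itself to a compact functional form: performance grows roughly linearly in the logarithm of the pre-training set size, i.e. performance ~ a * log10(N) + b. Below is a minimal NumPy sketch of fitting such a log-linear trend; the set sizes and scores are hypothetical placeholders chosen for illustration, not measurements from the paper.

    # Fit the log-linear scaling trend the abstract describes:
    # performance ~ a * log10(N) + b, where N is the number of
    # pre-training examples. All numbers below are hypothetical.
    import numpy as np

    n_examples = np.array([10e6, 30e6, 100e6, 300e6])  # pre-training set sizes
    scores = np.array([30.0, 32.3, 34.6, 36.9])        # placeholder task scores

    # np.polyfit with deg=1 returns (slope, intercept)
    a, b = np.polyfit(np.log10(n_examples), scores, deg=1)
    print(f"performance ~= {a:.2f} * log10(N) + {b:.2f}")

Under this model, each 10x increase in data buys a roughly constant number of points of task performance, which is why the paper argues the largest datasets are worth growing further.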

Code Repositories

Tencent/tencent-ml-images (TensorFlow, mentioned in GitHub)
Ranja-S/sensitivity (mentioned in GitHub)

Benchmarks

Benchmark | Methodology | Metrics
image-classification-on-imagenet | ResNet-101 (JFT-300M fine-tuning) | Top 1 Accuracy: 79.2%
object-detection-on-coco | Faster R-CNN (ImageNet+300M) | box mAP: 37.4; AP50: 58; AP75: 40.1; APS: 17.5; APM: 41.1; APL: 51.2
pose-estimation-on-coco-test-dev | Faster R-CNN (ImageNet+300M) | AP: 64.4; AP50: 85.7; AP75: 70.7; APM: 61.8; APL: 69.8
semantic-segmentation-on-pascal-voc-2007 | DeepLabv3 (ImageNet+300M) | Mean IoU: 81.3
semantic-segmentation-on-pascal-voc-2012-val | DeepLabv3 (ImageNet+300M) | mIoU: 76.5%
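
All of the results above follow the same recipe: pre-train a base model on a large corpus, then fine-tune it on the target task. JFT-300M itself is not publicly available, so the sketch below uses tf.keras's ImageNet-pretrained ResNet-101 as a stand-in base model; the class count and the training pipeline are assumptions to be filled in for a concrete task.

    # Sketch of the pre-train-then-fine-tune recipe behind the table
    # above. JFT-300M weights are not public, so an ImageNet-pretrained
    # ResNet-101 from tf.keras stands in as the base model.
    import tensorflow as tf

    NUM_CLASSES = 20  # hypothetical target task (e.g. PASCAL VOC categories)

    base = tf.keras.applications.ResNet101(
        weights="imagenet", include_top=False, pooling="avg")
    base.trainable = True  # fine-tune all layers rather than freezing the base

    model = tf.keras.Sequential([
        base,
        tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
    ])
    model.compile(
        optimizer=tf.keras.optimizers.SGD(learning_rate=1e-3, momentum=0.9),
        loss="categorical_crossentropy",
        metrics=["accuracy"],
    )
    # model.fit(train_ds, ...)  # supply your own tf.data input pipeline

The paper's point is that the quality of `base` dominates: swapping in a backbone pre-trained on more data improves the downstream number without changing the fine-tuning code at all.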
