Self-Supervised Learning of Video-Induced Visual Invariances

Michael Tschannen Josip Djolonga Marvin Ritter Aravindh Mahendran Xiaohua Zhai Neil Houlsby Sylvain Gelly Mario Lucic

Abstract

We propose a general framework for self-supervised learning of transferable visual representations based on Video-Induced Visual Invariances (VIVI). We exploit the implicit hierarchy present in videos and make use of (i) frame-level invariances (e.g. stability to color and contrast perturbations), (ii) shot/clip-level invariances (e.g. robustness to changes in object orientation and lighting conditions), and (iii) video-level invariances (semantic relationships of scenes across shots/clips) to define a holistic self-supervised loss. Training models with different variants of the proposed framework on videos from the YouTube-8M (YT8M) dataset, we obtain state-of-the-art self-supervised transfer learning results on the 19 diverse downstream tasks of the Visual Task Adaptation Benchmark (VTAB), using only 1000 labels per task. We then show how to co-train our models jointly with labeled images, outperforming an ImageNet-pretrained ResNet-50 by 0.8 points with 10x fewer labeled images, as well as the previous best supervised model by 3.7 points using the full ImageNet dataset.
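The three-level structure described above can be sketched as a weighted sum of per-level agreement losses. The sketch below is a simplified illustration, not the paper's exact formulation: it stands in for all three levels with a symmetric InfoNCE-style contrastive loss, whereas the paper combines different objectives per level. The function and weight names (`vivi_style_loss`, `w_frame`, `w_shot`, `w_video`) are hypothetical.

```python
import numpy as np


def l2_normalize(x, axis=-1, eps=1e-9):
    """Normalize rows to unit length for cosine similarity."""
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + eps)


def nt_xent(a, b, temperature=0.1):
    """Symmetric InfoNCE-style loss: row i of `a` and row i of `b`
    form a positive pair; all other rows are negatives."""
    a, b = l2_normalize(a), l2_normalize(b)
    logits = a @ b.T / temperature            # (N, N) similarity matrix
    labels = np.arange(len(a))

    def xent(l):
        l = l - l.max(axis=1, keepdims=True)  # numerical stability
        p = np.exp(l) / np.exp(l).sum(axis=1, keepdims=True)
        return -np.log(p[labels, labels] + 1e-9).mean()

    return 0.5 * (xent(logits) + xent(logits.T))


def vivi_style_loss(frame_aug1, frame_aug2,   # two augmentations of the same frames
                    shot_a, shot_b,           # two frames from the same shot
                    video_a, video_b,         # embeddings of two shots of the same video
                    w_frame=1.0, w_shot=1.0, w_video=0.1):
    """Hedged sketch of a hierarchical self-supervised loss:
    frame-level (augmentation invariance), shot-level (frames within a
    shot agree), and video-level (shots within a video agree).
    The weights are illustrative, not the paper's settings."""
    return (w_frame * nt_xent(frame_aug1, frame_aug2)
            + w_shot * nt_xent(shot_a, shot_b)
            + w_video * nt_xent(video_a, video_b))
```

Matched pairs (e.g. identical embeddings) drive each term toward zero, while unrelated pairs keep it near the log of the batch size, so minimizing the sum pulls representations together at all three levels of the hierarchy.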

Benchmarks

| Benchmark | Methodology | Metrics |
| --- | --- | --- |
| image-classification-on-vtab-1k-1 | VIVI-Ex4-Co | Top-1 Accuracy: 70.4 |
