VATT: Transformers for Multimodal Self-Supervised Learning from Raw Video, Audio and Text

Hassan Akbari, Liangzhe Yuan, Rui Qian, Wei-Hong Chuang, Shih-Fu Chang, Yin Cui, Boqing Gong

Abstract

We present a framework for learning multimodal representations from unlabeled data using convolution-free Transformer architectures. Specifically, our Video-Audio-Text Transformer (VATT) takes raw signals as inputs and extracts multimodal representations that are rich enough to benefit a variety of downstream tasks. We train VATT end-to-end from scratch using multimodal contrastive losses and evaluate its performance on the downstream tasks of video action recognition, audio event classification, image classification, and text-to-video retrieval. Furthermore, we study a modality-agnostic, single-backbone Transformer by sharing weights among the three modalities. We show that the convolution-free VATT outperforms state-of-the-art ConvNet-based architectures on the downstream tasks. In particular, VATT's vision Transformer achieves top-1 accuracies of 82.1% on Kinetics-400, 83.6% on Kinetics-600, 72.7% on Kinetics-700, and 41.1% on Moments in Time, setting new records while avoiding supervised pre-training. Transferring to image classification yields 78.7% top-1 accuracy on ImageNet, compared to 64.7% when training the same Transformer from scratch, showing the generalizability of our model despite the domain gap between videos and images. VATT's audio Transformer also sets a new record on waveform-based audio event recognition, achieving an mAP of 39.4% on AudioSet without any supervised pre-training. VATT's source code is publicly available.
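The abstract mentions training end-to-end with multimodal contrastive losses. As a rough illustration of what such an objective can look like, here is a minimal PyTorch sketch of a symmetric InfoNCE-style alignment between pooled video, audio, and text embeddings. The function name, temperature, embedding size, and the 0.5 weight on the video-text term are illustrative assumptions, not details of the authors' implementation.

```python
# Hypothetical sketch of a multimodal contrastive objective in the spirit of
# VATT's training signal. This is NOT the authors' code; names, projection
# sizes, and the temperature are illustrative assumptions.
import torch
import torch.nn.functional as F


def nce_loss(z_a: torch.Tensor, z_b: torch.Tensor, temperature: float = 0.07) -> torch.Tensor:
    """Symmetric InfoNCE between two batches of embeddings of shape (B, D).

    Embeddings with the same batch index are treated as positive pairs;
    all other pairs in the batch act as negatives.
    """
    z_a = F.normalize(z_a, dim=-1)
    z_b = F.normalize(z_b, dim=-1)
    logits = z_a @ z_b.t() / temperature                 # (B, B) similarity matrix
    targets = torch.arange(z_a.size(0), device=z_a.device)
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))


# Example usage with random stand-ins for the pooled outputs of the
# per-modality Transformer towers (sizes are arbitrary).
batch, dim = 8, 512
video_emb = torch.randn(batch, dim, requires_grad=True)
audio_emb = torch.randn(batch, dim, requires_grad=True)
text_emb = torch.randn(batch, dim, requires_grad=True)

# Video-audio and video-text alignment terms; the 0.5 relative weight is an
# arbitrary illustrative choice.
loss = nce_loss(video_emb, audio_emb) + 0.5 * nce_loss(video_emb, text_emb)
loss.backward()
```

Note that in the paper the video-text term uses a multiple-instance variant (MIL-NCE) to cope with noisy narrations; this sketch keeps a plain symmetric NCE for both pairs to stay short.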

Benchmarks

Benchmark | Methodology | Metrics
action-classification-on-kinetics-400 | VATT-Large | Acc@1: 82.1, Acc@5: 95.5
action-classification-on-kinetics-600 | VATT-Large | Top-1 Accuracy: 83.6, Top-5 Accuracy: 96.6
action-classification-on-moments-in-time | VATT-Large | Top-1 Accuracy: 41.1, Top-5 Accuracy: 67.7
audio-classification-on-audioset | VATT-Base | AUC: 0.971, Test mAP: 0.394, d-prime: 2.895
zero-shot-video-retrieval-on-msr-vtt | VATT-MBS | text-to-video Median Rank: 49, text-to-video R@10: 29.7
zero-shot-video-retrieval-on-youcook2 | VATT-MBS | text-to-video Mean Rank: 13, text-to-video R@10: 45.5
