OPT: Omni-Perception Pre-Trainer for Cross-Modal Understanding and Generation

Abstract
In this paper, we propose an Omni-perception Pre-Trainer (OPT) for cross-modal understanding and generation that jointly models visual, textual, and audio resources. OPT is constructed in an encoder-decoder framework comprising three single-modal encoders that produce token-based embeddings for each modality, a cross-modal encoder that models the correlations among the three modalities, and two cross-modal decoders that generate text and images, respectively. For OPT's pre-training, we design a multi-task pretext learning scheme that models multi-modal resources at three data granularities, i.e., token-, modality-, and sample-level modeling, through which OPT learns to align and translate among the different modalities. Pre-training is carried out on a large number of image-text-audio triplets from Open Images. Experimental results show that OPT learns strong image-text-audio multi-modal representations and achieves promising results on a variety of cross-modal understanding and generation tasks.
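To make the encoder-decoder layout described above concrete, the following is a minimal PyTorch sketch of the three single-modal encoders, the shared cross-modal encoder, and the two generation decoders. All dimensions, layer counts, feature formats (region features for images, spectrogram frames for audio), the visual codebook size, and the module names are illustrative assumptions, not the authors' released configuration; the pretext losses are omitted.

```python
# Hypothetical sketch of the OPT architecture from the abstract (not the official code).
import torch
import torch.nn as nn


class OPTSketch(nn.Module):
    def __init__(self, vocab_size=30522, d_model=768, n_heads=12,
                 n_single=6, n_cross=6, n_dec=6,
                 image_dim=2048, audio_dim=128, visual_codebook=8192):
        super().__init__()

        def encoder():
            return nn.TransformerEncoder(
                nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True), n_single)

        def decoder():
            return nn.TransformerDecoder(
                nn.TransformerDecoderLayer(d_model, n_heads, batch_first=True), n_dec)

        # Three single-modal encoders: token-based embeddings per modality.
        self.text_embed = nn.Embedding(vocab_size, d_model)
        self.image_proj = nn.Linear(image_dim, d_model)   # assumed region features
        self.audio_proj = nn.Linear(audio_dim, d_model)   # assumed spectrogram frames
        self.text_enc, self.image_enc, self.audio_enc = encoder(), encoder(), encoder()

        # One cross-modal encoder over the concatenated modality tokens.
        self.cross_enc = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True), n_cross)

        # Two cross-modal decoders: one generates text, one generates discrete image tokens.
        self.text_dec, self.image_dec = decoder(), decoder()
        self.image_tok_embed = nn.Embedding(visual_codebook, d_model)
        self.text_head = nn.Linear(d_model, vocab_size)
        self.image_head = nn.Linear(d_model, visual_codebook)

    def forward(self, text_ids, image_feats, audio_feats, text_tgt, image_tgt):
        t = self.text_enc(self.text_embed(text_ids))
        v = self.image_enc(self.image_proj(image_feats))
        a = self.audio_enc(self.audio_proj(audio_feats))
        # Joint multi-modal memory shared by both decoders.
        fused = self.cross_enc(torch.cat([t, v, a], dim=1))
        text_logits = self.text_head(self.text_dec(self.text_embed(text_tgt), fused))
        image_logits = self.image_head(self.image_dec(self.image_tok_embed(image_tgt), fused))
        return text_logits, image_logits
```

Under this reading, the token-, modality-, and sample-level pretext tasks would attach losses to the single-modal encoders, the fused cross-modal representation, and the paired triplets respectively, while the two decoders are trained with standard generation objectives over text tokens and discrete visual tokens.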
Benchmarks
| Benchmark | Methodology | R@1 | R@5 | R@10 |
|---|---|---|---|---|
| image-retrieval-on-localized-narratives (text-to-image) | OPT | 0.4196 | 0.72 | 0.8126 |
| image-to-text-retrieval-on-localized (image-to-text) | OPT | 0.394 | 0.7194 | 0.8256 |