Shahaf Pruss; Morris Alper; Hadar Averbuch-Elor

Abstract
Images depicting complex, dynamic scenes are challenging to parse automatically, requiring both high-level comprehension of the overall situation and fine-grained identification of participating entities and their interactions. Current approaches use distinct methods tailored to sub-tasks such as Situation Recognition and detection of Human-Human and Human-Object Interactions. However, recent advances in image understanding have often leveraged web-scale vision-language (V&L) representations to obviate task-specific engineering. In this work, we propose a framework for dynamic scene understanding tasks by leveraging knowledge from modern, frozen V&L representations. By framing these tasks in a generic manner - as predicting and parsing structured text, or by directly concatenating representations to the input of existing models - we achieve state-of-the-art results while using a minimal number of trainable parameters relative to existing approaches. Moreover, our analysis of dynamic knowledge of these representations shows that recent, more powerful representations effectively encode dynamic scene semantics, making this approach newly possible.
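The concatenation variant described in the abstract can be pictured with a short sketch. The following is a minimal illustration, not the authors' implementation: `VLAugmentedHead`, the feature dimensions, and the random stand-in tensors are all hypothetical, assuming a PyTorch setup in which frozen V&L image embeddings (e.g., from CLIP) are concatenated with an existing task model's features before a small trainable head. The 504-way output loosely mirrors imSitu's verb vocabulary.

```python
# Minimal sketch (not the authors' code): augmenting an existing task model
# with frozen vision-language (V&L) features by concatenation.
import torch
import torch.nn as nn

class VLAugmentedHead(nn.Module):
    """Concatenates frozen V&L image features to an existing model's
    features before a task-specific classification head."""
    def __init__(self, task_dim: int, vl_dim: int, num_classes: int):
        super().__init__()
        # Only this head is trainable; the V&L encoder producing
        # vl_feats stays frozen and is not part of this module.
        self.head = nn.Linear(task_dim + vl_dim, num_classes)

    def forward(self, task_feats: torch.Tensor, vl_feats: torch.Tensor):
        # vl_feats come from a frozen encoder (e.g., CLIP); detach so no
        # gradients flow back into the V&L backbone.
        fused = torch.cat([task_feats, vl_feats.detach()], dim=-1)
        return self.head(fused)

# Example: 512-d task features + 768-d frozen V&L features -> verb logits.
model = VLAugmentedHead(task_dim=512, vl_dim=768, num_classes=504)
task_feats = torch.randn(4, 512)  # stand-in for an existing model's features
vl_feats = torch.randn(4, 768)    # stand-in for frozen CLIP image embeddings
logits = model(task_feats, vl_feats)
print(logits.shape)  # torch.Size([4, 504])
```

Keeping the V&L encoder frozen and detaching its features means only the small fused head trains, which is consistent with the abstract's claim of a minimal number of trainable parameters relative to existing approaches.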
Benchmarks
| Benchmark | Methodology | Metrics |
|---|---|---|
| grounded-situation-recognition-on-swig | Ours (CoFormer+) | Top-1 Verb: 58.88; Top-1 Verb & Grounded-Value: 41.28; Top-1 Verb & Value: 51.10; Top-5 Verbs & Grounded-Value: 58.23 |
| human-object-interaction-detection-on-hico | Ours (PViC+) | mAP: 46.49 |
| situation-recognition-on-imsitu | Ours | Top-1 Verb: 58.88 |