VideoGraph: Recognizing Minutes-Long Human Activities in Videos
Noureldien Hussein; Efstratios Gavves; Arnold W. M. Smeulders

Abstract
Many human activities take minutes to unfold. To represent them, related works opt for statistical pooling, which neglects the temporal structure. Others opt for convolutional methods, such as CNN and Non-Local. While successful in learning temporal concepts, they fall short of modeling minutes-long temporal dependencies. We propose VideoGraph, a method to achieve the best of both worlds: represent minutes-long human activities and learn their underlying temporal structure. VideoGraph learns a graph-based representation for human activities. The graph, its nodes, and its edges are learned entirely from video datasets, making VideoGraph applicable to problems without node-level annotation. The result is improvements over related works on two benchmarks: Epic-Kitchen and Breakfast. Besides, we demonstrate that VideoGraph is able to learn the temporal structure of human activities in minutes-long videos.
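As a concrete illustration of the idea, the sketch below shows how a graph-based head of this kind might sit on top of backbone clip features: a set of latent node embeddings is learned from the data, timestep features are soft-assigned to nodes via attention, and a convolution over the resulting (timestep, node) grid models relations between nodes and short-range temporal structure before classification. The module names (`NodeAttention`, `GraphHead`), dimensions, and layer choices are assumptions for illustration only, not the authors' implementation.

```python
# A minimal, hypothetical sketch of a VideoGraph-style head in PyTorch.
# Layer names, sizes, and the exact graph operation are assumptions.
import torch
import torch.nn as nn


class NodeAttention(nn.Module):
    """Soft-assigns each timestep feature to a set of learned latent nodes."""

    def __init__(self, feat_dim: int, num_nodes: int):
        super().__init__()
        # Latent node embeddings, learned end-to-end from the video dataset,
        # so no node-level annotation is required.
        self.nodes = nn.Parameter(torch.randn(num_nodes, feat_dim) * 0.01)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, timesteps, feat_dim) clip features from a backbone (e.g. I3D).
        attn = torch.softmax(x @ self.nodes.t(), dim=-1)   # (B, T, N)
        # Node-conditioned features: one activation per (timestep, node) pair.
        return attn.unsqueeze(-1) * x.unsqueeze(2)          # (B, T, N, C)


class GraphHead(nn.Module):
    """Convolves over nodes and timesteps, then classifies the video."""

    def __init__(self, feat_dim: int, num_nodes: int, num_classes: int):
        super().__init__()
        self.attention = NodeAttention(feat_dim, num_nodes)
        # A 2D conv over the (timestep, node) grid captures relations between
        # neighboring nodes and short-range temporal structure.
        self.graph_conv = nn.Sequential(
            nn.Conv2d(feat_dim, feat_dim, kernel_size=(3, 3), padding=1),
            nn.BatchNorm2d(feat_dim),
            nn.ReLU(inplace=True),
        )
        self.classifier = nn.Linear(feat_dim, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        z = self.attention(x)                 # (B, T, N, C)
        z = z.permute(0, 3, 1, 2)             # (B, C, T, N) for Conv2d
        z = self.graph_conv(z)
        z = z.mean(dim=(2, 3))                # pool over timesteps and nodes
        return self.classifier(z)


# Usage with dummy backbone features: 64 timesteps of 1024-d clip features.
head = GraphHead(feat_dim=1024, num_nodes=32, num_classes=10)
logits = head(torch.randn(2, 64, 1024))
print(logits.shape)  # torch.Size([2, 10])
```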
Benchmarks
| Benchmark | Methodology | Metrics |
|---|---|---|
| Long-Video Activity Recognition on Breakfast | VideoGraph (I3D-K400-Pretrain-feature) | mAP: 63.14 |
| Video Classification on Breakfast | VideoGraph | Accuracy (%): 69.5 |