Grounded Human-Object Interaction Hotspots from Video

Tushar Nagarajan; Christoph Feichtenhofer; Kristen Grauman

Abstract
Learning how to interact with objects is an important step towards embodied visual intelligence, but existing techniques suffer from heavy supervision or sensing requirements. We propose an approach to learn human-object interaction "hotspots" directly from video. Rather than treat affordances as a manually supervised semantic segmentation task, our approach learns about interactions by watching videos of real human behavior and anticipating afforded actions. Given a novel image or video, our model infers a spatial hotspot map indicating how an object would be manipulated in a potential interaction, even if the object is currently at rest. Through results with both first- and third-person video, we show the value of grounding affordances in real human-object interactions. Not only are our weakly supervised hotspots competitive with strongly supervised affordance methods, but they can also anticipate object interaction for novel object categories.
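As a rough illustration of the inference step the abstract describes, the sketch below derives a spatial hotspot heatmap from an action-recognition backbone via gradient-weighted activations (Grad-CAM style). The backbone, target layer, and function names here are illustrative assumptions, not the authors' released implementation.

```python
# Minimal sketch of Grad-CAM-style hotspot inference. The ResNet backbone and
# layer choice are stand-in assumptions for the paper's anticipation network.
import torch
import torch.nn.functional as F
import torchvision.models as models

def hotspot_map(image, action_idx, model, target_layer):
    """Per-pixel map of where `action_idx` would plausibly be performed."""
    feats = {}
    def hook(_, __, output):
        output.retain_grad()          # keep gradients for this activation
        feats["act"] = output
    handle = target_layer.register_forward_hook(hook)

    logits = model(image.unsqueeze(0))        # (1, num_actions)
    logits[0, action_idx].backward()          # grads w.r.t. the hooked layer
    handle.remove()

    act = feats["act"]                        # (1, C, H, W)
    weights = act.grad.mean(dim=(2, 3), keepdim=True)   # channel importance
    cam = F.relu((weights * act).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=image.shape[1:], mode="bilinear",
                        align_corners=False)
    return (cam / cam.max().clamp(min=1e-8)).squeeze()  # normalize to [0, 1]

# Usage: a ResNet-18 as a stand-in backbone on a random image.
model = models.resnet18(weights=None).eval()
image = torch.rand(3, 224, 224)
heatmap = hotspot_map(image, action_idx=0, model=model,
                      target_layer=model.layer4)
```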
Benchmarks
| Benchmark | Methodology | AUC-J | KLD | SIM |
|---|---|---|---|---|
| video-to-image-affordance-grounding-on-epic | Hotspot | 0.79 | 1.26 | 0.40 |
| video-to-image-affordance-grounding-on-opra-1 | Hotspot | 0.81 | 1.47 | 0.36 |
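For reference, the sketch below computes the three metrics in the table under their common saliency-benchmark definitions (AUC-Judd higher is better, KLD lower is better, SIM higher is better). The official benchmark evaluation may differ in preprocessing details such as map blurring or normalization.

```python
# Common definitions of the three heatmap metrics reported above; assumes
# `pred` and `gt` are non-negative numpy arrays of the same shape.
import numpy as np

def kld(pred, gt, eps=1e-12):
    """KL divergence of ground truth from prediction, both as distributions."""
    p = pred / (pred.sum() + eps)
    q = gt / (gt.sum() + eps)
    return float((q * np.log(eps + q / (p + eps))).sum())

def sim(pred, gt, eps=1e-12):
    """Histogram intersection (similarity) of the two normalized maps."""
    p = pred / (pred.sum() + eps)
    q = gt / (gt.sum() + eps)
    return float(np.minimum(p, q).sum())

def auc_judd(pred, fixations):
    """AUC-Judd: prediction values at fixated pixels vs. all other pixels."""
    pos = pred[fixations > 0]                 # scores at ground-truth points
    thresholds = np.sort(np.unique(pos))[::-1]
    n_fix, n_pix = len(pos), pred.size
    tpr, fpr = [0.0], [0.0]
    for t in thresholds:
        tp = (pos >= t).sum()
        fp = (pred >= t).sum() - tp           # non-fixated pixels above t
        tpr.append(tp / n_fix)
        fpr.append(fp / (n_pix - n_fix))
    tpr.append(1.0); fpr.append(1.0)
    return float(np.trapz(tpr, fpr))          # area under the ROC curve
```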