Grounded Human-Object Interaction Hotspots from Video

Tushar Nagarajan, Christoph Feichtenhofer, Kristen Grauman

Abstract

Learning how to interact with objects is an important step towards embodied visual intelligence, but existing techniques suffer from heavy supervision or sensing requirements. We propose an approach to learn human-object interaction "hotspots" directly from video. Rather than treat affordances as a manually supervised semantic segmentation task, our approach learns about interactions by watching videos of real human behavior and anticipating afforded actions. Given a novel image or video, our model infers a spatial hotspot map indicating how an object would be manipulated in a potential interaction, even if the object is currently at rest. Through results with both first- and third-person video, we show the value of grounding affordances in real human-object interactions. Not only are our weakly supervised hotspots competitive with strongly supervised affordance methods, but they can also anticipate object interaction for novel object categories.

Code Repositories

Tushar-N/interaction-hotspots
pytorch

Benchmarks

Benchmark                                     | Method  | AUC-J | KLD  | SIM
video-to-image-affordance-grounding-on-epic   | Hotspot | 0.79  | 1.26 | 0.40
video-to-image-affordance-grounding-on-opra-1 | Hotspot | 0.81  | 1.47 | 0.36
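The metrics above are standard heatmap-comparison measures from the saliency-evaluation literature: KLD is the KL divergence from the ground-truth map to the prediction, and SIM is the histogram intersection of the two normalized maps. A minimal sketch of KLD and SIM, assuming both maps are non-negative 2D arrays (the function names and the smoothing constant `eps` are my own; AUC-J additionally requires binarized ground-truth points, so it is omitted here):

```python
import numpy as np

def _normalize(h, eps=1e-12):
    """Normalize a non-negative heatmap so it sums to 1."""
    h = np.asarray(h, dtype=np.float64)
    return h / (h.sum() + eps)

def kld(pred, gt, eps=1e-12):
    """KL divergence from the ground-truth distribution to the prediction.
    Lower is better; 0 means identical distributions."""
    p, q = _normalize(gt), _normalize(pred)
    return float(np.sum(p * np.log(eps + p / (q + eps))))

def sim(pred, gt):
    """Histogram intersection (similarity) of the two normalized maps.
    Ranges from 0 (disjoint) to 1 (identical)."""
    return float(np.sum(np.minimum(_normalize(pred), _normalize(gt))))
```

For example, an all-ones prediction compared against an identical ground truth yields SIM = 1.0 and KLD ≈ 0, while concentrating the two maps on disjoint regions drives SIM toward 0 and KLD up.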
