Demo2Vec: Reasoning Object Affordances from Online Videos
Daniel Yang, Te-Lin Wu, Silvio Savarese, Kuan Fang, Joseph J. Lim

Abstract
Watching expert demonstrations is an important way for humans and robots to reason about the affordances of unseen objects. In this paper, we consider the problem of reasoning about object affordances through the feature embedding of demonstration videos. We design the Demo2Vec model, which learns to extract an embedded vector from a demonstration video and to predict the interaction region and action label on a target image of the same object. We introduce the Online Product Review dataset for Affordance (OPRA), built by collecting and labeling diverse YouTube product review videos. Our Demo2Vec model outperforms various recurrent neural network baselines on the collected dataset.
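To make the abstract's pipeline concrete, below is a minimal PyTorch sketch of a Demo2Vec-style architecture: a per-frame CNN plus LSTM encodes the demonstration video into an embedding, which feeds an action classifier and, tiled over target-image features, a heatmap decoder for the interaction region. All layer sizes here are illustrative assumptions; the published model additionally uses ConvLSTMs and attention mechanisms not shown.

```python
# Hypothetical sketch of a Demo2Vec-style model; layer choices are assumptions,
# not the paper's exact architecture.
import torch
import torch.nn as nn

class Demo2VecSketch(nn.Module):
    def __init__(self, embed_dim=512, num_actions=7):
        super().__init__()
        # Per-frame feature extractor shared across all demonstration frames.
        self.frame_cnn = nn.Sequential(
            nn.Conv2d(3, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, 5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Temporal aggregation of frame features into the demonstration embedding.
        self.rnn = nn.LSTM(64, embed_dim, batch_first=True)
        # Action label predicted from the demonstration embedding.
        self.action_head = nn.Linear(embed_dim, num_actions)
        # Target-image encoder; heatmap decoder is conditioned on the embedding.
        self.target_cnn = nn.Sequential(nn.Conv2d(3, 64, 3, padding=1), nn.ReLU())
        self.heatmap_head = nn.Conv2d(64 + embed_dim, 1, 1)

    def forward(self, video, target):
        # video: (B, T, 3, H, W) demonstration frames; target: (B, 3, H, W).
        b, t = video.shape[:2]
        feats = self.frame_cnn(video.flatten(0, 1)).view(b, t, -1)
        _, (h, _) = self.rnn(feats)
        embedding = h[-1]                                # (B, embed_dim)
        action_logits = self.action_head(embedding)      # (B, num_actions)
        tgt = self.target_cnn(target)                    # (B, 64, H, W)
        # Tile the demonstration embedding over the target's spatial grid.
        emb_map = embedding[:, :, None, None].expand(-1, -1, *tgt.shape[2:])
        heatmap = self.heatmap_head(torch.cat([tgt, emb_map], dim=1))
        return heatmap, action_logits
```

Training would pair a cross-entropy loss on `action_logits` with a distribution-matching loss (e.g., KL divergence) between the normalized `heatmap` and annotated interaction regions, matching the metrics reported below.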
Benchmarks
| Benchmark | Methodology | Metrics |
|---|---|---|
| video-to-image-affordance-grounding-on-opra | Demo2Vec | KLD: 2.34, Top-1 Action Accuracy: 40.79% |
| video-to-image-affordance-grounding-on-opra-1 | Demo2Vec | AUC-J: 0.85, KLD: 1.20, SIM: 0.48 |
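The KLD and SIM entries above are standard saliency-style comparisons between a predicted and a ground-truth affordance heatmap. A sketch of both is below; the epsilon and normalization conventions are assumptions and may differ from the benchmark's exact implementation.

```python
# Common saliency-metric formulations for KLD (lower is better) and
# SIM (higher is better); constants here are illustrative assumptions.
import numpy as np

def kld(pred, gt, eps=1e-7):
    # KL divergence of the ground-truth map from the prediction,
    # with both maps normalized to sum to 1.
    p = pred / (pred.sum() + eps)
    q = gt / (gt.sum() + eps)
    return float(np.sum(q * np.log(eps + q / (p + eps))))

def sim(pred, gt, eps=1e-7):
    # Histogram intersection ("similarity"); 1.0 means identical maps.
    p = pred / (pred.sum() + eps)
    q = gt / (gt.sum() + eps)
    return float(np.minimum(p, q).sum())
```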