
AssembleNet++: Assembling Modality Representations via Attention Connections

Michael S. Ryoo AJ Piergiovanni Juhana Kangaspunta Anelia Angelova


Abstract

We create a family of powerful video models which are able to: (i) learn interactions between semantic object information and raw appearance and motion features, and (ii) deploy attention in order to better learn the importance of features at each convolutional block of the network. A new network component named peer-attention is introduced, which dynamically learns the attention weights using another block or input modality. Even without pre-training, our models outperform previous work on standard public activity recognition datasets with continuous videos, establishing a new state of the art. We also confirm that our findings, namely that neural connections from the object modality and the use of peer-attention are beneficial, apply generally to different existing architectures, improving their performance. We name our model AssembleNet++. The code will be available at: https://sites.google.com/corp/view/assemblenet/
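To make the peer-attention idea concrete, below is a minimal sketch in PyTorch of channel-wise attention where the weights for one block are computed from a peer block's features (e.g., an object-segmentation modality) rather than from the block itself. The class name PeerAttention, the single fully connected layer, and all tensor shapes are illustrative assumptions for this sketch, not the paper's released implementation.

import torch
import torch.nn as nn

class PeerAttention(nn.Module):
    """Sketch of peer-attention: a peer block's features gate the target
    block's channels. Assumed structure: global average pooling of the peer,
    one linear layer, and a sigmoid to produce per-channel weights."""

    def __init__(self, peer_channels: int, target_channels: int):
        super().__init__()
        # Maps the peer's pooled descriptor to one weight per target channel.
        self.fc = nn.Linear(peer_channels, target_channels)

    def forward(self, target: torch.Tensor, peer: torch.Tensor) -> torch.Tensor:
        # target: (B, C_t, T, H, W) features to be modulated
        # peer:   (B, C_p, T, H, W) features providing the attention signal
        pooled = peer.mean(dim=(2, 3, 4))            # spatiotemporal pool -> (B, C_p)
        weights = torch.sigmoid(self.fc(pooled))     # (B, C_t), values in (0, 1)
        # Broadcast the channel weights over time and space.
        return target * weights[:, :, None, None, None]

# Example: object-modality features gate the channels of an RGB block.
rgb = torch.randn(2, 64, 8, 28, 28)   # appearance features
obj = torch.randn(2, 32, 8, 28, 28)   # object-modality features
attn = PeerAttention(peer_channels=32, target_channels=64)
out = attn(rgb, obj)                  # same shape as rgb: (2, 64, 8, 28, 28)

Because the gating signal comes from a different block, the network can learn which modality should modulate which features; with peer set equal to target, the same module reduces to ordinary self-gating.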

Benchmarks

Benchmark                                   Methodology                       Metrics
action-classification-on-charades           AssembleNet++ 50                  mAP: 59.8
action-classification-on-charades           AssembleNet++ 50 without object   mAP: 54.98
action-classification-on-toyota-smarthome   AssembleNet++                     CS: 63.6
