
TricorNet: A Hybrid Temporal Convolutional and Recurrent Network for Video Action Segmentation

Li Ding; Chenliang Xu


Abstract

Action segmentation, a milestone towards building automatic systems that understand untrimmed videos, has received considerable attention in recent years. It is typically modeled as a sequence labeling problem, but it differs in fundamental ways from text parsing or speech processing. In this paper, we introduce a novel hybrid temporal convolutional and recurrent network (TricorNet) with an encoder-decoder architecture: the encoder is a hierarchy of temporal convolutional kernels that capture the local motion changes of different actions, and the decoder is a hierarchy of recurrent neural networks that learn and memorize long-term action dependencies after the encoding stage. Our model is simple but highly effective for video sequence labeling. Experimental results on three public action segmentation datasets show that the proposed model achieves superior performance over the state of the art.
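
To make the encoder-decoder description concrete, here is a minimal sketch of the idea in PyTorch: temporal convolutions with pooling serve as the encoder, bidirectional LSTMs serve as the decoder, and a linear layer produces per-frame action scores. The layer count, kernel width, hidden size, and placement of the upsampling step are illustrative assumptions, not the authors' exact configuration.

```python
# Hedged sketch of the TricorNet-style encoder-decoder; hyperparameters are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TricorNetSketch(nn.Module):
    def __init__(self, in_dim, num_classes, hidden=64):
        super().__init__()
        # Encoder: temporal convolutions capture local motion changes;
        # max pooling coarsens the temporal resolution at each level.
        self.enc1 = nn.Sequential(
            nn.Conv1d(in_dim, hidden, kernel_size=25, padding=12),
            nn.ReLU(), nn.MaxPool1d(2))
        self.enc2 = nn.Sequential(
            nn.Conv1d(hidden, hidden, kernel_size=25, padding=12),
            nn.ReLU(), nn.MaxPool1d(2))
        # Decoder: bidirectional LSTMs model long-term action dependencies
        # over the encoded sequence.
        self.dec1 = nn.LSTM(hidden, hidden, batch_first=True, bidirectional=True)
        self.dec2 = nn.LSTM(2 * hidden, hidden, batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(2 * hidden, num_classes)

    def forward(self, x):
        # x: (batch, time, features) frame-level video features.
        T = x.size(1)
        h = x.transpose(1, 2)            # (batch, features, time) for Conv1d
        h = self.enc2(self.enc1(h))      # temporal resolution reduced 4x
        h, _ = self.dec1(h.transpose(1, 2))
        h, _ = self.dec2(h)
        # Upsample back to the original number of frames for per-frame labels.
        h = F.interpolate(h.transpose(1, 2), size=T).transpose(1, 2)
        return self.classifier(h)        # (batch, time, num_classes)

# Example usage with random features (dimensions are placeholders):
model = TricorNetSketch(in_dim=128, num_classes=10)
scores = model(torch.randn(2, 200, 128))   # -> (2, 200, 10) per-frame scores
```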

Benchmarks

Benchmark: action-segmentation-on-jigsaws
Methodology: TricorNet
Metrics: Accuracy 82.9, Edit Distance 86.8
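
For reference, the two reported metrics are commonly computed as frame-wise accuracy and a segmental edit score (a normalized Levenshtein distance over the sequence of segment labels). The sketch below shows one common formulation; the exact evaluation protocol used for the JIGSAWS benchmark may differ.

```python
# Hedged sketch of the standard action-segmentation metrics; not the exact benchmark code.
from itertools import groupby

def frame_accuracy(pred, gt):
    """Percentage of frames whose predicted label matches the ground truth."""
    correct = sum(p == g for p, g in zip(pred, gt))
    return 100.0 * correct / len(gt)

def edit_score(pred, gt):
    """Segmental edit score: 100 * (1 - Levenshtein(segments) / max segment count)."""
    # Collapse frame-level labels into segment-level label sequences.
    p = [k for k, _ in groupby(pred)]
    g = [k for k, _ in groupby(gt)]
    # Standard dynamic-programming Levenshtein distance over segments.
    d = [[0] * (len(g) + 1) for _ in range(len(p) + 1)]
    for i in range(len(p) + 1):
        d[i][0] = i
    for j in range(len(g) + 1):
        d[0][j] = j
    for i in range(1, len(p) + 1):
        for j in range(1, len(g) + 1):
            cost = 0 if p[i - 1] == g[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1, d[i - 1][j - 1] + cost)
    return 100.0 * (1.0 - d[-1][-1] / max(len(p), len(g)))
```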
