ProxEmo: Gait-based Emotion Learning and Multi-view Proxemic Fusion for Socially-Aware Robot Navigation

Venkatraman Narayanan, Bala Murali Manoghar, Vishnu Sashank Dorbala, Dinesh Manocha, Aniket Bera

Abstract

We present ProxEmo, a novel end-to-end emotion prediction algorithm for socially aware robot navigation among pedestrians. Our approach predicts the perceived emotions of a pedestrian from walking gaits, which are then used for emotion-guided navigation that takes social and proxemic constraints into account. To classify emotions, we propose a multi-view skeleton graph convolution-based model that works with a commodity camera mounted on a moving robot. Our emotion recognition is integrated into a mapless navigation scheme and makes no assumptions about the environment of pedestrian motion. It achieves a mean average emotion prediction precision of 82.47% on the Emotion-Gait benchmark dataset. We outperform current state-of-the-art algorithms for emotion recognition from 3D gaits. We highlight the benefits of our approach for navigation in indoor scenes using a Clearpath Jackal robot.
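
The skeleton graph convolution idea described above can be pictured with a minimal sketch. The PyTorch block below is illustrative only and is not the authors' exact architecture: the joint count, layer widths, four-class label set (happy/sad/angry/neutral), and the chain-shaped adjacency are assumptions chosen so the sketch is self-contained and runnable.

```python
# Illustrative sketch of a skeleton-graph-convolution gait classifier.
# NOT the ProxEmo architecture; sizes and the adjacency are assumptions.
import torch
import torch.nn as nn

NUM_JOINTS = 16          # assumed skeleton size
NUM_CLASSES = 4          # assumed labels: happy, sad, angry, neutral


class GraphConv(nn.Module):
    """One spatial graph convolution over a fixed skeleton adjacency."""

    def __init__(self, in_ch, out_ch, adjacency):
        super().__init__()
        # Normalized adjacency (with self-loops) is fixed, not learned.
        self.register_buffer("A", adjacency)
        self.proj = nn.Conv2d(in_ch, out_ch, kernel_size=1)

    def forward(self, x):
        # x: (batch, channels, time, joints)
        x = self.proj(x)
        # Aggregate each joint's features from its graph neighbors.
        return torch.einsum("bctj,jk->bctk", x, self.A)


class GaitEmotionNet(nn.Module):
    def __init__(self, adjacency):
        super().__init__()
        self.gcn1 = GraphConv(3, 32, adjacency)   # input: (x, y, z) per joint
        self.gcn2 = GraphConv(32, 64, adjacency)
        self.temporal = nn.Conv2d(64, 64, kernel_size=(9, 1), padding=(4, 0))
        self.head = nn.Linear(64, NUM_CLASSES)

    def forward(self, x):
        # x: (batch, 3, time, joints) gait sequence of 3D joint positions
        x = torch.relu(self.gcn1(x))
        x = torch.relu(self.gcn2(x))
        x = torch.relu(self.temporal(x))          # convolve along time
        x = x.mean(dim=(2, 3))                    # global average pool
        return self.head(x)                       # emotion logits


# Example: a batch of 8 gaits, 75 frames each, with a simple chain
# adjacency standing in for a real skeleton graph.
A = torch.eye(NUM_JOINTS)
for j in range(NUM_JOINTS - 1):
    A[j, j + 1] = A[j + 1, j] = 1.0
A = A / A.sum(dim=1, keepdim=True)                # row-normalize
model = GaitEmotionNet(A)
logits = model(torch.randn(8, 3, 75, NUM_JOINTS))
print(logits.shape)                               # torch.Size([8, 4])
```

In the full pipeline, the predicted emotion would then modulate a proxemic comfort region around each pedestrian so that the navigation planner keeps an appropriate distance; that fusion step is not shown in this sketch.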

Code Repositories

vijay4313/proxemo (official, PyTorch)

Benchmarks

Benchmark                          Methodology                        Metrics
emotion-classification-on-ewalk    Baseline (Vanilla LSTM) [Ewalk]    Accuracy: 55.47
emotion-classification-on-ewalk    STEP [bhattacharya2019step]        Accuracy: 78.24
emotion-classification-on-ewalk    ProxEmo (ours)                     Accuracy: 82.4
