
Distinguishing Homophenes Using Multi-Head Visual-Audio Memory for Lip Reading

Minsu Kim; Jeong Hun Yeo; Yong Man Ro


Abstract

Recognizing speech from silent lip movement, which is called lip reading, is a challenging task due to 1) the inherent information insufficiency of lip movement to fully represent the speech, and 2) the existence of homophenes that have similar lip movements with different pronunciations. In this paper, we try to alleviate the aforementioned two challenges in lip reading by proposing a Multi-head Visual-audio Memory (MVM). Firstly, MVM is trained with audio-visual datasets and remembers audio representations by modelling the inter-relationships of paired audio-visual representations. At the inference stage, visual input alone can extract the saved audio representation from the memory by examining the learned inter-relationships. Therefore, the lip reading model can complement the insufficient visual information with the extracted audio representations. Secondly, MVM is composed of multi-head key memories for saving visual features and one value memory for saving audio knowledge, which is designed to distinguish the homophenes. With the multi-head key memories, MVM extracts possible candidate audio features from the memory, which allows the lip reading model to consider which pronunciations can be represented by the input lip movement. This can also be viewed as an explicit implementation of the one-to-many mapping from viseme to phoneme. Moreover, MVM is employed at multiple temporal levels to take context into account when retrieving from the memory and distinguishing the homophenes. Extensive experimental results verify the effectiveness of the proposed method in lip reading and in distinguishing the homophenes.
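To make the key-value memory idea concrete, below is a minimal sketch of a multi-head visual-audio memory lookup. The class name, dimensions, and the scaled-dot-product softmax addressing are illustrative assumptions, not the paper's exact formulation: each head owns a key memory addressed by the visual feature, and all heads read candidate audio representations from one shared value memory, mirroring the one-to-many viseme-to-phoneme mapping described in the abstract.

```python
# Hypothetical sketch of an MVM-style lookup; names and dimensions are
# assumptions for illustration, not the authors' reference implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiHeadVisualAudioMemory(nn.Module):
    def __init__(self, num_heads=4, num_slots=88, vis_dim=512, aud_dim=512):
        super().__init__()
        # One key memory per head, addressed by visual features.
        self.key_mem = nn.Parameter(torch.randn(num_heads, num_slots, vis_dim))
        # A single shared value memory holding audio knowledge.
        self.value_mem = nn.Parameter(torch.randn(num_slots, aud_dim))

    def forward(self, visual):
        # visual: (batch, vis_dim) feature from the lip encoder.
        # Address each head's key memory with scaled dot-product attention.
        scores = torch.einsum('bd,hnd->bhn', visual, self.key_mem)
        attn = F.softmax(scores / visual.size(-1) ** 0.5, dim=-1)
        # Each head retrieves a candidate audio representation from the
        # shared value memory: multiple pronunciation hypotheses per viseme.
        candidates = torch.einsum('bhn,nd->bhd', attn, self.value_mem)
        return candidates  # (batch, num_heads, aud_dim)

mvm = MultiHeadVisualAudioMemory()
vis = torch.randn(2, 512)    # dummy visual features
print(mvm(vis).shape)        # torch.Size([2, 4, 512])
```

During training, the retrieved candidates would be supervised with the paired audio representations so that the memory learns the audio-visual inter-relationships; at inference, visual input alone drives the lookup, which is the property the abstract highlights.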

Benchmarks

Benchmark | Methodology | Metrics
lipreading-on-lip-reading-in-the-wild | 3D Conv + ResNet-18 + MS-TCN + Multi-Head Visual-Audio Memory | Top-1 Accuracy: 88.5
lipreading-on-lrs2 | Multi-head Visual-Audio Memory | Word Error Rate (WER): 44.5
lipreading-on-lrw-1000 | 3D Conv + ResNet-18 + MS-TCN + Multi-Head Visual-Audio Memory | Top-1 Accuracy: 53.8
