Jianwei Yu, Shi-Xiong Zhang, Jian Wu, Shahram Ghorbani, Bo Wu, Shiyin Kang, Shansong Liu, Xunying Liu, Helen Meng, Dong Yu

Abstract
Automatic recognition of overlapped speech remains a highly challenging task to date. Motivated by the bimodal nature of human speech perception, this paper investigates the use of audio-visual technologies for overlapped speech recognition. Three issues associated with the construction of audio-visual speech recognition (AVSR) systems are addressed. First, the basic architecture designs of AVSR systems, i.e. end-to-end and hybrid, are investigated. Second, purposefully designed modality fusion gates are used to robustly integrate the audio and visual features. Third, in contrast to a traditional pipelined architecture containing explicit speech separation and recognition components, a streamlined and integrated AVSR system optimized consistently using the lattice-free MMI (LF-MMI) discriminative criterion is also proposed. The proposed LF-MMI time-delay neural network (TDNN) system establishes the state of the art on the LRS2 dataset. Experiments on overlapped speech simulated from the LRS2 dataset show that the proposed AVSR system outperforms the audio-only LF-MMI DNN baseline by up to 29.98% absolute in word error rate (WER), and achieves recognition performance comparable to that of a more complex pipelined system. A consistent improvement of 4.89% absolute in WER over the baseline AVSR system using feature fusion is also obtained.
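The modality fusion gate mentioned in the abstract can be illustrated with a minimal sketch. This is not the paper's implementation; it assumes a common gated-fusion formulation in which both streams are projected to a shared space and a learned sigmoid gate decides, per dimension, how much visual evidence to mix into the audio stream. All dimensions and weight initializations below are illustrative placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical dimensions: 40-dim audio features (e.g. filterbanks) and
# 512-dim visual features, both projected to a shared 256-dim space.
D_AUDIO, D_VISUAL, D_MODEL = 40, 512, 256

W_a = rng.standard_normal((D_AUDIO, D_MODEL)) * 0.01      # audio projection
W_v = rng.standard_normal((D_VISUAL, D_MODEL)) * 0.01     # visual projection
W_g = rng.standard_normal((2 * D_MODEL, D_MODEL)) * 0.01  # gate weights
b_g = np.zeros(D_MODEL)

def gated_fusion(audio_feat, visual_feat):
    """Fuse per-frame audio and visual features with a sigmoid gate.

    The gate g in (0, 1) modulates the visual stream before it is added
    to the audio stream; intuitively, training should open the gate wider
    on frames where audio is corrupted by an overlapping speaker.
    """
    a = audio_feat @ W_a
    v = visual_feat @ W_v
    g = sigmoid(np.concatenate([a, v], axis=-1) @ W_g + b_g)
    return a + g * v

# One 100-frame utterance.
audio = rng.standard_normal((100, D_AUDIO))
visual = rng.standard_normal((100, D_VISUAL))
fused = gated_fusion(audio, visual)
print(fused.shape)  # (100, 256)
```

In a real system the projections and gate would be layers trained jointly with the acoustic model (here, under the LF-MMI criterion), rather than fixed random matrices.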
Benchmarks
| Benchmark | Methodology | Metrics |
|---|---|---|
| audio-visual-speech-recognition-on-lrs2 | LF-MMI TDNN | Test WER (%): 5.9 |
| automatic-speech-recognition-on-lrs2 | LF-MMI TDNN | Test WER (%): 6.7 |
| lipreading-on-lrs2 | LF-MMI TDNN | Word Error Rate (%): 48.86 |