Rostislav Kolobov, Olga Okhapkina, Olga Omelchishina, Andrey Platunov, Roman Bedyakin, Vyacheslav Moshkin, Dmitry Menshikov, Nikolay Mikhaylovskiy

Abstract

The performance of automated speech recognition (ASR) systems varies significantly across application scenarios. However, ASR results reported by vendors and research groups are typically limited to a few simple domains (such as audiobooks or TED talks) or to proprietary datasets. To fill this gap, we release NTR MediaSpeech, an open 10-hour dataset for ASR system evaluation covering four languages: Spanish, French, Turkish, and Arabic. The data was collected from the official YouTube channels of media outlets in each language and transcribed manually. We estimate the word error rate (WER) of the dataset to be below 5%. We benchmarked a number of commercial and open-source ASR systems and report the results. We also release open baseline QuartzNet models for each language to support further research and development.
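All benchmark figures below are word error rates. As a reference for how the metric is computed, here is a minimal sketch of WER as word-level Levenshtein edit distance divided by the reference length; it is illustrative only (real evaluations, including this one, typically normalize case and punctuation first, and the authors' exact normalization is not specified here):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word Error Rate: word-level edit distance / reference word count."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between the first i reference words
    # and the first j hypothesis words
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i  # deleting i reference words
    for j in range(len(hyp) + 1):
        dp[0][j] = j  # inserting j hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            substitution = dp[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            deletion = dp[i - 1][j] + 1
            insertion = dp[i][j - 1] + 1
            dp[i][j] = min(substitution, deletion, insertion)
    return dp[len(ref)][len(hyp)] / len(ref)

print(wer("the cat sat on the mat", "the cat sit on mat"))
# one substitution + one deletion over 6 reference words -> 2/6
```

A WER of 0.13 (e.g. QuartzNet on Arabic below) therefore means roughly 13 word-level errors per 100 reference words.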
Code repository
NTRLab/MediaSpeech
Official
Mentioned in GitHub
Benchmarks
Benchmark: speech-recognition-on-mediaspeech

| Method | WER (Arabic) | WER (French) | WER (Spanish) | WER (Turkish) |
|---|---|---|---|---|
| Wit | 0.2333 | 0.1759 | 0.0879 | 0.0768 |
| Silero | – | – | 0.3070 | – |
| Quartznet | 0.1300 | 0.1915 | 0.1826 | 0.1422 |
| Azure | 0.3016 | 0.1683 | 0.1296 | 0.2296 |
| VOSK | 0.3085 | 0.2111 | 0.1970 | 0.3050 |
| | 0.4464 | 0.2385 | 0.2176 | 0.2707 |
| Deepspeech | – | 0.4741 | 0.4236 | – |
| wav2vec | 0.9596 | 0.3113 | 0.2469 | 0.5812 |