EnCLAP: Combining Neural Audio Codec and Audio-Text Joint Embedding for Automated Audio Captioning
Jaeyeon Kim Jaeyoon Jung Jinjoo Lee Sang Hoon Woo

Abstract
We propose EnCLAP, a novel framework for automated audio captioning. EnCLAP employs two acoustic representation models, EnCodec and CLAP, along with a pretrained language model, BART. We also introduce a new training objective called masked codec modeling that improves the acoustic awareness of the pretrained language model. Experimental results on AudioCaps and Clotho demonstrate that our model surpasses the performance of baseline models. Source code will be available at https://github.com/jaeyeonkim99/EnCLAP. An online demo is available at https://huggingface.co/spaces/enclap-team/enclap.
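The abstract describes combining discrete EnCodec tokens and a CLAP audio-text embedding as inputs to a BART-style language model, plus a masked codec modeling objective. The sketch below illustrates one plausible way these pieces could fit together; all dimensions, the mask-token convention, and the module structure are assumptions for illustration, not details taken from the paper.

```python
import torch
import torch.nn as nn

# Hypothetical dimensions -- assumptions, not values from the paper.
NUM_CODEC_TOKENS = 1024    # EnCodec codebook size (assumed)
D_MODEL = 768              # hidden size of the language model (assumed)
CLAP_DIM = 512             # CLAP audio embedding size (assumed)
MASK_ID = NUM_CODEC_TOKENS # extra id reserved for a [MASK] token (assumed)

class EnCLAPStyleEncoderInput(nn.Module):
    """Sketch: embed discrete codec tokens, prepend a projected CLAP
    embedding, and produce a sequence suitable for a seq2seq LM like BART."""
    def __init__(self):
        super().__init__()
        self.codec_embed = nn.Embedding(NUM_CODEC_TOKENS + 1, D_MODEL)
        self.clap_proj = nn.Linear(CLAP_DIM, D_MODEL)

    def forward(self, codec_tokens, clap_embedding):
        # codec_tokens: (B, T) int64; clap_embedding: (B, CLAP_DIM)
        tok = self.codec_embed(codec_tokens)                # (B, T, D)
        clap = self.clap_proj(clap_embedding).unsqueeze(1)  # (B, 1, D)
        return torch.cat([clap, tok], dim=1)                # (B, 1+T, D)

def mask_codec_tokens(tokens, mask_prob=0.15):
    """Sketch of a masked-codec-modeling corruption step: randomly replace
    a fraction of codec tokens with a mask id; the model would be trained
    to recover the originals (details here are assumptions)."""
    mask = torch.rand_like(tokens, dtype=torch.float) < mask_prob
    corrupted = tokens.masked_fill(mask, MASK_ID)
    return corrupted, mask
```

A usage example: given a batch of codec token sequences and CLAP embeddings, `EnCLAPStyleEncoderInput` yields a `(batch, 1 + seq_len, D_MODEL)` tensor that could serve as encoder input, while `mask_codec_tokens` produces the corrupted sequence and the boolean mask indicating which positions the auxiliary loss would target.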
Benchmarks
| Benchmark | Model | CIDEr | METEOR | SPICE | SPIDEr |
|---|---|---|---|---|---|
| audio-captioning-on-audiocaps | EnCLAP-large | 0.8029 | 0.2554 | 0.1879 | 0.4954 |
| audio-captioning-on-audiocaps | EnCLAP-base | 0.7795 | 0.2473 | 0.1863 | 0.4829 |