
Abstract
We introduce Spirit LM, a foundation multimodal language model that freely mixes text and speech. Our model is based on a 7B pretrained text language model that we extend to the speech modality by continuously training it on text and speech units. Speech and text sequences are concatenated as a single stream of tokens, and trained with a word-level interleaving method using a small automatically-curated speech-text parallel corpus. Spirit LM comes in two versions: a Base version that uses speech phonetic units (HuBERT) and an Expressive version that models expressivity using pitch and style units in addition to the phonetic units. For both versions, the text is encoded with subword BPE tokens. The resulting model displays both the semantic abilities of text models and the expressive abilities of speech models. Additionally, we demonstrate that Spirit LM can learn new tasks in a few-shot fashion across modalities (i.e. ASR, TTS, Speech Classification). We make available model weights and inference code.
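The word-level interleaving described above — concatenating text BPE tokens and speech units into a single stream, switching modality at word boundaries — can be sketched roughly as follows. This is a minimal illustration, not the paper's actual implementation: the special tokens, the per-word unit alignment, and all function names are assumptions.

```python
# Hypothetical sketch of word-level speech-text interleaving.
# Assumes word-aligned parallel data: each word has both a text
# form (stand-in for BPE tokens) and a list of discrete speech
# units (stand-in for deduplicated HuBERT units).

SPEECH_TOKEN = "[SPEECH]"  # illustrative modality marker
TEXT_TOKEN = "[TEXT]"      # illustrative modality marker

def interleave(words, speech_units, switch_at):
    """Build one token stream that flips modality at chosen word indices.

    words        : list of word strings
    speech_units : list of per-word unit-id lists
    switch_at    : set of word indices where the modality flips
    """
    stream, in_speech = [], False  # start in text mode by convention
    for i, (word, units) in enumerate(zip(words, speech_units)):
        if i in switch_at:
            in_speech = not in_speech
            stream.append(SPEECH_TOKEN if in_speech else TEXT_TOKEN)
        if in_speech:
            # emit the word's speech units as unit tokens
            stream.extend(f"[Hu{u}]" for u in units)
        else:
            stream.append(word)
    return stream

# Switch to speech at word index 1: "the" stays text, the rest is units.
print(interleave(["the", "cat", "sat"],
                 [[12, 7], [3], [44, 9]],
                 {1}))
```

During training, the model sees such mixed streams as ordinary next-token prediction targets, which is what lets it cross modalities at inference time.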
Benchmarks
| Benchmark | Model | Metrics |
|---|---|---|
| language-modelling-on-2000-hub5-english | MMLU | 10-stage average accuracy: 10 |
| language-modelling-on-salmon | Spirit LM (Expressive) | Background (Domain) Consistency: 55.0; Background (Random) Consistency: 64.0; Background Alignment: 59.5; Gender Consistency: 85.0; Room Consistency: 54.5; Sentiment Alignment: 52.0; Sentiment Consistency: 73.5; Speaker Consistency: 81.0 |
| language-modelling-on-salmon | Spirit LM (Base) | Background (Domain) Consistency: 53.5; Background (Random) Consistency: 55.5; Background Alignment: 51.5; Gender Consistency: 67.0; Room Consistency: 54.5; Sentiment Alignment: 48.0; Sentiment Consistency: 54.5; Speaker Consistency: 69.5 |