YourMT3+: Multi-instrument Music Transcription with Enhanced Transformer Architectures and Cross-dataset Stem Augmentation

Sungkyun Chang; Emmanouil Benetos; Holger Kirchhoff; Simon Dixon

Abstract

Multi-instrument music transcription aims to convert polyphonic music recordings into musical scores assigned to each instrument. This task is challenging for modeling as it requires simultaneously identifying multiple instruments and transcribing their pitch and precise timing, and the lack of fully annotated data adds to the training difficulties. This paper introduces YourMT3+, a suite of models for enhanced multi-instrument music transcription based on the recent language token decoding approach of MT3. We enhance its encoder by adopting a hierarchical attention transformer in the time-frequency domain and integrating a mixture of experts. To address data limitations, we introduce a new multi-channel decoding method for training with incomplete annotations and propose intra- and cross-stem augmentation for dataset mixing. Our experiments demonstrate direct vocal transcription capabilities, eliminating the need for voice separation pre-processors. Benchmarks across ten public datasets show our models' competitiveness with, or superiority to, existing transcription models. Further testing on pop music recordings highlights the limitations of current models. Fully reproducible code and datasets are available with demos at https://github.com/mimbres/YourMT3.
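The cross-stem augmentation idea described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes each training example provides per-instrument stem waveforms of equal length together with their note annotations, and all names (`cross_stem_mix`, the dict layout, `p_swap`) are hypothetical.

```python
import random
import numpy as np

def cross_stem_mix(example_a, example_b, p_swap=0.5, seed=None):
    """Mix stems from two training examples (possibly drawn from
    different datasets) into one audio mixture with merged annotations.

    Each example is assumed to be a dict:
        {"stems": {instrument: waveform}, "notes": {instrument: [notes]}}
    with all waveforms of equal length.
    """
    rng = random.Random(seed)
    stems, notes = {}, {}
    # Keep every stem of the first example.
    for name, wav in example_a["stems"].items():
        stems[name] = wav
        notes[name] = example_a["notes"][name]
    # With probability p_swap, borrow each non-conflicting stem
    # from the second example (the cross-dataset part).
    for name, wav in example_b["stems"].items():
        if name not in stems and rng.random() < p_swap:
            stems[name] = wav
            notes[name] = example_b["notes"][name]
    # Sum the selected stems into one training mixture.
    mixture = np.sum(np.stack(list(stems.values())), axis=0)
    return mixture, notes
```

The merged `notes` dict keeps the annotations aligned with the instruments actually present in the mixture, so incomplete labels from one dataset can be complemented by stems from another.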

Code Repositories

mimbres/yourmt3 (official)

Benchmarks

Benchmark | Method | Metrics
multi-instrument-music-transcription-on | YourMT3+ (YPTF.MoE+M) | Multi F1: 74.84
multi-instrument-music-transcription-on | MT3 (colab) | Multi F1: 57.69
multi-instrument-music-transcription-on | MT3 | Multi F1: 62
multi-instrument-music-transcription-on-urmp | MT3 | Multi F1: 59
multi-instrument-music-transcription-on-urmp | YourMT3+ (YPTF.MoE+M) | Multi F1: 67.98
music-transcription-on-maestro | YourMT3+ (YPTF.MoE+M) noPS | Onset F1: 96.98
music-transcription-on-maestro | YourMT3+ (YPTF.MoE+M) | Onset F1: 96.52
music-transcription-on-maps | YourMT3+ (YPTF.MoE+M, unseen) noPS | Onset F1: 88.73
music-transcription-on-maps | YourMT3+ (YPTF+S, unseen) | Onset F1: 88.37
music-transcription-on-slakh2100 | MT3 (colab) | Onset F1: 75.2; note-level F-measure, no offset (Fno): 0.752
music-transcription-on-slakh2100 | YourMT3+ (YPTF.MoE+M) | Onset F1: 84.56; note-level F-measure, no offset (Fno): 0.8456
music-transcription-on-slakh2100 | PerceiverTF | Onset F1: 81.9; note-level F-measure, no offset (Fno): 0.819
music-transcription-on-urmp | MT3 | Onset F1: 77
music-transcription-on-urmp | YourMT3+ (YPTF.MoE+M) | Onset F1: 81.79
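The Onset F1 scores above are note-level F-measures that credit a predicted note when its onset falls within a small tolerance of a reference onset (conventionally ±50 ms, as in mir_eval). The sketch below is a simplified illustration of that matching, assuming onsets only; the full metric also requires pitch agreement, and Multi F1 additionally requires the instrument label to match.

```python
def onset_f1(ref_onsets, est_onsets, tol=0.05):
    """Note-level onset F-measure: greedily match each estimated onset
    to an unused reference onset within +/- tol seconds."""
    ref = sorted(ref_onsets)
    est = sorted(est_onsets)
    used = [False] * len(ref)
    matched = 0
    for e in est:
        for i, r in enumerate(ref):
            if not used[i] and abs(e - r) <= tol:
                used[i] = True
                matched += 1
                break
    precision = matched / len(est) if est else 0.0
    recall = matched / len(ref) if ref else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Example: three reference notes; the estimate hits two within 50 ms
# and adds one spurious note, so precision = recall = 2/3.
ref = [0.50, 1.00, 1.50]
est = [0.51, 1.02, 2.00]
print(round(onset_f1(ref, est), 3))  # -> 0.667
```

A note-level F-measure "with no offset" (Fno) ignores when notes end, which is why the Slakh2100 rows report Fno as Onset F1 rescaled to [0, 1].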
