Music Transformer

Cheng-Zhi Anna Huang, Ashish Vaswani, Jakob Uszkoreit, Noam Shazeer, Ian Simon, Curtis Hawthorne, Andrew M. Dai, Matthew D. Hoffman, Monica Dinculescu, Douglas Eck

Abstract

Music relies heavily on repetition to build structure and meaning. Self-reference occurs on multiple timescales, from motifs to phrases to the reuse of entire sections of music, such as in pieces with ABA structure. The Transformer (Vaswani et al., 2017), a sequence model based on self-attention, has achieved compelling results in many generation tasks that require maintaining long-range coherence. This suggests that self-attention might also be well-suited to modeling music. In musical composition and performance, however, relative timing is critically important. Existing approaches for representing relative positional information in the Transformer modulate attention based on pairwise distance (Shaw et al., 2018). This is impractical for long sequences such as musical compositions, since the memory required for the intermediate relative-position representations grows quadratically in the sequence length. We propose an algorithm that reduces this intermediate memory requirement to linear in the sequence length. This enables us to demonstrate that a Transformer with our modified relative attention mechanism can generate minute-long compositions (thousands of steps, four times the length modeled in Oore et al., 2018) with compelling structure, generate continuations that coherently elaborate on a given motif, and, in a sequence-to-sequence setup, generate accompaniments conditioned on melodies. We evaluate the Transformer with our relative attention mechanism on two datasets, JSB Chorales and Piano-e-Competition, and obtain state-of-the-art results on the latter.
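
The memory reduction comes from the paper's "skewing" procedure: rather than materializing the (L, L, d) tensor of per-pair relative embeddings used by Shaw et al. (2018), the model multiplies the queries directly against a single (L, d) table of relative-distance embeddings and rearranges the result with a pad-reshape-slice. Below is a minimal PyTorch sketch of that procedure; the function and tensor names (`skewed_relative_logits`, `Er`) are ours for illustration, not taken from an official implementation.

```python
import torch
import torch.nn.functional as F

def skewed_relative_logits(q, Er):
    """Relative-position logits via the "skewing" trick (Huang et al., 2018).

    Avoids materializing the O(L^2 * d) tensor of pairwise relative
    embeddings; the only relative-embedding intermediate is Er itself (L x d).

    q  : (batch, heads, L, d)  queries
    Er : (L, d)                embeddings for relative distances -(L-1)..0,
                               so Er[L-1] corresponds to distance 0
    Returns S_rel : (batch, heads, L, L), where entry (i, j) holds the logit
    for relative distance j - i (valid for j <= i under a causal mask).
    """
    L = q.size(-2)
    rel = torch.matmul(q, Er.transpose(0, 1))    # (b, h, L, L)
    rel = F.pad(rel, (1, 0))                     # dummy column on the left -> (b, h, L, L+1)
    rel = rel.reshape(*rel.shape[:-2], L + 1, L) # skew by reshaping -> (b, h, L+1, L)
    return rel[..., 1:, :]                       # drop the first row -> (b, h, L, L)

# The relative logits are added to the usual content logits before softmax:
q = k = torch.randn(1, 4, 8, 16)                 # toy sizes: batch 1, 4 heads, L=8, d=16
Er = torch.randn(8, 16)
logits = (q @ k.transpose(-2, -1) + skewed_relative_logits(q, Er)) / 16 ** 0.5
```

The pad-reshape-slice shifts each row of Q·Erᵀ so that entry (i, j) lands on the embedding for distance j - i; the entries above the diagonal are garbage but are removed by the causal mask anyway.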

Code Repositories

harryboos/Auto-Music-Generation (TensorFlow)
scpark20/Music-GPT-2 (TensorFlow)
dvruette/figaro (PyTorch)
jason9693/musictransformer-pytorch (PyTorch)
vvvm23/TchAIkovsky-Legacy (PyTorch)
ololo123321/maestro (TensorFlow)
Chatha-Sphere/pno-ai (PyTorch)
Jesplar/LSTM-MusicGenerator

Benchmarks

Benchmark                        Methodology        Metrics
Music Modeling on JSB Chorales   Music Transformer  NLL: 0.335
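
Assuming the reported metric is the standard per-token negative log-likelihood (lower is better), it can be computed from model logits as in this minimal sketch; the function name and shapes are illustrative.

```python
import torch
import torch.nn.functional as F

def per_token_nll(logits, targets):
    # logits : (N, vocab) unnormalized scores for N predicted events
    # targets: (N,) ground-truth event indices
    # cross_entropy averages -log p(target) over the N events
    return F.cross_entropy(logits, targets).item()
```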
