Abstract
---

Score-based generative models and diffusion probabilistic models have been successful at generating high-quality samples in continuous domains such as images and audio. However, due to their Langevin-inspired sampling mechanisms, their application to discrete and sequential data has been limited. In this work, we present a technique for training diffusion models on sequential data by parameterizing the discrete domain in the continuous latent space of a pre-trained variational autoencoder. Our method is non-autoregressive: it learns to generate sequences of latent embeddings through the reverse process, offering parallel generation with a constant number of iterative refinement steps. We apply this technique to modeling symbolic music and show strong unconditional generation and post-hoc conditional infilling results compared to autoregressive language models operating over the same continuous embeddings.
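The recipe the abstract describes amounts to running a standard DDPM-style forward/reverse process over sequences of VAE latents: encode discrete tokens with a frozen, pre-trained VAE, train a denoiser to predict the injected noise, then refine all positions in parallel for a fixed number of steps before decoding. The sketch below is a minimal illustration, not the authors' implementation; the `Denoiser` architecture, the latent dimensionality, and the linear noise schedule are all assumptions, and the VAE encode/decode calls are omitted.

```python
# Minimal sketch (not the authors' code): DDPM-style diffusion over the
# continuous latent sequences of a pre-trained VAE. Architecture, schedule,
# and dimensions are illustrative assumptions.
import torch
import torch.nn as nn

T = 1000                                    # number of diffusion steps (assumed)
betas = torch.linspace(1e-4, 0.02, T)       # linear noise schedule (assumed)
alphas_cumprod = torch.cumprod(1.0 - betas, dim=0)

class Denoiser(nn.Module):
    """Toy stand-in for the noise-prediction network over latent sequences."""
    def __init__(self, latent_dim=8, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim + 1, hidden), nn.SiLU(),
            nn.Linear(hidden, latent_dim),
        )

    def forward(self, z_t, t):
        # Broadcast the normalized timestep onto every sequence position.
        t_feat = (t.float() / T)[:, None, None].expand(*z_t.shape[:2], 1)
        return self.net(torch.cat([z_t, t_feat], dim=-1))

def training_step(model, z0):
    """One denoising step on clean VAE latents z0 of shape (B, L, D)."""
    t = torch.randint(0, T, (z0.shape[0],))
    a_bar = alphas_cumprod[t][:, None, None]
    noise = torch.randn_like(z0)
    z_t = a_bar.sqrt() * z0 + (1 - a_bar).sqrt() * noise  # forward process q(z_t | z0)
    return nn.functional.mse_loss(model(z_t, t), noise)

@torch.no_grad()
def sample(model, shape):
    """Ancestral sampling: reverse the diffusion from pure noise."""
    z = torch.randn(shape)
    for i in reversed(range(T)):
        t = torch.full((shape[0],), i)
        alpha, a_bar = 1.0 - betas[i], alphas_cumprod[i]
        eps = model(z, t)
        z = (z - betas[i] / (1 - a_bar).sqrt() * eps) / alpha.sqrt()
        if i > 0:
            z = z + betas[i].sqrt() * torch.randn_like(z)
    return z  # decode with the pre-trained VAE decoder to recover tokens

model = Denoiser(latent_dim=8)
z0 = torch.randn(4, 32, 8)        # pretend batch of VAE-encoded latent sequences
loss = training_step(model, z0)   # one training step on clean latents
z_gen = sample(model, (4, 32, 8)) # generate all 32 positions in parallel
```

Because every reverse step updates the whole latent sequence at once, the number of network evaluations is a constant T regardless of sequence length, which is what makes generation parallel and non-autoregressive. Post-hoc infilling could, in principle, be obtained by clamping the latents of observed segments at each reverse step and letting the model refine only the masked positions.
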
Year | Venue | DocType |
---|---|---
2021 | ISMIR | Conference |

Citations | PageRank | References
---|---|---
0 | 0.34 | 0

Authors (4)
---

Name | Order | Citations | PageRank |
---|---|---|---
Gautam Mittal | 1 | 0 | 0.34 |
Jesse H. Engel | 2 | 326 | 20.21 |
Curtis Hawthorne | 3 | 27 | 4.39 |
Ian Simon | 4 | 675 | 46.26 |