Title: Text-Free Prosody-Aware Generative Spoken Language Modeling
Abstract: Speech pre-training has primarily demonstrated efficacy on classification tasks, while its capability of generating novel speech, similar to how GPT-2 can generate coherent paragraphs, has barely been explored. Generative Spoken Language Modeling (GSLM) (Lakhotia et al., 2021) is the only prior work addressing the generative aspects of speech pre-training; it replaces text with discovered phone-like units for language modeling and shows the ability to generate meaningful novel sentences. Unfortunately, despite eliminating the need for text, the units used in GSLM discard most of the prosodic information. Hence, GSLM fails to leverage prosody for better comprehension and does not generate expressive speech. In this work, we present a prosody-aware generative spoken language model (pGSLM). It is composed of a multi-stream transformer language model (MS-TLM) of speech, represented as discovered unit and prosodic feature streams, and an adapted HiFi-GAN model converting MS-TLM outputs to waveforms. We devise a series of metrics for prosody modeling and generation, and re-use metrics from GSLM for content modeling. Experimental results show that the pGSLM can utilize prosody to improve both prosody and content modeling, and also generate natural, meaningful, and coherent speech given a spoken prompt.
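The MS-TLM described in the abstract consumes parallel, aligned streams: discovered units plus prosodic features. As an illustrative sketch only — the function name, vocabulary sizes, and dimensions below are assumptions, not the paper's actual implementation — the per-step input to such a multi-stream model can be formed by embedding each stream separately and summing the embeddings:

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed (illustrative) vocabulary sizes and model width.
N_UNITS, N_DUR, N_F0, D_MODEL = 100, 32, 32, 16

# One embedding table per stream: discovered units, quantized
# duration, and quantized F0.
unit_emb = rng.normal(size=(N_UNITS, D_MODEL))
dur_emb = rng.normal(size=(N_DUR, D_MODEL))
f0_emb = rng.normal(size=(N_F0, D_MODEL))

def ms_tlm_input(units, durs, f0s):
    """Combine three aligned token streams into one sequence of
    D_MODEL-dimensional vectors by summing per-stream embeddings."""
    return unit_emb[units] + dur_emb[durs] + f0_emb[f0s]

# A 4-step prompt: unit ids with aligned duration and F0 bins.
x = ms_tlm_input(np.array([5, 17, 17, 42]),
                 np.array([2, 3, 1, 4]),
                 np.array([10, 11, 9, 12]))
print(x.shape)  # (4, 16)
```

The summed sequence would then be fed to an autoregressive transformer that predicts the next step of all three streams jointly; a vocoder such as HiFi-GAN maps the sampled streams back to a waveform.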
Year: 2022
DOI: 10.18653/v1/2022.acl-long.593
Venue: Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (ACL 2022), Vol. 1: Long Papers
DocType:
Volume:
Citations: 2
Conference: Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
PageRank: 0.35
References: 0
Authors: 11
Order  Name                   Citations/PageRank
1      Eugene Kharitonov      676.63
2      Ann B Lee              60256.97
3      Adam Polyak            506.09
4      Yossi Adi              879.18
5      Jade Copet             40.70
6      Kushal Lakhotia        40.70
7      Tu Anh T. Nguyen       569.27
8      Morgane Rivière        82.54
9      Abdel-rahman Mohamed   3772266.13
10     Emmanuel Dupoux        23837.33
11     Wei-Ning Hsu           11513.93