Title
Auditory Separation of a Conversation from Background via Attentional Gating.
Abstract
We present a model for separating a set of voices from a sound mixture containing an unknown number of sources. Our Attentional Gating Network (AGN) uses a variable attentional context to specify which speakers in the mixture are of interest. The attentional context is specified by an embedding vector that modifies the processing of a neural network through an additive bias. Individual speaker embeddings are learned to separate a single speaker, while superpositions of the individual speaker embeddings are used to separate sets of speakers. We first evaluate AGN on a traditional single-speaker separation task and show a 9% improvement over comparable models. Then, we introduce a new task: separating an arbitrary subset of voices from a mixture containing an unknown number of voices, inspired by the human ability to separate a conversation of interest from background chatter in a cafeteria. We show that AGN is the only model capable of solving this task, performing only 7% worse than on the single-speaker separation task.
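As a rough illustration of the mechanism described in the abstract, the sketch below shows how a speaker-context embedding could bias a layer's activations additively, and how the embeddings of several target speakers could be superposed into a single attentional context. All names, dimensions, and the choice of PyTorch are assumptions made for illustration; this is not the authors' implementation.

import torch
import torch.nn as nn

class AttentionalGatingLayer(nn.Module):
    """Hypothetical layer whose activations are biased by a speaker-context embedding."""
    def __init__(self, in_dim: int, out_dim: int, embed_dim: int):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)
        # Maps the attentional context to an additive bias for this layer
        # (the "additive bias" mechanism described in the abstract).
        self.context_to_bias = nn.Linear(embed_dim, out_dim, bias=False)

    def forward(self, x: torch.Tensor, context: torch.Tensor) -> torch.Tensor:
        # The context embedding shifts the pre-activation additively.
        return torch.relu(self.linear(x) + self.context_to_bias(context))

# Illustrative usage: superpose individual speaker embeddings to attend to a
# set of speakers. Speaker indices, dimensions, and feature shapes are made up.
num_speakers, embed_dim = 100, 32
speaker_embeddings = nn.Embedding(num_speakers, embed_dim)   # learned per-speaker vectors
targets = torch.tensor([3, 17])                               # speakers of interest
context = speaker_embeddings(targets).sum(dim=0)              # superposition of embeddings

layer = AttentionalGatingLayer(in_dim=257, out_dim=256, embed_dim=embed_dim)
mixture_frames = torch.randn(8, 257)                          # e.g. spectrogram frames of the mixture
gated = layer(mixture_frames, context)                        # context-biased activations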
Year: 2019
Venue: arXiv: Sound
DocType: Journal
Volume: abs/1905.10751
Citations: 0
PageRank: 0.34
References: 0
Authors: 2
Name                  Order  Citations  PageRank
Shariq Mobin          1      1          1.05
Bruno A. Olshausen    2      493        66.79