Abstract |
---|
Sound zones are generated to provide independent audio reproduction to multiple people in the same room using loudspeakers. In this article, sound zones are formulated in terms of a moving horizon framework. This framework allows the reproduction scenario to be time-varying and to adapt to changes, e.g., in the location of the zones or in the audio signal. The framework is tested using both simulated and measured room impulse responses from eight loudspeakers in a rectangular room. The performance is investigated using signals band-limited to 35–500 Hz, but the framework is not restricted to a particular frequency range. The experimental results show that it is possible to achieve on the order of 4 dB higher separation between the zones using the proposed framework, relative to a conventional time-invariant solution. This gain arises from knowledge of the audio content currently being reproduced in the zones, and it is obtained without deteriorating the reproduction accuracy or increasing the signal energy injected into the loudspeakers. |
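The moving-horizon idea summarized in the abstract can be sketched, very loosely, as a block-wise re-optimization of loudspeaker weights that trades bright-zone accuracy against dark-zone leakage. This is a minimal illustrative sketch only: the random transfer matrices, the single-tap weight model, and all names (`G_bright`, `G_dark`, `lam`) are assumptions for demonstration, not the paper's actual acoustic model or algorithm.

```python
import numpy as np

# Hypothetical setup: static transfer matrices from L loudspeakers to
# microphones in a "bright" (listening) and a "dark" (quiet) zone.
rng = np.random.default_rng(0)
L, Mb, Md = 8, 4, 4                      # 8 loudspeakers, 4 mics per zone
G_bright = rng.standard_normal((Mb, L))  # loudspeakers -> bright-zone mics
G_dark = rng.standard_normal((Md, L))    # loudspeakers -> dark-zone mics
target = np.ones(Mb)                     # desired unit pressure in the bright zone

def solve_weights(lam, reg=1e-6):
    """Solve min_w ||G_bright w - target||^2 + lam ||G_dark w||^2 + reg ||w||^2."""
    A = G_bright.T @ G_bright + lam * (G_dark.T @ G_dark) + reg * np.eye(L)
    return np.linalg.solve(A, G_bright.T @ target)

def moving_horizon(audio, block=16, lam=10.0):
    """Re-solve the loudspeaker weights for every block of the audio signal,
    apply them, and slide forward: a receding-horizon style loop."""
    bright_out = np.empty_like(audio)
    for start in range(0, len(audio), block):
        w = solve_weights(lam)            # could adapt lam or target per block
        seg = audio[start:start + block]
        bright_out[start:start + len(seg)] = seg * (G_bright @ w).mean()
    return bright_out
```

Increasing `lam` suppresses the energy reaching the dark zone at the cost of bright-zone accuracy; the moving-horizon loop simply repeats this trade-off per block so it can track time-varying conditions.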
Year | DOI | Venue |
---|---|---|
2020 | 10.1109/TASLP.2019.2951995 | IEEE/ACM Transactions on Audio, Speech, and Language Processing |
Keywords | Field | DocType |
---|---|---|
Loudspeakers, Finite impulse response filters, Frequency control, Microphones, Speech processing, Indexes, Information filtering | Computer science, Horizon, Speech recognition | Journal |
Volume | Issue | ISSN |
---|---|---|
28 | 1 | 2329-9290 |
Citations | PageRank | References |
---|---|---|
0 | 0.34 | 2 |
Authors |
---|
2 |
Name | Order | Citations | PageRank |
---|---|---|---|
Martin Bo Møller | 1 | 0 | 0.68 |
Jan Østergaard | 2 | 201 | 28.38 |