Abstract
---
Traditional uses of virtual audio environments tend to focus on perceptually accurate acoustic representations. Though spatialization of sound sources is important, musical applications also demand control over the sonic representation itself. The proposed framework allows for the creation of perceptually immersive scenes that function as musical instruments. Loudspeakers and microphones are modeled within the scene along with the listener/performer, creating a navigable 3D sonic space where sound sources and sinks process audio according to user-defined spatial mappings.
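The core idea in the abstract, that sinks attenuate sources according to a user-defined spatial mapping as the performer navigates the scene, can be illustrated with a minimal sketch. The function names and the inverse-distance rolloff mapping below are assumptions chosen for illustration; they are not the paper's actual implementation.

```python
import math

def distance(a, b):
    """Euclidean distance between two 3D points."""
    return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

def sink_gain(source_pos, sink_pos, rolloff=1.0, ref_dist=1.0):
    """Illustrative spatial mapping (assumed, not from the paper):
    gain falls off with source-to-sink distance by an inverse-distance
    law, clamped at a reference distance to avoid blowing up near 0."""
    d = max(distance(source_pos, sink_pos), ref_dist)
    return (ref_dist / d) ** rolloff

# A listener/performer "sink" moving away from a fixed source:
source = (0.0, 0.0, 0.0)
for x in (1.0, 2.0, 4.0):
    g = sink_gain(source, (x, 0.0, 0.0))
    print(f"distance {x}: gain {g:.2f}")
# → distance 1.0: gain 1.00
# → distance 2.0: gain 0.50
# → distance 4.0: gain 0.25
```

In a full system the mapping would be evaluated per audio block for every source-sink pair, and could be any user-defined function of the scene geometry, not only distance.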
Year | Venue | Keywords
---|---|---
2006 | NIME | proposed framework, sinks process audio, musical instrument, perceptually immersive scene, sonic space, immersive spatial audio performance, sound source, sonic representation, musical application, virtual audio environment, perceptually accurate acoustic representation, auditory display
Field | DocType | ISBN
---|---|---
Musical, Computer science, Human–computer interaction, Immersion (virtual reality), Auditory display, Loudspeaker, Multimedia, Spatialization | Conference | 2-84426-314-3
Citations | PageRank | References
---|---|---
7 | 0.84 | 9
Authors
---
3
Name | Order | Citations | PageRank |
---|---|---|---
Mike Wozniewski | 1 | 20 | 3.67 |
Zack Settel | 2 | 42 | 8.84 |
Jeremy R. Cooperstock | 3 | 449 | 102.09 |