| Abstract |
| --- |
| Immersive virtual environments offer the possibility of natural interaction within a virtual scene that is familiar to users because it is based on everyday activity. The use of such environments for the representation and control of interactive musical systems remains largely unexplored. We propose a paradigm for working with sound and music in a physical context, and develop a framework that allows for the creation of spatialized audio scenes. The framework uses structures called soundNodes, soundConnections, and DSP graphs to organize audio scene content, and offers greater control compared to other representations. 3-D simulation with physical modelling is used to define how audio is processed, and offers a high degree of expressive interaction with sound, particularly when the rules of sound propagation are bent. Sound sources and sinks are modelled within the scene along with the user/listener/performer, creating a navigable 3-D sonic space for sound engineering, musical creation, listening, and performance. |
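The abstract describes its soundNode/soundConnection structures only at a high level. As a rough illustration of the idea, the sketch below models a connection whose gain is derived from the distance between a source node and a sink node, with a `rolloff` exponent standing in for the abstract's 'bent' rules of sound propagation. The class names, fields, and the inverse-distance gain law are assumptions made for illustration, not the authors' actual API.

```python
import math
from dataclasses import dataclass

@dataclass
class SoundNode:
    """A positioned element of the audio scene: a source, a sink,
    or the user/listener/performer (illustrative, not the paper's API)."""
    name: str
    position: tuple[float, float, float]  # (x, y, z) in scene units

@dataclass
class SoundConnection:
    """Directed source-to-sink link whose gain follows scene geometry.
    rolloff = 1.0 approximates the inverse-distance law; other values
    deliberately 'bend' the propagation rules, as the abstract suggests."""
    source: SoundNode
    sink: SoundNode
    rolloff: float = 1.0

    def gain(self) -> float:
        # Euclidean distance between source and sink; clamp below 1.0
        # so gain never exceeds unity close to the source.
        d = math.dist(self.source.position, self.sink.position)
        return 1.0 / max(d, 1.0) ** self.rolloff

# Hypothetical scene: a listener navigating toward a single source.
listener = SoundNode("listener", (0.0, 0.0, 0.0))
violin = SoundNode("violin", (3.0, 0.0, 0.0))
conn = SoundConnection(violin, listener, rolloff=2.0)  # exaggerated falloff
print(f"gain = {conn.gain():.3f}")  # 1 / 3^2 ≈ 0.111
```

In the framework itself, each connection presumably feeds a DSP graph (filters, delays, spatializers) rather than reducing to a single scalar gain; the scalar here merely shows how scene geometry can drive audio processing.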
| Year | Venue | Field |
| --- | --- | --- |
| 2006 | ICMC | Graph, Digital signal processing, Physical context, Physical interaction, Physical modelling, Musical, Computer science, Active listening, Speech recognition, Human–computer interaction, Immersion (virtual reality) |
| DocType | Citations | PageRank |
| --- | --- | --- |
| Conference | 5 | 0.70 |
| References | Authors |
| --- | --- |
| 5 | 3 |
| Name | Order | Citations | PageRank |
| --- | --- | --- | --- |
| Mike Wozniewski | 1 | 20 | 3.67 |
| Zack Settel | 2 | 42 | 8.84 |
| Jeremy R. Cooperstock | 3 | 449 | 102.09 |