Abstract |
---|
This paper presents a new approach to sound composition for soundtrack composers and sound designers. We propose a tool for usable sound manipulation and composition that targets sound variety and expressive rendering of the composition. We first automatically segment audio recordings into atomic grains, which are displayed in our navigation tool according to their signal properties. To perform the synthesis, the user selects one recording as a model for rhythmic pattern and timbre evolution, together with a set of audio grains. Our synthesis system then processes the chosen sound material to create new sound sequences, based on onset detection in the model recording and similarity measurements between the model and the selected grains. With our method, we can create a large variety of sound events, such as those encountered in virtual environments or other training simulations, as well as sound sequences that can be integrated into a music composition. We present a usability-minded interface that allows the user to manipulate and tune sound sequences in a way appropriate for sound design. |
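The pipeline the abstract outlines — detect onsets in a model recording, then place at each onset the grain whose timbre best matches the model — can be sketched in a few lines. This is a minimal illustrative reconstruction, not the authors' implementation: the energy-based onset detector, the spectral-centroid timbre feature, and all thresholds are simplifying assumptions standing in for the paper's actual analysis.

```python
import numpy as np

# Hypothetical sketch of the approach described in the abstract.
# Thresholds, frame sizes, and the centroid feature are illustrative
# assumptions, not the authors' actual method.

def detect_onsets(signal, frame=512, hop=256, threshold=2.0):
    """Crude energy-based onset detector: flag frames whose short-time
    energy rises sharply relative to the median energy change."""
    starts = range(0, len(signal) - frame, hop)
    energy = np.array([np.sum(signal[s:s + frame] ** 2) for s in starts])
    diff = np.diff(energy, prepend=energy[0])    # energy increase per frame
    med = np.median(np.abs(diff)) + 1e-9         # robust scale estimate
    return [i * hop for i, d in enumerate(diff) if d > threshold * med]

def spectral_centroid(grain):
    """Centroid of the magnitude spectrum (in FFT bins): a cheap timbre proxy."""
    spectrum = np.abs(np.fft.rfft(grain))
    bins = np.arange(len(spectrum))
    return np.sum(bins * spectrum) / (np.sum(spectrum) + 1e-9)

def best_grain(target, grains):
    """Pick the grain whose centroid is closest to the target segment's."""
    t = spectral_centroid(target)
    return min(grains, key=lambda g: abs(spectral_centroid(g) - t))

# Toy demo: a model signal that is silent, then a 440 Hz burst near sample 1000.
sr = 8000
model = np.zeros(4000)
model[1000:1500] = np.sin(2 * np.pi * 440 * np.arange(500) / sr)
onsets = detect_onsets(model)

# Two candidate grains; the brighter (high-frequency) one should be chosen
# when the target segment is itself bright.
low = np.sin(2 * np.pi * 200 * np.arange(512) / sr)
high = np.sin(2 * np.pi * 2000 * np.arange(512) / sr)
chosen = best_grain(high, [low, high])
```

A full system would replace the centroid with richer timbre descriptors and concatenate the chosen grains at the detected onset times, but the structure — onset analysis of the model, similarity-driven grain selection — follows the abstract.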
Year | DOI | Venue |
---|---|---|
2010 | 10.1145/1859799.1859820 | Audio Mostly Conference |
Keywords | Field | DocType
---|---|---
tune sound sequence, sound variety, sound material, sound design, usable sound manipulation, audio creation, sound event, sound sequence, sound designer, new sound, music composition, virtual environment, audio analysis | Usable, Sound design, Computer science, Musical composition, Speech recognition, User friendly, Rendering (computer graphics), Timbre | Conference
Citations | PageRank | References
---|---|---
0 | 0.34 | 1
Authors |
---|
5 |
Name | Order | Citations | PageRank |
---|---|---|---|
Cécile Picard | 1 | 3 | 1.09 |
Christian Frisson | 2 | 40 | 10.74 |
Jean Vanderdonckt | 3 | 2917 | 276.94 |
Damien Tardieu | 4 | 23 | 3.79
Thierry Dutoit | 5 | 1006 | 123.84 |