Title
MashtaCycle: On-Stage Improvised Audio Collage by Content-Based Similarity and Gesture Recognition.
Abstract
In this paper we present the outline of a performance in progress. It brings together the skilled musical practice of Belgian audio collagist Gauthier Keyaerts, aka Very Mash'ta, and the realtime, content-based audio browsing capabilities of the AudioCycle and LoopJam applications developed by the remaining authors. The tool derived from AudioCycle, named MashtaCycle, aids the preparation of collections of stem audio loops before performances by extracting content-based features (for instance, timbre) that are used to position these sounds on a 2D visual map. On stage, the tool becomes an embodied instrument: its user interface relies on a depth-sensing camera and is augmented with a public projection of the 2D map. The camera tracks the position of the artist within the sensing area to trigger sounds, similarly to the LoopJam installation. It also senses gestures from the performer, interpreted with the Full Body Interaction (FUBI) framework, allowing sound effects to be applied based on bodily movements. MashtaCycle blurs the boundaries between performance and preparation, navigation and improvisation, installations and concerts.
Year
2013
DOI
10.1007/978-3-319-03892-6_14
Venue
Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering
Keywords
Human-music interaction, audio collage, content-based similarity, gesture recognition, depth cameras, digital audio effects
Field
Improvisation, Musical, Computer science, Gesture, Gesture recognition, Embodied cognition, Speech recognition, Performing arts, User interface, Multimedia, Timbre
DocType
Conference
Volume
124
ISSN
1867-8211
Citations
0
PageRank
0.34
References
9
Authors
9