Abstract |
---|
Music induces different kinds of emotions in listeners. Previous research on music and emotion has shown that music features can be used to classify how a piece of music induces emotions in an individual. We propose a method for collecting electroencephalograph (EEG) data from subjects listening to emotion-inducing music. The EEG data are used to continuously label high-level music features with continuous-valued emotion annotations using the emotion spectrum analysis method. The music features are extracted from MIDI files using a windowing technique. We highlight the results of two emotion models, for stress and for relaxation, constructed using C4.5. Evaluating the models with 10-fold cross-validation gives promising results, with an average relative absolute error of 6.54% at a window length of 38.4 seconds. |
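The windowing step mentioned in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the `(time, pitch)` event format, the chosen features (note density and mean pitch), and the reuse of the 38.4-second window length are all assumptions for the example.

```python
# Hypothetical sketch of extracting windowed features from MIDI-derived note
# events. Events are (onset_time_seconds, pitch) pairs; each fixed-length
# window is summarized by note density and mean pitch. All names and feature
# choices here are illustrative assumptions, not the authors' method.

def window_features(events, window_len=38.4, duration=None):
    """Group (time, pitch) events into windows of `window_len` seconds and
    compute simple per-window features."""
    if duration is None:
        duration = max(t for t, _ in events)
    n_windows = int(duration // window_len) + 1
    windows = [[] for _ in range(n_windows)]
    for t, pitch in events:
        windows[int(t // window_len)].append(pitch)
    features = []
    for pitches in windows:
        if pitches:
            features.append({
                "note_density": len(pitches) / window_len,   # notes per second
                "mean_pitch": sum(pitches) / len(pitches),
            })
        else:
            features.append({"note_density": 0.0, "mean_pitch": None})
    return features

# Example: three notes fall in the first 38.4 s window, one in the second.
events = [(1.0, 60), (10.0, 64), (30.0, 67), (40.0, 72)]
feats = window_features(events, window_len=38.4)
```

Each window's feature vector would then be paired with the EEG-derived emotion annotation for that time span to form one training instance for the C4.5 learner.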
Year | DOI | Venue |
---|---|---|
2013 | 10.20965/jaciii.2013.p0362 | JOURNAL OF ADVANCED COMPUTATIONAL INTELLIGENCE AND INTELLIGENT INFORMATICS |
Keywords | Field | DocType
---|---|---|
music emotion recognition, machine learning, electroencephalograph | Computer science, Speech recognition, Music emotion recognition, Electroencephalography | Journal
Volume | Issue | ISSN
---|---|---|
17 | 3 | 1343-0130
Citations | PageRank | References
---|---|---|
3 | 0.40 | 8
Authors |
---|
4 |
Name | Order | Citations | PageRank |
---|---|---|---|
Rafael Cabredo | 1 | 14 | 3.43 |
Roberto S. Legaspi | 2 | 42 | 8.96 |
Paul Salvador Inventado | 3 | 16 | 7.35 |
Masayuki Numao | 4 | 390 | 89.56 |