Abstract |
---|
In this paper, we propose a method for expressing emotion in synthetic speech based on musical knowledge. First, we define the pitch of each phoneme in the speech using an accent dictionary. Next, we adjust the parameters of the synthetic speech to express a given emotion: first the tempo, volume, and pitch of the speech as a whole; then the pitch of each syllable and the connections between syllables; and finally the pitch of each phoneme according to a chord-scale. These adjustments are guided by findings from empirical research on emotional expression in music. To evaluate this approach, we conducted an experiment. The results showed a difference between the emotions estimated by the participants and the emotions our approach attempted to express. However, the emotions in speech could be divided into positive and negative, which suggests that our approach has the potential to express emotion in synthetic speech. |
Year | DOI | Venue |
---|---|---|
2011 | 10.3233/978-1-60750-831-1-305 | Frontiers in Artificial Intelligence and Applications |
Keywords | Field | DocType
---|---|---
Speech Synthesis, Emotion Expression, Chord-Scales, Musical Knowledge | Systems engineering, Cognitive science, Music, Computer science | Conference

Volume | ISSN | Citations
---|---|---
231 | 0922-6389 | 0

PageRank | References | Authors
---|---|---
0.34 | 0 | 4
Name | Order | Citations | PageRank |
---|---|---|---|
Masaki Kurematsu | 1 | 16 | 4.74 |
Hiroki Chiba | 2 | 1 | 0.77 |
Hamido Fujita | 3 | 2644 | 185.03 |
Jun Hakura | 4 | 85 | 15.06 |