Abstract |
---|
To make communication between users and machines more comfortable, we focus on facial expressions and automatically classify them into four expression candidates: "joy," "anger," "sadness," and "surprise." The classification uses features that correspond to expression-motion patterns, and voice data is then output based on the classification results. When outputting the voice data, we take the uncertainty of the classification into account by selecting the first and second expression candidates from the classification results. To realize interactive communication between users and machines, information on both candidates is used to access a voice database that contains voice data corresponding to emotions. |
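The abstract's pipeline (rank the four expressions, keep the top two candidates, and use both to retrieve emotional voice data) can be illustrated with a minimal sketch. This is not the authors' implementation; the scores, the `VOICE_DB` mapping, and all function names are hypothetical placeholders.

```python
# Hedged sketch of the candidate-selection step described in the abstract.
# All names and data here are hypothetical, not from the paper.

EXPRESSIONS = ["joy", "anger", "sadness", "surprise"]

# Hypothetical voice database: emotion -> voice-data identifier.
VOICE_DB = {e: f"voice_{e}.wav" for e in EXPRESSIONS}

def top_two_candidates(scores):
    """Return the first and second expression candidates by score."""
    ranked = sorted(scores, key=scores.get, reverse=True)
    return ranked[0], ranked[1]

def select_voice(scores):
    """Use both candidates to hedge against misclassification."""
    first, second = top_two_candidates(scores)
    return {"first": VOICE_DB[first], "second": VOICE_DB[second]}

# Example classifier output (made up for illustration).
scores = {"joy": 0.52, "anger": 0.08, "sadness": 0.10, "surprise": 0.30}
print(select_voice(scores))
# → {'first': 'voice_joy.wav', 'second': 'voice_surprise.wav'}
```

Keeping the runner-up candidate lets the system fall back to a second emotional voice when the top classification is unreliable, which is the insufficiency the abstract mentions.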
Year | DOI | Venue |
---|---|---|
2005 | 10.20965/jaciii.2005.p0637 | Journal of Advanced Computational Intelligence and Intelligent Informatics |
Keywords | Field | DocType
---|---|---|
interactive system, facial expression recognition, feature extraction, natural language | Facial expression recognition, Pattern recognition, Computer science, Feature extraction, Natural language, Artificial intelligence | Journal |
Volume | Issue | ISSN
---|---|---|
9 | 6 | 1343-0130 |
Citations | PageRank | References
---|---|---|
0 | 0.34 | 0 |
Authors |
---|
3 |
Name | Order | Citations | PageRank |
---|---|---|---|
Yuyi Shang | 1 | 0 | 0.34 |
Mie Sato | 2 | 81 | 12.89 |
Masao Kasuga | 3 | 6 | 4.67 |