Abstract |
---|
Capturing visual social cues in conversations is a difficult task for visually impaired people. Their inability to see the facial expressions and body postures of their conversation partners can lead them to misunderstand or misjudge social situations. This paper presents a system that infers social cues from streaming video recorded by a pair of imaging glasses and feeds the inferred cues back to the user. We have implemented a prototype and evaluated the system's effectiveness and usefulness in real-world conversations. |
Year | DOI | Venue |
---|---|---|
2016 | 10.1145/2968219.2968260 | UbiComp Adjunct |
Keywords | Field | DocType
---|---|---
Affective Computing, Imaging glasses, Emotion Recognition | Conversation, Social cue, Emotion recognition, Computer science, Human–computer interaction, Facial expression, Affective computing, Multimedia | Conference
Citations | PageRank | References
---|---|---
1 | 0.35 | 4
Authors |
---|
4 |
Name | Order | Citations | PageRank |
---|---|---|---|
Lauren Murray | 1 | 1 | 0.35 |
Philip Hands | 2 | 1 | 0.35 |
Ross Goucher | 3 | 1 | 0.35 |
Juan Ye | 4 | 125 | 9.82 |