Abstract
---
Personality is a vital factor in understanding acceptance of, trust in, or emotional attachment to a robot. From R2D2 to WALL-E, there is a rich history of robots in film using semantic-free utterances (SFUs), sounds such as squeaks, clicks and tones, as audio gestures to communicate and to convey emotion and personality. However, unlike in a film, where an actor can pretend to understand non-verbal noises, in practical applications synthesised speech is often used to communicate information, intention and status. Here we present a pilot study exploring the impact of mixing speech synthesis with SFUs on perceived personality. In a listening test, subjects were presented with short synthesised utterances, with and without SFUs, together with a picture of the agent as either a tabletop social robot or a young man. Both the picture and the SFUs had an impact on perceived personality. However, no interaction was seen between SFUs and picture, suggesting that listeners failed to fuse the perceptions of the two audio elements, perceiving the SFUs as background noise rather than as audio gestures generated by the agent.
Year | DOI | Venue
---|---|---
2020 | 10.1145/3371382.3378330 | HRI '20: ACM/IEEE International Conference on Human-Robot Interaction, Cambridge, United Kingdom, March 2020
Keywords | DocType | ISSN
---|---|---
speech synthesis, social robots, personification | Conference | 2167-2121

ISBN | Citations | PageRank
---|---|---
978-1-4503-7057-8 | 0 | 0.34

References | Authors
---|---
0 | 3
Name | Order | Citations | PageRank |
---|---|---|---
Matthew P. Aylett | 1 | 936 | 237.25 |
Yolanda Vazquez-Alvarez | 2 | 115 | 11.67 |
Skaiste Butkute | 3 | 0 | 0.34 |