Abstract
---
Acoustic communication between humans and robots via speech relies on first processing verbal utterances into text. In sophisticated scenarios, the semantic meaning of the utterances is extracted so that responses are given in context (e.g., conversational agents); in simpler scenarios, the utterances can be mapped to direct responses or actionable tasks the agent can perform. In its basic form, speech is an acoustic signal that encodes linguistic messages, words, through a set of acoustic properties such as frequency range, harmonic-to-noise ratio, and duration. Nonetheless, language is said to be part of an individual's identity and an integral part of our social dynamics; thus, developing and understanding a language has an intrinsic impact. We investigate a communication strategy in which unique, covert, tonal languages are dynamically generated for a given robot-human pair. We explore the application of such languages in different scenarios, social and tactical, and the learning effort required from the human, and we discuss a near future in which robot agents can autonomously generate such tonal languages, which may create stronger bonds and robot-human relationships.
Year | DOI | Venue
---|---|---
2019 | 10.1109/HRI.2019.8673114 | 2019 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI)

Keywords | Field | DocType
---|---|---
Robots, Acoustics, Cryptography, Linguistics, Dogs, Human-robot interaction | ENCODE, Cipher, Cryptography, Computer science, Covert, Human–computer interaction, Social dynamics, Robot, Human–robot interaction, Instrumental and intrinsic value | Conference

ISSN | ISBN | Citations
---|---|---
2167-2121 | 978-1-5386-8555-6 | 0

PageRank | References | Authors
---|---|---
0.34 | 0 | 4
Name | Order | Citations | PageRank
---|---|---|---
Irvin Steve Cardenas | 1 | 1 | 3.05 |
HyunJae Jeong | 2 | 0 | 0.34 |
Jong-Hoon Kim | 3 | 1 | 3.73 |
Yongseok Chi | 4 | 0 | 0.34 |