Title
Robots That Take Advantage Of Human Trust
Abstract
Humans often assume that robots are rational. We believe robots take optimal actions given their objective; hence, when we are uncertain about what the robot's objective is, we interpret the robot's actions as optimal with respect to our estimate of its objective. This approach makes sense when robots straightforwardly optimize their objective, and it enables humans to learn what the robot is trying to achieve. However, our insight is that, when robots are aware that humans learn by trusting that the robot's actions are rational, intelligent robots do not act as the human expects; instead, they take advantage of the human's trust and exploit it to more efficiently optimize their own objective. In this paper, we formally model instances of human-robot interaction (HRI) where the human does not know the robot's objective using a two-player game. We formulate different ways in which the robot can model the uncertain human, and compare solutions of this game when the robot has conservative, optimistic, rational, and trusting human models. In an offline linear-quadratic case study and a real-time user study, we show that trusting human models can naturally lead to communicative robot behavior, which influences end-users and increases their involvement.
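To make the abstract's idea concrete, here is a minimal hypothetical sketch (not the paper's actual formulation): a 1-D interaction where the human holds a belief over the robot's goal and updates it by trusting that the robot's actions are Boltzmann-rational. All names, dynamics, and constants (`GOALS`, `BETA`, the gains) are illustrative assumptions; the sketch only shows that a robot aware of this trusting model can act "communicatively" (over-acting toward its goal) to shift the human's belief faster than a purely greedy robot.

```python
# Hypothetical 1-D sketch of a trusting human model (illustrative assumptions,
# not the paper's formulation).
import numpy as np

GOALS = np.array([-1.0, 1.0])   # candidate robot goals (assumed)
BETA = 5.0                       # rationality coefficient in the Boltzmann model

def robot_greedy_action(x, theta, gain=0.5):
    """Action that moves the state straight toward the robot's true goal."""
    return gain * (theta - x)

def belief_update(b, x, u):
    """Bayesian update: the human trusts that u is (noisily) rational for each goal."""
    # likelihood of u under each goal: exp(-beta * resulting distance to that goal)
    lik = np.exp(-BETA * np.abs(x + u - GOALS))
    post = b * lik
    return post / post.sum()

def simulate(theta=1.0, exaggerate=1.0, steps=10):
    """Roll out the interaction; exaggerate > 1 over-acts to shift belief faster."""
    x, b = 0.0, np.array([0.5, 0.5])       # state and uniform prior over goals
    for _ in range(steps):
        u = exaggerate * robot_greedy_action(x, theta)
        b = belief_update(b, x, u)
        u_h = 0.2 * (b @ GOALS - x)        # human nudges toward its goal estimate
        x = x + u + u_h
    return x, b[GOALS == theta][0]

# Compare the trusting human's final belief in the robot's true goal under a
# greedy robot versus a "communicative" robot that exaggerates its actions.
_, b_greedy = simulate(exaggerate=1.0)
_, b_comm = simulate(exaggerate=1.5)
```

In this toy setting the exaggerated policy drives the human's belief toward the robot's true goal at least as quickly as the greedy one, echoing the abstract's claim that trusting human models naturally produce communicative robot behavior.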
Year
2019
DOI
10.1109/IROS40897.2019.8968564
Venue
2019 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS)
Field
Computer science, Control engineering, Exploit, Human–computer interaction, Behavior-based robotics, Robot
DocType
Conference
ISSN
2153-0858
Citations
0
PageRank
0.34
References
0
Authors
2
Name | Order | Citations | PageRank
Dylan P. Losey | 1 | 52 | 10.77
Dorsa Sadigh | 2 | 175 | 26.40