Title
When Humans Aren't Optimal: Robots that Collaborate with Risk-Aware Humans
Abstract
In order to collaborate safely and efficiently, robots need to anticipate how their human partners will behave. Some of today's robots model humans as if they were also robots, and assume users are always optimal. Other robots account for human limitations, and relax this assumption so that the human is noisily rational. Both of these models make sense when the human receives deterministic rewards: i.e., gaining either $100 or $130 with certainty. But in real-world scenarios, rewards are rarely deterministic. Instead, we must make choices subject to risk and uncertainty, and in these settings humans exhibit a cognitive bias towards suboptimal behavior. For example, when deciding between gaining $100 with certainty or $130 only 80% of the time, people tend to make the risk-averse choice, even though it leads to a lower expected gain! In this paper, we adopt a well-known Risk-Aware human model from behavioral economics called Cumulative Prospect Theory and enable robots to leverage this model during human-robot interaction (HRI). In our user studies, we offer supporting evidence that the Risk-Aware model more accurately predicts suboptimal human behavior. We find that this increased modeling accuracy results in safer and more efficient human-robot collaboration. Overall, we extend existing rational human models so that collaborative robots can anticipate and plan around suboptimal human behavior during HRI.
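The risk-averse example in the abstract can be sketched with Cumulative Prospect Theory's value and probability-weighting functions. This is a minimal illustration using Tversky and Kahneman's 1992 functional forms; the parameter values (α ≈ 0.88, γ ≈ 0.61) are standard estimates from that literature, assumed here for illustration rather than taken from this paper:

```python
# Minimal sketch of Cumulative Prospect Theory (CPT) for a single risky
# gain, using Tversky & Kahneman's 1992 functional forms. Parameter
# values are standard literature estimates, assumed for illustration.

ALPHA = 0.88  # curvature of the value function for gains
GAMMA = 0.61  # curvature of the probability-weighting function

def value(x):
    """Subjective value of a gain x (concave: diminishing sensitivity)."""
    return x ** ALPHA

def weight(p):
    """Inverse-S weighting: overweights small p, underweights large p."""
    return p ** GAMMA / (p ** GAMMA + (1 - p) ** GAMMA) ** (1 / GAMMA)

def cpt_utility(x, p):
    """CPT utility of gaining x with probability p (and 0 otherwise)."""
    return weight(p) * value(x)

def expected_gain(x, p):
    """Risk-neutral baseline: the expected monetary gain."""
    return p * x

# $100 with certainty vs. $130 with probability 0.8:
safe = cpt_utility(100, 1.0)   # ~57.5
risky = cpt_utility(130, 0.8)  # ~44.0
print(safe > risky)                                        # True
print(expected_gain(130, 0.8) > expected_gain(100, 1.0))   # True
```

The two prints show the bias the abstract describes: expected gain favors the gamble ($104 vs. $100), yet the CPT-weighted utility of the sure $100 is higher, so a Risk-Aware model predicts the risk-averse choice that a rational or noisily-rational model would miss.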
Year
2020
DOI
10.1145/3319502.3374832
Venue
HRI
Keywords
Economics, Computational modeling, Human-robot interaction, Collaboration, Predictive models, Robot sensing systems, Probabilistic logic
DocType
Conference
ISSN
2167-2121
ISBN
978-1-4503-6746-2
Citations
2
PageRank
0.40
References
17
Authors
6
Name | Order | Citations | PageRank
Minae Kwon | 1 | 2 | 0.74
Erdem Biyik | 2 | 7 | 6.35
Aditi Talati | 3 | 2 | 0.40
Karan Bhasin | 4 | 2 | 0.40
Dylan P. Losey | 5 | 52 | 10.77
Dorsa Sadigh | 6 | 175 | 26.40