Title
Speeding up Inference with User Simulators through Policy Modulation
Abstract
The simulation of user behavior with deep reinforcement learning agents has shown some recent success. However, the inverse problem, that is, inferring the free parameters of the simulator from observed user behaviors, remains challenging to solve. This is because optimizing a new action policy for the simulated agent, which is required whenever the model parameters change, is computationally impractical. In this study, we introduce a network modulation technique that obtains a generalized policy that immediately adapts to the given model parameters. Further, we demonstrate that the proposed technique improves the efficiency of user simulator-based inference by eliminating the need to obtain a new action policy for novel model parameters. We validated our approach using the latest user simulator for point-and-click behavior. Consequently, we succeeded in inferring the user's cognitive parameters and intrinsic reward settings using less than 1/1000 of the computation required by existing methods.
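The abstract describes a policy that is conditioned on the simulator's free parameters through network modulation, so that a single trained policy can be queried with arbitrary candidate parameters during inference instead of being re-optimized. The sketch below illustrates one plausible form of such conditioning, a FiLM-style scale-and-shift modulation of hidden features; the layer sizes, parameter names, and modulation scheme are illustrative assumptions, not the paper's actual architecture.

```python
# Minimal sketch (PyTorch) of a parameter-modulated policy: the simulator's
# free parameters (e.g., cognitive parameters and reward weights) produce a
# per-feature scale and shift applied to the observation features, so the
# same network adapts its behavior to new parameter values without retraining.
import torch
import torch.nn as nn


class ModulatedPolicy(nn.Module):
    def __init__(self, obs_dim: int, param_dim: int, action_dim: int, hidden: int = 64):
        super().__init__()
        # Encoder for the simulated user's observation (e.g., cursor/target state).
        self.obs_encoder = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU())
        # Modulation network: maps model parameters to scale (gamma) and shift (beta).
        self.modulator = nn.Linear(param_dim, 2 * hidden)
        self.head = nn.Sequential(nn.ReLU(), nn.Linear(hidden, action_dim))

    def forward(self, obs: torch.Tensor, params: torch.Tensor) -> torch.Tensor:
        h = self.obs_encoder(obs)
        gamma, beta = self.modulator(params).chunk(2, dim=-1)
        return self.head(gamma * h + beta)  # parameter-conditioned action output


# Usage: during inference, the same trained policy is evaluated under different
# candidate parameter vectors; no per-candidate policy optimization is needed.
policy = ModulatedPolicy(obs_dim=8, param_dim=4, action_dim=2)
obs = torch.randn(1, 8)
candidate_params = torch.randn(1, 4)  # hypothetical cognitive/reward parameters
action = policy(obs, candidate_params)
```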
Year
2022
DOI
10.1145/3491102.3502023
Venue
Conference on Human Factors in Computing Systems
DocType
Conference
Citations
0
PageRank
0.34
References
0
Authors
6
Name            Order  Citations  PageRank
Hee-Seung Moon  1      0          0.34
Seungwon Do     2      0          0.34
Wonjae Kim      3      0          0.34
Jiwon Seo       4      0          0.34
Minsuk Chang    5      5          5.16
Byungjoo Lee    6      0          0.34