Title
Human motor learning is robust to control-dependent noise
Abstract
Noise is ubiquitous in sensorimotor interactions and contaminates the information provided to the central nervous system (CNS) for motor learning. An interesting question is how the CNS manages motor learning with such imprecise information. Integrating ideas from reinforcement learning and adaptive optimal control, this paper develops a novel computational mechanism that explains the robustness of human motor learning to imprecise information caused by the control-dependent noise inherent in sensorimotor systems. Starting from an initial admissible control policy, in each learning trial the mechanism collects noisy sensory data (corrupted by the control-dependent noise), uses the data to form an imprecise evaluation of the current policy's performance, and then constructs an updated policy from that evaluation. As the number of learning trials increases, the generated policies provably converge to a (potentially small) neighborhood of the optimal policy under mild conditions, despite the imprecision in the learning process. The mechanism synthesizes policies directly from sensory data, without identifying an internal forward model. Our preliminary computational results on two classic arm-reaching tasks are in line with experimental observations reported in the literature. The model-free control principle proposed in this paper sheds more light on the inherent robustness of human sensorimotor systems to imprecise information, especially control-dependent noise, in the CNS.
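The abstract describes a data-driven policy-iteration loop: evaluate the current policy from noisy, control-dependent-noise-corrupted data, then improve the policy from that imprecise evaluation, without identifying a model. The Python sketch below is only an illustration of that general idea, not the authors' algorithm: it assumes a hypothetical discounted linear-quadratic plant whose process noise scales with the control input, evaluates the current policy by a least-squares fit of a quadratic Q-function to rollout data, and improves the policy from the estimated Q-function. All matrices, gains, horizons, and the noise model are illustrative assumptions.

```python
# Illustrative sketch only (NOT the authors' algorithm): data-driven policy
# iteration on an assumed discounted linear-quadratic problem whose process
# noise scales with the control input (control-dependent noise).
import numpy as np

rng = np.random.default_rng(0)
A = np.array([[0.9, 0.1],
              [0.0, 0.9]])               # assumed plant (unknown to the learner)
B = np.array([[0.0],
              [0.1]])
Qc, Rc = np.eye(2), np.array([[0.1]])    # quadratic state and control costs
gamma, sigma_u = 0.95, 0.05              # discount factor, noise-vs-control gain
n, m = 2, 1

def step(x, u):
    """Plant step; the noise magnitude grows with the control effort."""
    w = sigma_u * np.linalg.norm(u) * rng.standard_normal(n)
    return A @ x + B @ u + w

def phi(x, u):
    """Quadratic features: upper triangle of z z^T with z = [x; u]."""
    z = np.concatenate([x, u])
    return np.outer(z, z)[np.triu_indices(n + m)]

def unpack_H(theta):
    """Recover the symmetric Q-function matrix H from the fitted weights."""
    H = np.zeros((n + m, n + m))
    H[np.triu_indices(n + m)] = theta
    return (H + H.T) / 2

K = np.zeros((m, n))                     # initial admissible policy u = -K x
for it in range(10):
    # 1) Collect noisy rollout data under the current policy plus exploration.
    Phi, Phi_next, cost = [], [], []
    x = rng.standard_normal(n)
    for t in range(400):
        u = -K @ x + 0.5 * rng.standard_normal(m)
        x_next = step(x, u)
        Phi.append(phi(x, u))
        Phi_next.append(phi(x_next, -K @ x_next))
        cost.append(x @ Qc @ x + u @ Rc @ u)
        # Guard against blow-up if an intermediate policy estimate is poor.
        x = x_next if np.linalg.norm(x_next) < 1e3 else rng.standard_normal(n)
    Phi, Phi_next = np.array(Phi), np.array(Phi_next)
    # 2) Imprecise policy evaluation: least-squares fit of the quadratic
    #    Q-function from the noisy data (no model identification).
    theta, *_ = np.linalg.lstsq(Phi - gamma * Phi_next, np.array(cost), rcond=None)
    H = unpack_H(theta)
    # 3) Policy improvement from the estimated Q-function.
    K = np.linalg.solve(H[n:, n:], H[n:, :n])
    print(f"trial {it}: K = {K.ravel()}")
```

In this toy setting the gain K typically settles near the discounted LQR solution even though every evaluation step uses data corrupted by control-dependent noise, which is the qualitative behavior the abstract describes.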
Year
2022
DOI
10.1007/s00422-022-00922-z
Venue
Biological Cybernetics
Keywords
Robustness, Reinforcement learning, Policy iteration, Sensorimotor control, Arm reaching
DocType
Journal
Volume
116
Issue
3
ISSN
1432-0770
Citations
0
PageRank
0.34
References
19
Authors
3
Name, Order, Citations, PageRank
Bo Pang, 1, 579545, 1.00
Leilei Cui, 2, 0, 0.34
Zhong-Ping Jiang, 3, 0, 0.68