Title
Learning To Control Neurons Using Aggregated Measurements
Abstract
Controlling a population of neurons with one or a few control signals is challenging due to the severe underactuation of the control system and the inherent nonlinear dynamics of the neurons, which are typically unknown. Control strategies that incorporate deep neural networks and machine learning techniques use data directly to learn a sequence of control actions for targeted manipulation of a population of neurons. However, these learning strategies inherently assume that perfect feedback data from every neuron are available at each sampling instant, and they do not scale gracefully as the number of neurons in the population increases; as a result, the learning models must be retrained whenever such a change occurs. In this work, we propose a learning strategy that designs a control sequence from population-level aggregated measurements, incorporating reinforcement learning techniques to find a (bounded, piecewise-constant) control policy that fulfills the given control task. We demonstrate the feasibility of the proposed approach through numerical experiments on a finite population of nonlinear dynamical systems and on canonical phase models widely used in neuroscience.
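The setting the abstract describes can be illustrated with a minimal sketch: a heterogeneous population of phase oscillators driven by a single common input, observed only through an aggregated population-level measurement (the Kuramoto order parameter), with a bounded, piecewise-constant control sequence selected to synchronize the population. All modeling choices below are assumptions for illustration, not the paper's method: the sinusoidal phase-response curve `Z(θ) = sin(θ)`, the synchronization objective, and the naive random policy search standing in for the paper's reinforcement learning procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

def rollout(u_seq, theta0, omega, dt=0.01, substeps=100, u_max=2.0):
    """Integrate N phase oscillators dθ_i/dt = ω_i + Z(θ_i) u(t) under a
    bounded, piecewise-constant control sequence (forward-Euler steps).
    Z(θ) = sin(θ) is an illustrative phase-response curve (an assumption)."""
    theta = theta0.copy()
    for u in np.clip(u_seq, -u_max, u_max):   # one constant segment per entry
        for _ in range(substeps):
            theta += dt * (omega + np.sin(theta) * u)
    return theta

def order_parameter(theta):
    """Aggregated population-level measurement r = |mean(exp(i*θ))|:
    r ≈ 1 when phases are synchronized, r ≈ 0 when they are spread out."""
    return float(np.abs(np.exp(1j * theta).mean()))

# A small heterogeneous population with random initial phases.
N, horizon = 50, 10
omega = rng.normal(1.0, 0.05, N)            # natural frequencies
theta0 = rng.uniform(0.0, 2.0 * np.pi, N)   # initial phases

# Naive random policy search (a stand-in for the paper's RL method):
# sample bounded piecewise-constant control sequences and keep the best
# one, scoring each rollout using ONLY the aggregated measurement.
best_u, best_r = None, -1.0
for _ in range(200):
    u_seq = rng.uniform(-2.0, 2.0, horizon)
    r = order_parameter(rollout(u_seq, theta0, omega))
    if r > best_r:
        best_u, best_r = u_seq, r

r_free = order_parameter(rollout(np.zeros(horizon), theta0, omega))
print(f"order parameter: uncontrolled {r_free:.3f}, controlled {best_r:.3f}")
```

The key point the sketch mirrors is the information constraint: the search never sees individual neuron states, only the scalar aggregate `r`, yet a single bounded common input can still steer the population because the control enters through the state-dependent sensitivity `Z(θ)`.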
Year
2020
DOI
10.23919/ACC45564.2020.9147426
Venue
2020 AMERICAN CONTROL CONFERENCE (ACC)
DocType
Conference
ISSN
0743-1619
Citations
0
PageRank
0.34
References
0
Authors
4
Name               Order  Citations  PageRank
Yao-Chi Yu         1      0          0.34
Vignesh Narayanan  2      10         5.19
ShiNung Ching      3      321        6.02
Jr-Shin Li         4      1121       9.45