Title
Modeling Neural Dynamics During Speech Production Using A State Space Variational Autoencoder
Abstract
Characterizing the neural encoding of behavior remains a challenging task in many research areas, due in part to the complex and noisy spatiotemporal dynamics of evoked brain activity. An important aspect of modeling these neural encodings involves separating robust, behaviorally relevant signals from background activity, which often contains signals from irrelevant brain processes and decaying information from previous behavioral events. To achieve this separation, we develop a two-branch State Space Variational AutoEncoder (SSVAE) model that individually describes the instantaneous evoked foreground signals and the context-dependent background signals. We model the spontaneous speech-evoked brain dynamics using smoothed Gaussian mixture models. By applying the proposed SSVAE model to track ECoG dynamics in one participant over multiple hours, we find that the model predicts speech-related dynamics more accurately than other latent factor inference algorithms. Our results demonstrate that separately modeling the instantaneous speech-evoked and slow context-dependent brain dynamics can enhance tracking performance, which has important implications for the development of advanced neural encoding and decoding models in various neuroscience sub-disciplines.
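The abstract does not include implementation details, but the two-branch idea can be illustrated with a minimal sketch. The PyTorch code below is an assumption-based illustration, not the authors' actual architecture: the module names, latent sizes, and the simple linear state-space transition used as a smoothness prior on the background branch are all hypothetical. One encoder branch infers latents for the instantaneous speech-evoked foreground, a second infers latents for the slowly varying background, and a shared decoder reconstructs the ECoG features from the concatenated latents.

```python
# Hypothetical sketch of a two-branch VAE separating fast "foreground" from
# slow "background" latent dynamics; all names and sizes are illustrative.
import torch
import torch.nn as nn

class TwoBranchVAE(nn.Module):
    def __init__(self, n_channels=64, z_fg=16, z_bg=8):
        super().__init__()
        # Foreground branch: instantaneous speech-evoked activity.
        self.enc_fg = nn.Sequential(nn.Linear(n_channels, 128), nn.ReLU())
        self.fg_mu, self.fg_logvar = nn.Linear(128, z_fg), nn.Linear(128, z_fg)
        # Background branch: slow, context-dependent activity.
        self.enc_bg = nn.Sequential(nn.Linear(n_channels, 64), nn.ReLU())
        self.bg_mu, self.bg_logvar = nn.Linear(64, z_bg), nn.Linear(64, z_bg)
        # Linear state-space transition acting as a smoothness prior on background latents.
        self.bg_transition = nn.Linear(z_bg, z_bg)
        # Shared decoder reconstructs the neural features from both latent sets.
        self.dec = nn.Sequential(nn.Linear(z_fg + z_bg, 128), nn.ReLU(),
                                 nn.Linear(128, n_channels))

    @staticmethod
    def reparameterize(mu, logvar):
        # Standard VAE reparameterization trick.
        return mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)

    def forward(self, x):
        # x: (batch, time, channels) of band-limited ECoG features.
        h_fg, h_bg = self.enc_fg(x), self.enc_bg(x)
        z_fg = self.reparameterize(self.fg_mu(h_fg), self.fg_logvar(h_fg))
        z_bg = self.reparameterize(self.bg_mu(h_bg), self.bg_logvar(h_bg))
        recon = self.dec(torch.cat([z_fg, z_bg], dim=-1))
        # Penalize background latents that deviate from the learned slow transition.
        pred_bg = self.bg_transition(z_bg[:, :-1])
        smooth_loss = ((z_bg[:, 1:] - pred_bg) ** 2).mean()
        return recon, smooth_loss
```

In a full training loop one would combine the usual VAE reconstruction and KL terms with the smoothness penalty returned above; the abstract does not specify the loss weighting or the exact form of the state-space prior, so those choices are left open here.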
Year
2019
DOI
10.1109/ner.2019.8716931
Venue
2019 9TH INTERNATIONAL IEEE/EMBS CONFERENCE ON NEURAL ENGINEERING (NER)
Field
Autoencoder, Pattern recognition, Computer science, Inference, Brain activity and meditation, Artificial intelligence, Decoding methods, Speech production, State space, Mixture model, Machine learning, Encoding (memory)
DocType
Journal
Volume
abs/1901.04024
ISSN
1948-3546
Citations
0
PageRank
0.34
References
4
Authors
3
Name            Order  Citations  PageRank
Sun Pengfei     1      27         15.73
David A. Moses  2      0          0.34
Edward Chang    3      1          2.71