Title
Group sparse optimization for learning predictive state representations.
Abstract
Predictive state representations (PSRs) are a widely used approach by which agents summarize the history generated during their interaction with a dynamical environment and use that summary to predict future observations. Existing work has shown the benefits of PSRs for modelling partially observable dynamical systems. A key issue in PSRs is discovering the set of tests used to represent states, called the core tests; in practice, however, no very efficient technique exists for finding the core tests in large and complex problems. In this paper, we formulate the discovery of the set of core tests as an optimization problem and exploit the group sparsity of the decision-making matrix to solve it. The PSR parameters are obtained simultaneously, so the model of the underlying system can be built immediately. The new learning approach does not require specifying the number of core tests in advance. Furthermore, the embedded optimization method for solving the resulting group Lasso problem, the alternating direction method of multipliers (ADMM), achieves global convergence. We conduct experiments on three problem domains, including one extremely large domain, and show the promising performance of the new approach.
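The abstract describes solving a group Lasso problem with ADMM so that entire groups of coefficients (corresponding to candidate tests) are zeroed out at once. The sketch below is a generic group-Lasso ADMM solver, not the paper's exact formulation; the data, group structure, and parameter values are illustrative assumptions.

```python
import numpy as np

def group_lasso_admm(A, b, groups, lam=0.1, rho=1.0, n_iter=200):
    """Solve min_x 0.5*||Ax - b||^2 + lam * sum_g ||x_g||_2 via ADMM."""
    n = A.shape[1]
    x = np.zeros(n)
    z = np.zeros(n)  # sparse copy of x
    u = np.zeros(n)  # scaled dual variable
    AtA = A.T @ A
    Atb = A.T @ b
    # Cache the Cholesky factor of (A^T A + rho*I) for the x-updates.
    L = np.linalg.cholesky(AtA + rho * np.eye(n))
    for _ in range(n_iter):
        # x-update: ridge-regularized least squares.
        rhs = Atb + rho * (z - u)
        x = np.linalg.solve(L.T, np.linalg.solve(L, rhs))
        # z-update: block soft-thresholding, one block per group.
        v = x + u
        z = np.zeros(n)
        for g in groups:
            norm = np.linalg.norm(v[g])
            if norm > lam / rho:
                z[g] = (1.0 - lam / (rho * norm)) * v[g]
        # u-update: dual ascent on the consensus constraint x = z.
        u = u + x - z
    return z  # return the sparse iterate

# Toy example: 6 features in 3 groups; only group 0 is truly active.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 6))
x_true = np.array([1.0, -2.0, 0.0, 0.0, 0.0, 0.0])
b = A @ x_true
groups = [[0, 1], [2, 3], [4, 5]]
x_hat = group_lasso_admm(A, b, groups, lam=1.0)
print(np.round(x_hat, 2))  # inactive groups are zeroed as whole blocks
```

The block soft-thresholding step is what produces the group-level sparsity: a group is either kept (and shrunk) or set to zero in its entirety, which matches the idea of selecting or discarding whole candidate tests.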
Year: 2017
DOI: 10.1016/j.ins.2017.05.023
Venue: Information Sciences
Keywords: Predictive state representations, Group sparse, Alternating direction method of multipliers
Field: Convergence (routing), Mathematical optimization, Observable, Problem domain, Group lasso, Matrix (mathematics), Computer science, Exploit, Dynamical systems theory, Artificial intelligence, Optimization problem, Machine learning
DocType: Journal
Volume: 412
ISSN: 0020-0255
Citations: 0
PageRank: 0.34
References: 16
Authors: 5
Name          Order  Citations  PageRank
Yifeng Zeng   1      11         2.34
Biyang Ma     2      0          2.70
Bilian Chen   3      48         6.73
Jing Tang     4      163        16.75
Mengda He     5      10         3.59