Title
Entropic determinants of massive matrices
Abstract
The ability of many powerful machine learning algorithms to deal with large data sets without compromise is often hampered by computationally expensive linear algebra tasks, of which calculating the log determinant is a canonical example. In this paper we demonstrate the optimality of Maximum Entropy methods in approximating such calculations. We prove the equivalence between mean value constraints and sample expectations in the big data limit, that covariance matrix eigenvalue distributions can be completely defined by moment information, and that reducing the self-entropy of a maximum entropy proposal distribution, achieved by adding more moments, reduces the KL divergence between the proposal and the true eigenvalue distribution. We empirically verify our results on a variety of SuiteSparse matrices and establish best practices.
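A minimal numerical sketch of the quantities the abstract refers to: the log determinant of a positive definite matrix equals the sum of the logs of its eigenvalues, and the spectral moments E[lambda^k] = tr(A^k)/n that serve as constraints for a maximum entropy density can be estimated stochastically without a full eigendecomposition. This is an illustrative sketch under assumed choices (random SPD test matrix, Rademacher probe vectors, Hutchinson-style trace estimation), not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Build a random symmetric positive definite matrix A
# (a stand-in for a covariance matrix; sizes are illustrative).
n = 200
B = rng.standard_normal((n, n)) / np.sqrt(n)
A = B @ B.T + np.eye(n)  # shift by I to guarantee positive definiteness

# Exact log determinant: log det A = sum_i log lambda_i.
eigvals = np.linalg.eigvalsh(A)
logdet_exact = np.sum(np.log(eigvals))

# Hutchinson-style stochastic estimates of the spectral moments
# E[lambda^k] = tr(A^k) / n, using Rademacher probe vectors z,
# since E[z^T A^k z] = tr(A^k).
num_probes, num_moments = 30, 4
moments = np.zeros(num_moments)
for _ in range(num_probes):
    z = rng.choice([-1.0, 1.0], size=n)
    v = z.copy()
    for k in range(num_moments):
        v = A @ v                                # v = A^{k+1} z
        moments[k] += (z @ v) / (n * num_probes)

# Reference values computed from the exact spectrum.
exact_moments = np.array([np.sum(eigvals ** (k + 1)) / n
                          for k in range(num_moments)])
```

Only matrix-vector products are needed for the moment estimates, which is what makes this approach attractive at scale; a maximum entropy density fitted to those moments then yields an estimate of E[log lambda] and hence of the log determinant.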
Year
2017
DOI
10.1109/BigData.2017.8257915
Venue
2017 IEEE International Conference on Big Data (Big Data)
Keywords
Maximum entropy methods, approximation methods, matrix theory, constrained optimization, noisy constraints, log determinants
Field
Applied mathematics, Linear algebra, Data mining, Computer science, Matrix (mathematics), Matrix decomposition, Equivalence (measure theory), Covariance matrix, Principle of maximum entropy, Eigenvalues and eigenvectors, Kullback–Leibler divergence
DocType
Conference
ISSN
2639-1589
ISBN
978-1-5386-2716-7
Citations
2
PageRank
0.38
References
0
Authors
2
Name                 Order  Citations  PageRank
Diego Granziol       1      6          2.15
Stephen J. Roberts   2      1244       174.70