Abstract
---
Deep generative priors offer powerful models for complex-structured data, such as images, audio, and text. Using these priors in inverse problems typically requires estimating the input and/or hidden signals in a multi-layer deep neural network from observation of its output. While these approaches have been successful in practice, rigorous performance analysis is complicated by the non-convex nature of the underlying optimization problems. This paper presents a novel algorithm, Multi-Layer Vector Approximate Message Passing (ML-VAMP), for inference in multi-layer stochastic neural networks. ML-VAMP can be configured to compute maximum a posteriori (MAP) or approximate minimum mean-squared error (MMSE) estimates for these networks. We show that the performance of ML-VAMP can be exactly predicted in a certain high-dimensional random limit. Furthermore, under certain conditions, ML-VAMP yields estimates that achieve the minimum (i.e., Bayes-optimal) MSE as predicted by the replica method. In this way, ML-VAMP provides a computationally efficient method for multi-layer inference with an exact performance characterization and testable conditions for optimality in the large-system limit.
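To make the inference problem concrete, the sketch below sets up the kind of multi-layer stochastic network the abstract describes: an unknown input passes through alternating linear and nonlinear layers with per-layer noise, and only the output is observed. This is an illustrative assumption-laden toy, not the authors' code; the layer widths, the ReLU activation, the Gaussian noise levels, and all variable names are made up for the example.

```python
# A minimal sketch (not the authors' implementation) of the inverse problem
# ML-VAMP targets: observe the output y of a multi-layer stochastic network
# and estimate the input z0 and hidden-layer signals. All shapes, the ReLU
# activation, and noise levels are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Illustrative layer widths: input -> hidden -> output.
dims = [50, 100, 80]
weights = [rng.standard_normal((dims[l + 1], dims[l])) / np.sqrt(dims[l])
           for l in range(len(dims) - 1)]

def forward(z0, noise_std=0.1):
    """Generate an observation y from input z0 through the stochastic network."""
    z = z0
    for l, W in enumerate(weights):
        z = W @ z                                  # linear layer
        if l < len(weights) - 1:
            z = np.maximum(z, 0.0)                 # ReLU nonlinearity (assumed)
        z = z + noise_std * rng.standard_normal(z.shape)  # per-layer noise
    return z

z0_true = rng.standard_normal(dims[0])  # unknown signal with a Gaussian prior
y = forward(z0_true)                    # only the output is observed

# The inverse problem: recover z0_true (and the hidden activations) from y.
# ML-VAMP passes messages forward and backward through the layers to compute
# MAP or approximate-MMSE estimates; per the abstract, its per-iteration MSE
# is exactly predicted by a state evolution in the large-system limit.
```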
Year | DOI | Venue
---|---|---
2020 | 10.1109/JSAIT.2020.2986321 | IEEE Journal on Selected Areas in Information Theory

Keywords | DocType | Volume
---|---|---
Analyzing deep neural networks, inverse problems, vector approximate message passing, stochastic neural networks, state evolution | Journal | 1

Issue | Citations | PageRank
---|---|---
1 | 1 | 0.36

References | Authors
---|---
0 | 5
Name | Order | Citations | PageRank |
---|---|---|---
Parthe Pandit | 1 | 1 | 0.70
Mojtaba Sahraee-Ardakan | 2 | 8 | 2.61 |
Sundeep Rangan | 3 | 3101 | 163.90 |
Philip Schniter | 4 | 1620 | 93.74 |
Alyson K. Fletcher | 5 | 552 | 41.10 |