Abstract
---
The Neural Process (NP) family encodes distributions over functions into a latent representation, given context data, and decodes the posterior mean and variance at unknown locations. Since the mean and variance are derived from the same latent space, they may fail on out-of-domain tasks, where fluctuations in function values amplify the model uncertainty. We present a new member of this family, Neural Processes with Position-Relevant-Only Variances (NP-PROV). NP-PROV hypothesizes that a target point close to a context point has small uncertainty, regardless of the function value at that position. The resulting approach derives the mean and the variance separately, from a function-value-related latent space and a position-related-only latent space, respectively. Our evaluation on synthetic and real-world datasets reveals that NP-PROV achieves state-of-the-art likelihood while retaining a bounded variance when drifts exist in the function values.
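The key idea of the abstract (a value-dependent mean paired with a variance that depends only on target-to-context positions) can be illustrated with a toy sketch. The snippet below is not the paper's NP-PROV architecture: it replaces the learned encoders and decoders with a fixed RBF kernel, and the `predict` function, its `lengthscale` parameter, and the example data are hypothetical, purely for illustration of how a position-relevant-only variance stays bounded under function-value drift.

```python
import numpy as np

def rbf(a, b, lengthscale=0.5):
    """Squared-exponential similarity between two sets of 1-D positions."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / lengthscale) ** 2)

def predict(x_ctx, y_ctx, x_tgt, lengthscale=0.5):
    """Toy predictor with a position-relevant-only variance.

    Mean:     kernel-weighted average of context function values
              (stands in for a learned, function-value-related decoder).
    Variance: depends only on how close each target position is to the
              context positions, never on the y-values themselves.
    """
    k = rbf(x_tgt, x_ctx, lengthscale)                 # (T, C) similarities
    w = k / (k.sum(axis=1, keepdims=True) + 1e-8)      # normalized weights
    mean = w @ y_ctx                                   # value-dependent mean
    closeness = k.max(axis=1)                          # in (0, 1]
    var = 1.0 - closeness                              # bounded in [0, 1)
    return mean, var

# Usage: the variance stays bounded even when context y-values drift wildly.
x_ctx = np.array([0.0, 1.0, 2.0])
y_ctx = np.array([0.0, 100.0, -50.0])                  # large function-value drift
x_tgt = np.array([0.5, 1.0, 5.0])
mean, var = predict(x_ctx, y_ctx, x_tgt)
print(mean)   # interpolated means, driven by the context y-values
print(var)    # small near context points, approaching 1 far away
```

Because the uncertainty is a function of positions alone, extreme context values change the predicted mean but cannot inflate the variance, which is the behavior the abstract attributes to NP-PROV on out-of-domain tasks.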
Year | DOI | Venue
---|---|---
2021 | 10.1007/978-3-030-90888-1_11 | WISE

DocType | Citations | PageRank
---|---|---
Conference | 0 | 0.34

References | Authors
---|---
0 | 4

Name | Order | Citations | PageRank
---|---|---|---
Xuesong Wang | 1 | 37 | 3.61 |
Lina Yao | 2 | 981 | 93.63 |
Xianzhi Wang | 3 | 276 | 40.32 |
Feiping Nie | 4 | 7061 | 309.42 |