YEE WHYE TEH
Papers: 166
Collaborators: 270
Citations: 6253
PageRank: 539.26
Referers: 13684
Referees: 1601
References: 1111
Title | Citations | PageRank | Year
Generative Models as Distributions of Functions | 0 | 0.34 | 2022
Amortized Rejection Sampling in Universal Probabilistic Programming | 0 | 0.34 | 2022
Multiplicative Interactions and Where to Find Them | 0 | 0.34 | 2020
Bootstrapping Neural Processes | 0 | 0.34 | 2020
Bayesian Deep Ensembles via the Neural Tangent Kernel | 0 | 0.34 | 2020
How Robust are the Estimated Effects of Nonpharmaceutical Interventions against COVID-19? | 0 | 0.34 | 2020
Functional Regularisation for Continual Learning with Gaussian Processes | 0 | 0.34 | 2020
Continual Unsupervised Representation Learning | 0 | 0.34 | 2019
Noise Contrastive Meta-Learning For Conditional Density Estimation Using Kernel Mean Embeddings | 0 | 0.34 | 2019
Exploiting Hierarchy for Learning and Transfer in KL-regularized RL | 2 | 0.36 | 2019
Probabilistic Symmetries and Invariant Neural Networks | 5 | 0.47 | 2019
Hierarchical Representations with Poincaré Variational Auto-Encoders | 0 | 0.34 | 2019
Meta-Learning surrogate models for sequential decision making | 0 | 0.34 | 2019
Random Tessellation Forests | 0 | 0.34 | 2019
Information asymmetry in KL-regularized RL | 1 | 0.35 | 2019
Hybrid Models with Deep and Invertible Features | 1 | 0.35 | 2019
Hijacking Malaria Simulators with Probabilistic Programming | 0 | 0.34 | 2019
Meta-learning of Sequential Strategies | 2 | 0.36 | 2019
Augmented Neural ODEs | 0 | 0.34 | 2019
Task Agnostic Continual Learning via Meta Learning | 0 | 0.34 | 2019
Revisiting Reweighted Wake-Sleep | 0 | 0.34 | 2018
Faithful Inversion of Generative Models for Effective Amortized Inference | 0 | 0.34 | 2018
Sampling And Inference For Beta Neutral-To-The-Left Models Of Sparse Networks | 0 | 0.34 | 2018
Conditional Neural Processes | 0 | 0.34 | 2018
Modelling sparsity, heterogeneity, reciprocity and community structure in temporal interaction data | 0 | 0.34 | 2018
On Exploration, Exploitation and Learning in Adaptive Importance Sampling | 1 | 0.35 | 2018
Neural Processes | 0 | 0.34 | 2018
Stochastic Expectation Maximization with Variance Reduction | 2 | 0.37 | 2018
A Statistical Approach to Assessing Neural Network Robustness | 0 | 0.34 | 2018
Mix & Match Agent Curricula for Reinforcement Learning | 2 | 0.35 | 2018
Hamiltonian Descent Methods | 0 | 0.34 | 2018
Sequential Attend, Infer, Repeat: Generative Modelling of Moving Objects | 8 | 0.45 | 2018
Tighter Variational Bounds are Not Necessarily Better | 8 | 0.52 | 2018
Neural probabilistic motor primitives for humanoid control | 4 | 0.39 | 2018
Disentangling Disentanglement | 0 | 0.34 | 2018
Poisson intensity estimation with reproducing kernels | 4 | 0.50 | 2017
Faithful Model Inversion Substantially Improves Auto-encoding Variational Inference | 0 | 0.34 | 2017
Particle Value Functions | 0 | 0.34 | 2017
Gaussian Processes for Survival Analysis | 0 | 0.34 | 2016
Exploration of the (Non-)Asymptotic Bias and Variance of Stochastic Gradient Langevin Dynamics | 1 | 0.35 | 2016
The Mondrian Kernel | 1 | 0.38 | 2016
DR-ABC: Approximate Bayesian Computation with Kernel-Based Distribution Regression | 1 | 0.36 | 2016
Consistency and Fluctuations For Stochastic Gradient Langevin Dynamics | 18 | 0.87 | 2016
The Concrete Distribution: A Continuous Relaxation of Discrete Random Variables | 24 | 0.59 | 2016
Image Retrieval with a Bayesian Model of Relevance Feedback | 1 | 0.35 | 2016
The Mondrian Process for Machine Learning | 0 | 0.34 | 2015
A hybrid sampler for Poisson-Kingman mixture models | 1 | 0.40 | 2015
Mondrian Forests for Large-Scale Regression when Uncertainty Matters | 6 | 0.50 | 2015
On a class of σ-stable Poisson-Kingman models and an effective marginalized sampler | 0 | 0.34 | 2015
Asynchronous Anytime Sequential Monte Carlo | 13 | 0.87 | 2014