Name: TIMOTHY P. LILLICRAP
Affiliation: Centre for Neuroscience Studies, Queen's University, Kingston, Ontario K7L 3N6, Canada. tim@biomed.queensu.ca
Papers: 69
Collaborators: 256
Citations: 4377
PageRank: 170.65
Referers: 10329
Referees: 1169
References: 718
Title | Citations | PageRank | Year
Retrieval-Augmented Reinforcement Learning. | 0 | 0.34 | 2022
Equilibrium aggregation: encoding sets via optimization. | 0 | 0.34 | 2022
The Brain-Computer Metaphor Debate Is Useless: A Matter of Semantics | 0 | 0.34 | 2022
A data-driven approach for learning to control computers. | 0 | 0.34 | 2022
The functional specialization of visual cortex emerges from training parallel pathways with self-supervised predictive learning. | 0 | 0.34 | 2021
Mastering Atari with Discrete World Models | 0 | 0.34 | 2021
Towards Biologically Plausible Convolutional Networks. | 0 | 0.34 | 2021
A meta-learning approach to (re)discover plasticity rules that carve a desired function into a neural network | 0 | 0.34 | 2020
Meta-Learning Deep Energy-Based Memory Models | 0 | 0.34 | 2020
Automated curriculum generation through setter-solver interactions. | 0 | 0.34 | 2020
Training Generative Adversarial Networks by Solving Ordinary Differential Equations | 0 | 0.34 | 2020
Compressive Transformers for Long-Range Sequence Modelling | 1 | 0.35 | 2020
Dream to Control: Learning Behaviors by Latent Imagination | 0 | 0.34 | 2020
Noise Contrastive Priors for Functional Uncertainty. | 1 | 0.34 | 2019
Recall Traces: Backtracking Models for Efficient Reinforcement Learning | 0 | 0.34 | 2019
Composing Entropic Policies using Divergence Correction. | 0 | 0.34 | 2019
Is coding a relevant metaphor for building AI? A commentary on "Is coding a relevant metaphor for the brain?", by Romain Brette. | 0 | 0.34 | 2019
Experience Replay for Continual Learning | 0 | 0.34 | 2019
Deep Learning without Weight Transport. | 0 | 0.34 | 2019
Learning to Make Analogies by Contrasting Abstract Relational Structure. | 0 | 0.34 | 2019
An investigation of model-free planning. | 1 | 0.35 | 2019
Meta-Learning Neural Bloom Filters. | 1 | 0.35 | 2019
Deep reinforcement learning with relational inductive biases. | 5 | 0.41 | 2019
Deep Compressed Sensing. | 0 | 0.34 | 2019
The Kanerva Machine: A Generative Distributed Memory. | 1 | 0.35 | 2018
Assessing the Scalability of Biologically-Motivated Deep Learning Algorithms and Architectures | 0 | 0.34 | 2018
Measuring abstract reasoning in neural networks. | 10 | 0.55 | 2018
Distributed Distributional Deterministic Policy Gradients. | 20 | 0.70 | 2018
Vector-based navigation using grid-like representations in artificial agents. | 27 | 1.26 | 2018
Learning Attractor Dynamics for Generative Memory. | 1 | 0.35 | 2018
Unsupervised Predictive Memory in a Goal-Directed Agent. | 13 | 0.87 | 2018
Fast Parametric Learning with Activation Memorization. | 7 | 0.46 | 2018
Optimizing Agent Behavior over Long Time Scales by Transporting Value. | 3 | 0.38 | 2018
Relational recurrent neural networks. | 9 | 0.46 | 2018
Relational Deep Reinforcement Learning. | 18 | 0.74 | 2018
Episodic Curiosity through Reachability. | 9 | 0.46 | 2018
Entropic Policy Composition with Generalized Policy Improvement and Divergence Correction. | 0 | 0.34 | 2018
Learning Latent Dynamics for Planning from Pixels. | 9 | 0.48 | 2018
DeepMind Control Suite. | 0 | 0.34 | 2018
Discovering objects and their relations from entangled scene representations. | 23 | 1.00 | 2017
Building Machines that Learn and Think for Themselves: Commentary on Lake et al., Behavioral and Brain Sciences, 2017. | 2 | 0.39 | 2017
Mastering Chess and Shogi by Self-Play with a General Reinforcement Learning Algorithm. | 67 | 2.10 | 2017
Generative Temporal Models with Memory. | 10 | 0.66 | 2017
Mastering the game of Go without human knowledge. | 562 | 19.06 | 2017
Deep reinforcement learning for robotic manipulation with asynchronous off-policy updates. | 84 | 2.18 | 2017
Deep Learning with Dynamic Spiking Neurons and Fixed Feedback Weights. | 6 | 0.42 | 2017
Interpolated Policy Gradient: Merging On-Policy and Off-Policy Gradient Estimation for Deep Reinforcement Learning. | 20 | 1.04 | 2017
A simple neural network module for relational reasoning. | 129 | 3.03 | 2017
StarCraft II: A New Challenge for Reinforcement Learning. | 46 | 1.67 | 2017
Data-efficient Deep Reinforcement Learning for Dexterous Manipulation. | 15 | 0.75 | 2017