Name: MATTHIEU GEIST
Affiliation: Supélec, IMS Research Group, Metz, France; and MCE Department, ArcelorMittal Research, Maizières-lès-Metz, France
Papers: 98
Collaborators: 172
Citations: 385
PageRank: 44.31
Referrers: 686
Referees: 768
References: 796
Title | Citations | PageRank | Year
Lazy-MDPs: Towards Interpretable RL by Learning When to Act | 0 | 0.34 | 2022
Continuous Control with Action Quantization from Demonstrations | 0 | 0.34 | 2022
A general class of surrogate functions for stable and efficient reinforcement learning | 0 | 0.34 | 2022
Generalization in Mean Field Games by Learning Master Policies | 0 | 0.34 | 2022
Implicitly Regularized RL with Implicit Q-values | 0 | 0.34 | 2022
Scaling Mean Field Games by Online Mirror Descent | 0 | 0.34 | 2022
Offline Reinforcement Learning as Anti-exploration | 0 | 0.34 | 2022
Concave Utility Reinforcement Learning: The Mean-field Game Viewpoint | 0 | 0.34 | 2022
Scalable Deep Reinforcement Learning Algorithms for Mean Field Games | 0 | 0.34 | 2022
Offline Reinforcement Learning With Pseudometric Learning | 0 | 0.34 | 2021
Show me the Way: Intrinsic Motivation from Demonstrations | 0 | 0.34 | 2021
How To Train Your Heron | 0 | 0.34 | 2021
Mean Field Games Flock! The Reinforcement Learning Way | 0 | 0.34 | 2021
Adversarially Guided Actor-Critic | 0 | 0.34 | 2021
Self-Imitation Advantage Learning | 0 | 0.34 | 2021
What Matters for On-Policy Deep Actor-Critic Methods? A Large-Scale Study | 0 | 0.34 | 2021
Hyperparameter Selection for Imitation Learning | 0 | 0.34 | 2021
What Matters for Adversarial Imitation Learning? | 0 | 0.34 | 2021
Primal Wasserstein Imitation Learning | 0 | 0.34 | 2021
Fictitious Play for Mean Field Games: Continuous Time Analysis and Applications | 0 | 0.34 | 2020
On The Convergence Of Model Free Learning In Mean Field Games | 0 | 0.34 | 2020
Self-Attentional Credit Assignment for Transfer in Reinforcement Learning | 1 | 0.36 | 2020
Foolproof Cooperative Learning | 0 | 0.34 | 2020
Image-Based Place Recognition on Bucolic Environment Across Seasons From Semantic Edge Description | 0 | 0.34 | 2020
CopyCAT: Taking Control of Neural Policies with Constant Attacks | 0 | 0.34 | 2020
Leverage the Average: an Analysis of KL Regularization in Reinforcement Learning | 0 | 0.34 | 2020
Munchausen Reinforcement Learning | 0 | 0.34 | 2020
Deep Conservative Policy Iteration | 0 | 0.34 | 2019
Stable and Efficient Policy Evaluation | 1 | 0.41 | 2019
Targeted Attacks on Deep Reinforcement Learning Agents through Adversarial Observations | 0 | 0.34 | 2019
A Theory of Regularized Markov Decision Processes | 0 | 0.34 | 2019
Learning from a Learner | 0 | 0.34 | 2019
Image-Based Text Classification using 2D Convolutional Neural Networks | 0 | 0.34 | 2019
Foolproof Cooperative Learning | 0 | 0.34 | 2019
Importance Sampling for Deep System Identification | 0 | 0.34 | 2019
A Deep Learning Approach For Privacy Preservation In Assisted Living | 1 | 0.35 | 2018
Human Activity Recognition Using Recurrent Neural Networks | 10 | 0.62 | 2018
Image-based Natural Language Understanding Using 2D Convolutional Neural Networks | 0 | 0.34 | 2018
Anderson Acceleration for Reinforcement Learning | 0 | 0.34 | 2018
Deep Representation Learning for Domain Adaptation of Semantic Image Segmentation | 0 | 0.34 | 2018
Reconstruct & Crush Network | 0 | 0.34 | 2017
Is the Bellman residual a bad proxy? | 0 | 0.34 | 2017
Bridging the Gap Between Imitation Learning and Inverse Reinforcement Learning | 7 | 0.51 | 2017
Should one minimize the expected Bellman residual or maximize the mean value? | 0 | 0.34 | 2016
Difference of Convex Functions Programming Applied to Control with Expert Data | 0 | 0.34 | 2016
Score-based Inverse Reinforcement Learning | 0 | 0.34 | 2016
Softened Approximate Policy Iteration for Markov Games | 4 | 0.42 | 2016
Imitation Learning Applied to Embodied Conversational Agents | 2 | 0.38 | 2015
Soft-max boosting | 0 | 0.34 | 2015
Inverse reinforcement learning in relational domains | 4 | 0.41 | 2015