Name: S. NIEKUM
Affiliation: Dept. of Comput. Sci., Univ. of Massachusetts, Amherst, MA, USA
Papers: 58
Collaborators: 94
Citations: 165
PageRank: 23.73
Referers: 432
Referees: 526
References: 299
| Title | Citations | PageRank | Year |
| --- | --- | --- | --- |
| Understanding Acoustic Patterns of Human Teachers Demonstrating Manipulation Tasks to Robots | 0 | 0.34 | 2022 |
| SOPE: Spectrum of Off-Policy Estimators | 0 | 0.34 | 2021 |
| SCAPE - Learning Stiffness Control from Augmented Position Control Experiences | 0 | 0.34 | 2021 |
| Value Alignment Verification | 0 | 0.34 | 2021 |
| Universal Off-Policy Evaluation | 0 | 0.34 | 2021 |
| Efficiently Guiding Imitation Learning Agents with Human Gaze | 0 | 0.34 | 2021 |
| Understanding the Relationship between Interactions and Outcomes in Human-in-the-Loop Machine Learning | 0 | 0.34 | 2021 |
| Distributional Depth-Based Estimation of Object Articulation Models | 0 | 0.34 | 2021 |
| Demonstration Of The Empathic Framework For Task Learning From Implicit Human Feedback | 0 | 0.34 | 2021 |
| A Review Of Robot Learning For Manipulation: Challenges, Representations, And Algorithms | 0 | 0.34 | 2021 |
| ScrewNet: Category-Independent Articulation Model Estimation From Depth Images Using Screw Theory | 0 | 0.34 | 2021 |
| Adversarial Intrinsic Motivation for Reinforcement Learning | 0 | 0.34 | 2021 |
| You Only Evaluate Once - a Simple Baseline Algorithm for Offline RL | 0 | 0.34 | 2021 |
| Importance Sampling In Reinforcement Learning With An Estimated Behavior Policy | 0 | 0.34 | 2021 |
| Self-Supervised Online Reward Shaping in Sparse-Reward Environments | 0 | 0.34 | 2021 |
| Bayesian Robust Optimization for Imitation Learning | 0 | 0.34 | 2020 |
| Safe Imitation Learning via Fast Bayesian Reward Inference from Preferences | 0 | 0.34 | 2020 |
| PixL2R - Guiding Reinforcement Learning Using Natural Language by Mapping Pixels to Rewards | 0 | 0.34 | 2020 |
| Hypothesis-Driven Skill Discovery for Hierarchical Deep Reinforcement Learning | 0 | 0.34 | 2020 |
| Human Gaze Assisted Artificial Intelligence - A Review | 0 | 0.34 | 2020 |
| Learning Hybrid Object Kinematics for Efficient Hierarchical Planning Under Uncertainty | 0 | 0.34 | 2020 |
| The EMPATHIC Framework for Task Learning from Implicit Human Feedback | 0 | 0.34 | 2020 |
| Better-than-Demonstrator Imitation Learning via Automatically-Ranked Demonstrations | 0 | 0.34 | 2019 |
| Learning from Corrective Demonstrations | 0 | 0.34 | 2019 |
| Extrapolating Beyond Suboptimal Demonstrations via Inverse Reinforcement Learning from Observations | 1 | 0.35 | 2019 |
| Uncertainty-Aware Data Aggregation for Deep Imitation Learning | 2 | 0.36 | 2019 |
| Using Natural Language for Reward Shaping in Reinforcement Learning | 1 | 0.36 | 2019 |
| Enhancing Robot Learning with Human Social Cues | 0 | 0.34 | 2019 |
| Understanding Teacher Gaze Patterns for Robot Learning | 0 | 0.34 | 2019 |
| Risk-Aware Active Inverse Reinforcement Learning | 2 | 0.37 | 2019 |
| Efficient Probabilistic Performance Bounds for Inverse Reinforcement Learning | 2 | 0.38 | 2018 |
| Asking for Help Effectively via Modeling of Human Beliefs | 0 | 0.34 | 2018 |
| Efficient Hierarchical Robot Motion Planning Under Uncertainty and Hybrid Dynamics | 0 | 0.34 | 2018 |
| Learning Multi-Step Robotic Tasks from Observation | 0 | 0.34 | 2018 |
| Towards Online Learning from Corrective Demonstrations | 0 | 0.34 | 2018 |
| Safe Reinforcement Learning via Shielding | 5 | 0.45 | 2018 |
| LAAIR: A Layered Architecture for Autonomous Interactive Robots | 0 | 0.34 | 2018 |
| Importance Sampling Policy Evaluation with an Estimated Behavior Policy | 2 | 0.37 | 2018 |
| Human Gaze Following For Human-Robot Interaction | 1 | 0.35 | 2018 |
| Machine Teaching for Inverse Reinforcement Learning: Algorithms and Applications | 1 | 0.36 | 2018 |
| Bootstrapping with Models: Confidence Intervals for Off-Policy Evaluation | 3 | 0.47 | 2017 |
| Toward Probabilistic Safety Bounds for Robot Learning from Demonstration | 2 | 0.38 | 2017 |
| Data-Efficient Policy Evaluation Through Behavior Policy Search | 0 | 0.34 | 2017 |
| High Confidence Off-Policy Evaluation with Models | 0 | 0.34 | 2016 |
| On the Analysis of Complex Backup Strategies in Monte Carlo Tree Search | 4 | 0.40 | 2016 |
| Policy Evaluation Using the Ω-Return | 1 | 0.39 | 2015 |
| Learning grounded finite-state representations from unstructured demonstrations | 30 | 1.06 | 2015 |
| Active articulation model estimation through interactive perception | 12 | 0.58 | 2015 |
| Online Bayesian changepoint detection for articulated motion models | 5 | 0.47 | 2015 |
| Learning pouring skills from demonstration and practice | 0 | 0.34 | 2014 |