Name: LILLIAN J. RATLIFF
Papers: 58
Collaborators: 54
Citations: 87
PageRank: 23.32
Referers: 157
Referees: 553
References: 287
Title | Citations | PageRank | Year
Variable demand and multi-commodity flow in Markovian network equilibrium | 1 | 0.36 | 2022
Stackelberg Actor-Critic: Game-Theoretic Reinforcement Learning Algorithms. | 0 | 0.34 | 2022
Zeroth-Order Methods for Convex-Concave Min-max Problems: Applications to Decision-Dependent Risk Minimization | 0 | 0.34 | 2022
Adaptive Incentive Design | 0 | 0.34 | 2021
Safe Reinforcement Learning of Control-Affine Systems with Vertex Networks | 0 | 0.34 | 2021
A SUPER* Algorithm to Optimize Paper Bidding in Peer Review. | 0 | 0.34 | 2020
Gaussian Mixture Models for Parking Demand Data | 1 | 0.35 | 2020
Modeling Curbside Parking as a Network of Finite Capacity Queues | 1 | 0.39 | 2020
Uncertainty in Multicommodity Routing Networks: When Does It Help? | 3 | 0.41 | 2020
Competitive Statistical Estimation With Strategic Data Sources | 0 | 0.34 | 2020
Inverse Risk-Sensitive Reinforcement Learning | 2 | 0.38 | 2020
Stability of Gradient Learning Dynamics in Continuous Games: Scalar Action Spaces | 0 | 0.34 | 2020
Constrained Upper Confidence Reinforcement Learning | 0 | 0.34 | 2020
On Gradient-Based Learning In Continuous Games | 0 | 0.34 | 2020
Policy-Gradient Algorithms Have No Guarantees of Convergence in Linear Quadratic Games | 0 | 0.34 | 2020
Convergence Analysis of Gradient-Based Learning in Continuous Games. | 0 | 0.34 | 2019
Competitive Statistical Estimation with Strategic Data Sources. | 0 | 0.34 | 2019
Convergence Analysis of Gradient-Based Learning with Non-Uniform Learning Rates in Non-Cooperative Multi-Agent Settings. | 0 | 0.34 | 2019
Tolling for Constraint Satisfaction in Markov Decision Process Congestion Games. | 0 | 0.34 | 2019
Convergence of Learning Dynamics in Stackelberg Games. | 0 | 0.34 | 2019
Mobilytics-Gym: A Simulation Framework for Analyzing Urban Mobility Decision Strategies | 0 | 0.34 | 2019
Mobilytics- An Extensible, Modular and Resilient Mobility Platform | 0 | 0.34 | 2018
Quantifying the Utility-Privacy Tradeoff in the Internet of Things. | 1 | 0.35 | 2018
Combinatorial Bandits For Incentivizing Agents With Dynamic Preferences | 0 | 0.34 | 2018
Towards a Socially Optimal Multi-Modal Routing Platform. | 0 | 0.34 | 2018
Data Driven Spatio-Temporal Modeling Of Parking Demand | 0 | 0.34 | 2018
A Robust Utility Learning Framework via Inverse Optimization. | 3 | 0.49 | 2018
Uncertainty In Multi-Commodity Routing Networks: When Does It Help? | 1 | 0.40 | 2018
Incentives in the Dark: Multi-armed Bandits for Evolving Users with Unknown Type. | 0 | 0.34 | 2018
On the Convergence of Competitive, Multi-Agent Gradient-Based Learning. | 0 | 0.34 | 2018
Risk-Sensitive Inverse Reinforcement Learning via Gradient Methods. | 0 | 0.34 | 2017
Optimizing Curbside Parking Resources Subject To Congestion Constraints | 2 | 0.61 | 2017
Leveraging Correlations In Utility Learning | 0 | 0.34 | 2017
How Much Urban Traffic is Searching for Parking? | 2 | 0.47 | 2017
Gradient-Based Inverse Risk-Sensitive Reinforcement Learning | 1 | 0.36 | 2017
Statistical Estimation With Strategic Data Sources In Competitive Settings | 1 | 0.45 | 2017
Learning Prospect Theory Value Function And Reference Point Of A Sequential Decision Maker | 2 | 0.38 | 2017
Understanding The Impact Of Parking On Urban Mobility Via Routing Games On Queue-Flow Networks | 0 | 0.34 | 2016
Inverse Modeling Of Non-Cooperative Agents Via Mixture Of Utilities | 0 | 0.34 | 2016
To Observe Or Not To Observe: Queuing Game Framework For Urban Parking | 3 | 0.54 | 2016
On the Characterization of Local Nash Equilibria in Continuous Games | 14 | 0.68 | 2016
REST: a reliable estimation of stopping time algorithm for social game experiments | 1 | 0.38 | 2015
Genericity and structural stability of non-degenerate differential Nash equilibria | 2 | 0.43 | 2014
Social game for building energy efficiency: Incentive design | 4 | 0.57 | 2014
Analysis of the Godunov-Based Hybrid Model for Ramp Metering and Robust Feedback Control Design | 1 | 0.40 | 2014
Privacy and customer segmentation in the smart grid | 1 | 0.35 | 2014
Quantifying the Utility-Privacy Tradeoff in the Smart Grid. | 2 | 0.37 | 2014
Effects of Risk on Privacy Contracts for Demand-Side Management. | 3 | 0.45 | 2014
Social Game for Building Energy Efficiency: Utility Learning, Simulation, and Analysis. | 2 | 0.44 | 2014
Energy efficiency via incentive design and utility learning | 1 | 0.35 | 2014