Name: JULIO B. CLEMPNER
Affiliation: Ctr Comp Res, Natl Polytechn Inst, Ave Juan Dios Batiz s/n, Edificio CIC, Col Nueva In, Mexico City 07738, DF, Mexico
Papers: 52
Collaborators: 21
Citations: 91
PageRank: 20.11
Referrers: 53
Referees: 655
References: 448
Title | Citations | PageRank | Year
Computing the Nash Bargaining Solution for Multiple Players in Discrete-Time Markov Chains Games | 0 | 0.34 | 2020
Finding the Strong Nash Equilibrium: Computation, Existence and Characterization for Markov Games | 0 | 0.34 | 2020
Repeated Stackelberg Security Games: Learning with Incomplete State Information | 1 | 0.35 | 2020
Traffic-signal control reinforcement learning approach for continuous-time Markov games | 0 | 0.34 | 2020
A nucleus for Bayesian Partially Observable Markov Games: Joint observer and mechanism design | 0 | 0.34 | 2020
Continuous-time gradient-like descent algorithm for constrained convex unknown functions: Penalty method application | 0 | 0.34 | 2019
A Stackelberg security Markov game based on partial information for strategic decision making against unexpected attacks | 1 | 0.35 | 2019
Controller exploitation-exploration reinforcement learning architecture for computing near-optimal policies | 1 | 0.35 | 2019
Continuous-time reinforcement learning approach for portfolio management with time penalization | 1 | 0.35 | 2019
Continuous-Time Mean Variance Portfolio With Transaction Costs: A Proximal Approach Involving Time Penalization | 0 | 0.34 | 2019
Solving traffic queues at controlled-signalized intersections in continuous-time Markov games | 0 | 0.34 | 2019
Observer and control design in partially observable finite Markov chains | 0 | 0.34 | 2019
DC Motor Control based on Robust run-time Optimization Algorithm | 0 | 0.34 | 2019
A team formation method based on a Markov chains games approach | 0 | 0.34 | 2019
Robust Extremum Seeking For A Second Order Uncertain Plant Using A Sliding Mode Controller | 0 | 0.34 | 2019
Using The Manhattan Distance For Computing The Multiobjective Markov Chains Problem | 0 | 0.34 | 2018
On Lyapunov Game Theory Equilibrium: Static And Dynamic Approaches | 0 | 0.34 | 2018
Computing multiobjective Markov chains handled by the extraproximal method | 1 | 0.38 | 2018
A Tikhonov regularized penalty function approach for solving polylinear programming problems | 2 | 0.37 | 2018
Continuous-Time Extremum Seeking with Function Measurements Disturbed by Stochastic Noise: A Synchronous Detection Approach | 2 | 0.39 | 2018
Measuring the emotional state among interacting agents: A game theory approach using reinforcement learning | 1 | 0.37 | 2018
Solving Stackelberg security Markov games employing the bargaining Nash approach: Convergence analysis | 1 | 0.35 | 2018
Constrained Extremum Seeking with Function Measurements Disturbed by Stochastic Noise | 2 | 0.39 | 2018
A Reinforcement Learning Approach For Solving The Mean Variance Customer Portfolio In Partially Observable Models | 0 | 0.34 | 2018
A Continuous-Time Markov Stackelberg Security Game Approach For Reasoning About Real Patrol Strategies | 4 | 0.41 | 2018
Strategic Manipulation Approach for Solving Negotiated Transfer Pricing Problem | 2 | 0.38 | 2018
Adapting attackers and defenders patrolling strategies: A reinforcement learning approach for Stackelberg security games | 3 | 0.38 | 2018
Classical workflow nets and workflow nets with reset arcs: using Lyapunov stability for soundness verification | 0 | 0.34 | 2017
Fast Terminal Sliding-Mode Control With an Integral Filter Applied to a Van Der Pol Oscillator | 2 | 0.38 | 2017
Multiobjective Markov chains optimization problem with strong Pareto frontier: Principles of decision making | 3 | 0.40 | 2017
Using the extraproximal method for computing the shortest-path mixed Lyapunov equilibrium in Stackelberg security games | 6 | 0.53 | 2017
A Game Theory Model for Manipulation Based on Machiavellianism: Moral and Ethical Behavior | 3 | 0.46 | 2017
An iterative method for solving Stackelberg security games: A Markov games approach | 1 | 0.35 | 2017
Negotiating transfer pricing using the Nash bargaining solution | 3 | 0.45 | 2017
Conforming coalitions in Markov Stackelberg security games: Setting max cooperative defenders vs. non-cooperative attackers | 0 | 0.34 | 2016
Adapting Strategies To Dynamic Environments In Controllable Stackelberg Security Games | 0 | 0.34 | 2016
An optimal strong equilibrium solution for cooperative multi-leader-follower Stackelberg Markov chains games | 3 | 0.41 | 2016
Necessary and sufficient Karush-Kuhn-Tucker conditions for multiobjective Markov chains optimality | 5 | 0.50 | 2016
Modeling Multileader–Follower Noncooperative Stackelberg Games | 3 | 0.43 | 2016
Convergence analysis for pure stationary strategies in repeated potential games: Nash, Lyapunov and correlated equilibria | 1 | 0.35 | 2016
Solving the Pareto front for multiobjective Markov chains using the minimum Euclidean distance gradient-based optimization method | 6 | 0.51 | 2016
Solving the mean-variance customer portfolio in Markov chains using iterated quadratic/Lagrange programming: A credit-card customer limits approach | 0 | 0.34 | 2015
A priori-knowledge/actor-critic reinforcement learning architecture for computing the mean–variance customer portfolio: The case of bank marketing campaigns | 1 | 0.35 | 2015
Solving the multi-traffic signal-control problem for a class of continuous-time Markov games | 0 | 0.34 | 2015
Computing The Stackelberg/Nash Equilibria Using The Extraproximal Method: Convergence Analysis And Implementation Details For Markov Chains Games | 14 | 0.76 | 2015
A Stackelberg security game with random strategies based on the extraproximal theoretic approach | 9 | 0.68 | 2015
Setting Cournot Versus Lyapunov Games Stability Conditions And Equilibrium Point Properties | 2 | 0.41 | 2015
Computing the Lp-strong Nash equilibrium looking for cooperative stability in multiple agents Markov games | 0 | 0.34 | 2015
Stackelberg security games: Computing the shortest-path equilibrium | 5 | 0.46 | 2015
Computing The Strong Nash Equilibrium For Markov Chains Games | 2 | 0.41 | 2015