Abstract |
---|
This paper proposes a new approach for tuning the parameters of fuzzy controllers based on reinforcement learning. The architecture of the proposed approach comprises a Q estimator network (QEN) and a Takagi-Sugeno type fuzzy inference system (FIS). Unlike most existing fuzzy Q-learning approaches, which select an optimal action from a finite set of discrete actions, the proposed controller obtains the control output directly. With the proposed architecture, learning algorithms for all the parameters of the QEN and the FIS are developed based on temporal difference methods together with the gradient descent algorithm. The performance of the proposed design technique is illustrated by simulation studies of a vehicle longitudinal control system. |
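The abstract's idea (a TS fuzzy controller producing a continuous action, a Q estimator scored and tuned by gradient descent on the temporal-difference error) can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the Gaussian memberships, the zero-order TS consequents, the linear Q estimator, and all names and shapes here are assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed structure (not from the paper): 3 zero-order TS rules with Gaussian
# memberships over a scalar state, and a linear Q estimator over [state, action].
n_rules = 3
centers = np.linspace(-1.0, 1.0, n_rules)    # membership centers (assumed)
sigma = 0.5                                  # membership width (assumed)
theta = rng.normal(0.0, 0.1, n_rules)        # FIS consequent parameters
w = rng.normal(0.0, 0.1, 2)                  # QEN weights
alpha, gamma = 0.05, 0.95                    # learning rate, discount factor

def fis_output(s):
    """Zero-order Takagi-Sugeno inference: weighted average of rule consequents."""
    mu = np.exp(-((s - centers) ** 2) / (2 * sigma ** 2))
    phi = mu / mu.sum()                      # normalized firing strengths
    return float(phi @ theta), phi

def q_value(s, a):
    """Toy linear Q estimator over the state-action vector."""
    return float(w @ np.array([s, a]))

# One TD / gradient-descent update on a toy transition (s, a, r, s').
s = 0.2
a, phi = fis_output(s)
r, s_next = -abs(s), 0.1                     # toy reward and next state
a_next, _ = fis_output(s_next)

delta = r + gamma * q_value(s_next, a_next) - q_value(s, a)  # TD error
w += alpha * delta * np.array([s, a])        # QEN update: descend squared TD error
theta += alpha * delta * w[1] * phi          # FIS update via chain rule: dQ/da * da/dtheta
```

The key point the abstract makes is visible in the last line: because the action is a continuous function of the FIS parameters, the TD error can be backpropagated into the controller itself rather than used to rank a finite set of discrete actions.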
Year | DOI | Venue |
---|---|---|
2003 | 10.1109/FUZZ.2003.1209417 | FUZZ-IEEE |
Keywords | Field | DocType |
---|---|---|
fuzzy controllers,vehicle longitudinal control system,takagi-sugeno type fuzzy inference system,reinforcement learning,optimal action-value function,gradient descent algorithm,inference mechanisms,learning (artificial intelligence),parameter tuning,q estimator network,fuzzy systems,gradient methods,fuzzy control,temporal difference methods,optimal control,control systems,adaptive control | Control theory,Gradient descent,Temporal difference learning,Computer science,Fuzzy logic,Artificial intelligence,Control system,Adaptive neuro fuzzy inference system,Fuzzy control system,Machine learning,Reinforcement learning | Conference |
Volume | ISBN | Citations |
---|---|---|
1 | 0-7803-7810-5 | 27 |
PageRank | References | Authors |
---|---|---|
1.43 | 16 | 3 |
Name | Order | Citations | PageRank |
---|---|---|---|
Xiaohui Dai | 1 | 44 | 2.74 |
Chi-Kwong Li | 2 | 313 | 29.81 |
Ahmad B. Rad | 3 | 273 | 30.64 |