Abstract |
---|
This paper addresses an issue in reinforcement learning (RL): when agents possess varying capabilities, most resources may be acquired by the stronger agents, leaving the weaker ones "starving". We introduce a simple method to train non-greedy agents in multi-agent reinforcement learning scenarios at nearly no extra cost. Our model achieves the following design goals for a non-greedy agent: non-homogeneous equality, reliance on local information only, cost-effectiveness, generalizability, and configurability. We propose the idea of a diminishing reward, which makes an agent feel less satisfied with consecutively obtained rewards. This idea allows agents to behave less greedily without the need to explicitly encode any ethical pattern or monitor other agents' status. Under our framework, resources can be distributed more equally without running the risk of reaching homogeneous equality. We designed two games, Gathering Game and Hunter Prey, to evaluate the quality of the model. |
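The diminishing-reward idea in the abstract can be sketched as a small reward-shaping wrapper: each consecutive reward the agent collects is scaled down, and the scale resets once the agent stops collecting. The exponential decay rule, the class name `DiminishingReward`, and the `decay` parameter below are assumptions for illustration; the paper's exact shaping function is not given in this record.

```python
class DiminishingReward:
    """Hedged sketch of diminishing-reward shaping: consecutive rewards
    are scaled by an (assumed) exponential decay so the agent feels
    less satisfied the longer its winning streak lasts."""

    def __init__(self, decay=0.5):
        self.decay = decay  # assumed per-streak decay factor
        self.streak = 0     # number of consecutive rewarded steps so far

    def shape(self, reward):
        if reward > 0:
            shaped = reward * (self.decay ** self.streak)
            self.streak += 1  # the next consecutive reward is worth less
        else:
            shaped = reward
            self.streak = 0   # streak resets once the agent stops collecting
        return shaped

# Usage: an agent grabbing a resource on four consecutive steps sees
# shaped rewards 1.0, 0.5, 0.25, 0.125 with decay=0.5.
shaper = DiminishingReward(decay=0.5)
print([shaper.shape(r) for r in [1, 1, 1, 1]])  # [1.0, 0.5, 0.25, 0.125]
```

Because the shaping depends only on the agent's own reward stream, it needs no information about other agents, which matches the "only need local information" goal stated in the abstract.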
Year | DOI | Venue |
---|---|---|
2018 | 10.1145/3278721.3278759 | PROCEEDINGS OF THE 2018 AAAI/ACM CONFERENCE ON AI, ETHICS, AND SOCIETY (AIES'18) |
Keywords | Field | DocType
---|---|---|
multi-agent reinforcement learning, reward shaping, non-greedy | Computer science, Homogeneous, Coding (social sciences), Artificial intelligence, Machine learning, Reinforcement learning | Conference
Citations | PageRank | References
---|---|---|
2 | 0.37 | 14
Authors |
---|
4 |
Name | Order | Citations | PageRank |
---|---|---|---|
Fan-Yun Sun | 1 | 6 | 2.13 |
Yen-Yu Chang | 2 | 2 | 0.71 |
Yueh-Hua Wu | 3 | 8 | 2.87 |
Shou-De Lin | 4 | 706 | 84.81 |