Abstract
---
In this paper, we point out a fundamental property of the objective in reinforcement learning, with which we can reformulate the policy gradient objective into a perceptron-like loss function, removing the need to distinguish between on-policy and off-policy training. Namely, we posit that it is sufficient to update a policy $\pi$ only for cases that satisfy the condition $A(\frac{\pi}{\mu}-1)\leq 0$, where $A$ is the advantage and $\mu$ is another policy. Furthermore, we show via theoretical derivation that a perceptron-like loss function matches the clipped surrogate objective of PPO. With our new formulation, the policies $\pi$ and $\mu$ can be arbitrarily far apart in theory, effectively enabling off-policy training. To examine our derivations, we combine the on-policy PPO clipped surrogate (which we show to be equivalent to one instance of the new formulation) with the off-policy IMPALA method. We first verify the combined method on the OpenAI Gym pendulum toy problem. Next, we use our method to train a quadrotor position controller in a simulator. Our trained policy is efficient and lightweight enough to run on a low-cost microcontroller at a minimum update rate of 500 Hz. For the quadrotor, we present two experiments to verify our method and demonstrate performance: 1) hovering at a fixed position, and 2) tracking a specific trajectory. In preliminary trials, we are also able to apply the method to a real-world quadrotor.
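To make the stated condition concrete, below is a minimal NumPy sketch contrasting the standard PPO clipped surrogate with one possible perceptron-like reading of the condition $A(\frac{\pi}{\mu}-1)\leq 0$. This is illustrative only, not the authors' implementation; the function names, the clipping parameter $\epsilon=0.2$, and the exact masked form are assumptions.

```python
import numpy as np

def ppo_clip_objective(ratio, advantage, eps=0.2):
    # Standard PPO clipped surrogate (Schulman et al., 2017):
    # E[min(r * A, clip(r, 1 - eps, 1 + eps) * A)], with r = pi(a|s) / mu(a|s).
    return np.minimum(ratio * advantage,
                      np.clip(ratio, 1.0 - eps, 1.0 + eps) * advantage).mean()

def perceptron_like_objective(ratio, advantage):
    # Hypothetical perceptron-like form: a sample contributes gradient only
    # while A * (r - 1) <= 0, i.e. while pi has not yet moved past mu in the
    # advantage's direction; other samples are masked out, analogous to the
    # perceptron ignoring correctly classified points.
    active = advantage * (ratio - 1.0) <= 0.0
    return (active * ratio * advantage).mean()

# Toy check with made-up values (not from the paper):
ratio = np.array([0.8, 1.0, 1.3])       # probability ratios pi / mu
advantage = np.array([1.0, -0.5, 2.0])  # advantage estimates
print(ppo_clip_objective(ratio, advantage))
print(perceptron_like_objective(ratio, advantage))
```

Note the structural resemblance: PPO's clipping zeroes the gradient once $r$ moves past $1\pm\epsilon$ in the direction favored by $A$, while the masked form above does so as soon as $r$ crosses $1$; the paper's claimed match with the clipped surrogate presumably hinges on this shared gating behavior.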
Year | Venue | DocType
---|---|---|
2019 | arXiv: Learning | Journal

Citations | PageRank | References
---|---|---|
0 | 0.34 | 0
Authors (7)
---

Name | Order | Citations | PageRank |
---|---|---|---|
Kai-Chun Hu | 1 | 0 | 0.34 |
Chen-Huan Pi | 2 | 0 | 0.34 |
Ting Han Wei | 3 | 0 | 0.34 |
I-Chen Wu | 4 | 208 | 55.03 |
Stone Cheng | 5 | 4 | 2.96 |
Yi-Wei Dai | 6 | 0 | 0.34 |
Wei-Yuan Ye | 7 | 0 | 0.34 |