Abstract |
---|
An assembly task is in many cases simply the reverse execution of the corresponding disassembly task. During assembly, the object being assembled passes consecutively from state to state until completion, and the set of possible movements becomes increasingly constrained. Based on the observation that autonomous learning of physically constrained tasks can be advantageous, we use information obtained during the learning of disassembly in assembly. For autonomous learning of a disassembly policy, we propose hierarchical reinforcement learning, where learning is decomposed into high-level decision-making and an underlying lower-level intelligent compliant controller that exploits the natural motion in a constrained environment. During the reverse execution of the disassembly policy, the motion is further optimized by means of an iterative learning controller. The proposed approach was verified on two challenging tasks: a maze learning problem and autonomous learning of inserting a car bulb into its casing. |
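The abstract's core idea can be illustrated with a minimal sketch on the maze example, assuming a toy grid maze: tabular Q-learning plays the role of the high-level decision-making, a step function that lets walls constrain the commanded motion stands in for the low-level compliant controller, and reversing the learned disassembly path yields the assembly motion. All names, the maze layout, and the learning parameters are illustrative, not the authors' implementation.

```python
import random

# Hypothetical 4x4 "maze": 0 = free, 1 = wall. Disassembly = moving the
# part from START out of the maze to EXIT; the reversed path is assembly.
MAZE = [
    [0, 0, 1, 0],
    [1, 0, 1, 0],
    [0, 0, 0, 0],
    [0, 1, 1, 0],
]
START, EXIT = (0, 0), (3, 3)
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right

def compliant_step(state, action):
    """Low-level 'compliant' move: if the commanded motion is blocked by a
    wall or the border, the environment constrains it and the state stays."""
    r, c = state[0] + action[0], state[1] + action[1]
    if 0 <= r < 4 and 0 <= c < 4 and MAZE[r][c] == 0:
        return (r, c)
    return state

def learn_disassembly(episodes=2000, alpha=0.5, gamma=0.9, eps=0.2):
    """High-level decision-making: tabular epsilon-greedy Q-learning."""
    Q, rng = {}, random.Random(0)
    for _ in range(episodes):
        s = START
        for _ in range(50):
            if rng.random() < eps:
                a = rng.randrange(4)
            else:
                a = max(range(4), key=lambda i: Q.get((s, i), 0.0))
            s2 = compliant_step(s, ACTIONS[a])
            reward = 1.0 if s2 == EXIT else -0.01  # small step penalty
            best = max(Q.get((s2, i), 0.0) for i in range(4))
            Q[(s, a)] = Q.get((s, a), 0.0) + alpha * (
                reward + gamma * best - Q.get((s, a), 0.0))
            s = s2
            if s == EXIT:
                break
    return Q

def greedy_path(Q):
    """Roll out the learned disassembly policy greedily."""
    s, path = START, [START]
    while s != EXIT and len(path) < 50:
        a = max(range(4), key=lambda i: Q.get((s, i), 0.0))
        s = compliant_step(s, ACTIONS[a])
        path.append(s)
    return path

disassembly = greedy_path(learn_disassembly())
assembly = list(reversed(disassembly))  # reverse execution gives assembly
```

In the paper this reversed trajectory would then be refined further by an iterative learning controller; the sketch stops at the reversal step.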
Year | DOI | Venue |
---|---|---|
2019 | 10.1109/Humanoids43949.2019.9035052 | 2019 IEEE-RAS 19th International Conference on Humanoid Robots (Humanoids) |

Keywords | DocType | ISSN |
---|---|---|
lower-level intelligent compliant controller, maze learning problem, iterative learning controller, high-level decision-making, hierarchical reinforcement learning, disassembly policy, physically constrained tasks, reverse execution, disassembly task, assembly task, autonomous learning | Conference | 2164-0572 |

ISBN | Citations | PageRank |
---|---|---|
978-1-5386-7631-8 | 1 | 0.35 |

References | Authors |
---|---|
0 | 4 |
Name | Order | Citations | PageRank |
---|---|---|---|
Mihael Simonič | 1 | 2 | 2.40
Leon Žlajpah | 2 | 100 | 16.58
Aleš Ude | 3 | 898 | 85.11
Bojan Nemec | 4 | 345 | 30.28 |