| Abstract |
|---|
| Robust learning from noisy demonstrations is a practical but highly challenging problem in imitation learning. In this paper, we first show theoretically that robust imitation learning can be achieved by optimizing a classification risk with a symmetric loss. Based on this theoretical finding, we then propose a new imitation learning method that optimizes the classification risk by effectively combining pseudo-labeling with co-training. Unlike existing methods, our method does not require additional labels or strict assumptions about noise distributions. Experimental results on continuous-control benchmarks show that our method is more robust than state-of-the-art methods. |
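The symmetric losses mentioned in the abstract are those satisfying ℓ(z) + ℓ(−z) = C for a constant C over all margins z, a property known to confer robustness to label noise. The sketch below illustrates this condition with the sigmoid loss (symmetric) against the logistic loss (not symmetric); the function names are ours for illustration, not from the paper:

```python
import math

def sigmoid_loss(z):
    # Sigmoid loss l(z) = 1 / (1 + exp(z)): a classic symmetric loss.
    return 1.0 / (1.0 + math.exp(z))

def logistic_loss(z):
    # Logistic loss l(z) = log(1 + exp(-z)): convex, but not symmetric.
    return math.log(1.0 + math.exp(-z))

# Symmetry check: l(z) + l(-z) should be constant in z for a symmetric loss.
for z in [-2.0, -0.5, 0.0, 1.0, 3.0]:
    print(f"z={z:+.1f}  sigmoid sum: {sigmoid_loss(z) + sigmoid_loss(-z):.4f}  "
          f"logistic sum: {logistic_loss(z) + logistic_loss(-z):.4f}")
```

Running this prints a constant sum of 1.0000 for the sigmoid loss at every z, while the logistic-loss sum grows with |z|, showing it violates the symmetry condition.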
| Year | Venue | DocType |
|---|---|---|
| 2021 | 24th International Conference on Artificial Intelligence and Statistics (AISTATS) | Conference |

| Volume | ISSN | Citations |
|---|---|---|
| 130 | 2640-3498 | 0 |

| PageRank | References | Authors |
|---|---|---|
| 0.34 | 0 | 3 |
| Name | Order | Citations | PageRank |
|---|---|---|---|
| Voot Tangkaratt | 1 | 46 | 9.37 |
| Nontawat Charoenphakdee | 2 | 2 | 4.41 |
| Masashi Sugiyama | 3 | 3353 | 264.24 |