Abstract |
---|
Within the realm of service robotics, researchers have placed a great amount of effort into learning, understanding, and representing motions as manipulations for task execution by robots. The task of robot learning and problem-solving is very broad, as it integrates a variety of tasks such as object detection, activity recognition, task/motion planning, localization, knowledge representation and retrieval, and the intertwining of perception/vision and machine learning techniques. In this paper, we focus solely on knowledge representations, and notably on how knowledge is typically gathered, represented, and reproduced to solve problems, as done by researchers in the past decades. In accordance with the definition of knowledge representations, we discuss the key distinction between such representations and useful learning models that have been extensively introduced and studied in recent years, such as machine learning, deep learning, probabilistic modeling, and semantic graphical structures. Along with an overview of such tools, we discuss the problems that have existed in robot learning and the technologies or developments (if any) that have contributed to solving them. Finally, we discuss key principles that should be considered when designing an effective knowledge representation. |
Year | DOI | Venue |
---|---|---|
2019 | 10.1016/j.robot.2019.03.005 | Robotics and Autonomous Systems |
Keywords | Field | DocType
---|---|---
Knowledge representation, Robot learning, Task planning, Domestic robots, Service robotics | Motion planning, Robot learning, Computer vision, Knowledge representation and reasoning, Activity recognition, Computer science, Human–computer interaction, Artificial intelligence, Probabilistic logic, Deep learning, Robot, Robotics | Journal

Volume | ISSN | Citations
---|---|---
118 | 0921-8890 | 2

PageRank | References | Authors
---|---|---
0.41 | 0 | 2
Name | Order | Citations | PageRank |
---|---|---|---
David Paulius | 1 | 6 | 2.16 |
Yu Sun | 2 | 208 | 35.82 |