Abstract |
---|
Deep learning on point clouds has attracted much research interest in recent years, largely due to promising results on a number of challenging benchmarks such as 3D shape recognition and scene semantic segmentation. In many realistic settings, however, snapshots of the environment are taken from a single view, which captures only part of the scene owing to the restricted field of view of commodity cameras. Semantic understanding of such partial 3D point clouds remains a challenging task. In this work, we propose a processing approach for 3D point cloud data based on a multiview representation of existing 360° point clouds. By fusing the original 360° point clouds with their corresponding 3D multiview representations as input data, a neural network can recognize partial point sets while also improving performance on complete point sets, yielding overall increases of 31.9% and 4.3% in segmentation accuracy for partial and complete scene semantic understanding, respectively. The method can also be applied in a wider 3D recognition context, such as 3D part segmentation. |
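The abstract does not specify how the multiview representation is produced; a minimal sketch of the general idea is to slice a 360° point cloud into single-view partial clouds by angular field-of-view culling and fuse them with the complete cloud as training samples. The function names (`partial_view`, `multiview_fusion`) and the culling strategy below are illustrative assumptions, not the paper's actual pipeline:

```python
import numpy as np

def partial_view(points, view_azimuth_deg, fov_deg=90.0):
    """Keep only points inside a horizontal field of view centred on
    view_azimuth_deg, simulating a single-view snapshot of a 360° scene.
    (Illustrative angular culling; the paper's exact rendering is not
    described in the abstract.)"""
    # Azimuth of each point around the vertical axis, in [-180, 180].
    azimuth = np.degrees(np.arctan2(points[:, 1], points[:, 0]))
    # Signed angular distance to the view direction, wrapped to [-180, 180).
    diff = (azimuth - view_azimuth_deg + 180.0) % 360.0 - 180.0
    half = fov_deg / 2.0
    return points[(diff >= -half) & (diff < half)]

def multiview_fusion(points, n_views=4, fov_deg=90.0):
    """Fuse the complete 360° cloud with its partial multiview slices,
    yielding one sample per view plus the original cloud."""
    views = [partial_view(points, a, fov_deg)
             for a in np.linspace(0.0, 360.0, n_views, endpoint=False)]
    return [points] + views

# Toy scene: 1000 random points around the origin.
rng = np.random.default_rng(0)
cloud = rng.normal(size=(1000, 3))
samples = multiview_fusion(cloud, n_views=4, fov_deg=90.0)
# Four non-overlapping 90° views partition the full cloud.
assert sum(len(v) for v in samples[1:]) == len(cloud)
```

With non-overlapping views this amounts to data augmentation with partial observations, which is one plausible reading of how training on fused complete and partial inputs would improve recognition of partial point sets.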
Year | Venue | DocType | Volume | Citations | PageRank | References | Authors
---|---|---|---|---|---|---|---
2018 | arXiv: Computer Vision and Pattern Recognition | Journal | abs/1812.01712 | 0 | 0.34 | 8 | 6
Name | Order | Citations | PageRank |
---|---|---|---
Ye Zhu | 1 | 3 | 4.12 |
Sven Ewan Shepstone | 2 | 18 | 3.69 |
Pablo Martínez-Nuevo | 3 | 0 | 0.68 |
Miklas S Kristoffersen | 4 | 12 | 3.06 |
Fabien Moutarde | 5 | 54 | 15.26 |
Zhuang Fu | 6 | 0 | 0.34 |