Title
VPU: A Video-Based Point Cloud Upsampling Framework
Abstract
In this work, we propose a new patch-based framework called VPU for the video-based point cloud upsampling task that effectively exploits temporal dependency among multiple consecutive point cloud frames, where each frame consists of a set of unordered, sparse and irregular 3D points. Rather than adopting the sophisticated motion estimation strategies used in video analysis, we propose a new spatio-temporal aggregation (STA) module to effectively extract, align and aggregate rich local geometric clues from consecutive frames at the feature level. By more reliably summarizing spatio-temporally consistent and complementary knowledge from multiple frames into the resultant local structural features, our method better infers the local geometry distributions at the current frame. In addition, our STA module can be readily incorporated into various existing single frame-based point upsampling methods (e.g., PU-Net, MPU, PU-GAN and PU-GCN). Comprehensive experiments on multiple point cloud sequence datasets demonstrate that our video-based point cloud upsampling framework achieves substantial performance improvement over its single frame-based counterparts.
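The abstract describes aggregating local geometric features from consecutive frames at the feature level rather than via explicit motion estimation. The paper itself does not specify the operations inside the STA module here, so the following is only an illustrative sketch, under the assumption of a simple scheme: for each point in the current frame, gather the features of its k nearest neighbors in an adjacent frame and fuse them with the point's own features by max-pooling. The function name and the fusion rule are hypothetical, not the authors' design.

```python
import numpy as np

def aggregate_temporal_features(curr_pts, curr_feats, prev_pts, prev_feats, k=3):
    """Hypothetical sketch of feature-level temporal aggregation.

    curr_pts:   (N, 3) points of the current frame
    curr_feats: (N, C) per-point features of the current frame
    prev_pts:   (M, 3) points of an adjacent frame
    prev_feats: (M, C) per-point features of the adjacent frame
    Returns (N, C) fused features for the current frame.
    """
    # Pairwise squared distances between current and previous points: (N, M)
    d2 = ((curr_pts[:, None, :] - prev_pts[None, :, :]) ** 2).sum(axis=-1)
    # Indices of the k nearest previous-frame points for each current point
    nn_idx = np.argsort(d2, axis=1)[:, :k]        # (N, k)
    neighbor_feats = prev_feats[nn_idx]           # (N, k, C)
    # Max-pool over the temporal neighborhood, then fuse with own features
    pooled = neighbor_feats.max(axis=1)           # (N, C)
    return np.maximum(curr_feats, pooled)
```

In practice a learned alignment (e.g., attention weights or learned offsets over the cross-frame neighborhood) would replace the hard nearest-neighbor lookup and max fusion used here for clarity.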
Year
2022
DOI
10.1109/TIP.2022.3166627
Venue
IEEE TRANSACTIONS ON IMAGE PROCESSING
Keywords
Point cloud compression, Three-dimensional displays, Feature extraction, Task analysis, Graphics processing units, Image reconstruction, Cloud computing, Point cloud sequence, point cloud upsampling, spatial-temporal aggregation
DocType
Journal
Volume
31
ISSN
1057-7149
Citations
0
PageRank
0.34
References
0
Authors
4
Name            Order  Citations  PageRank
Kaisiyuan Wang  1      0          0.34
Lu Sheng        2      0          0.34
Shuhang Gu      3      701        28.25
Dong Xu         4      7616       291.96