Title
General Object Pose Transformation Network from Unpaired Data
Abstract
Object pose transformation is a challenging task, yet most existing pose transformation networks focus only on synthesizing humans. These methods rely either on keypoint information or on manual annotations of paired target-pose images for training. However, collecting such paired data is laborious, and keypoint cues are inapplicable to general objects. In this paper, we address the novel problem of general object pose transformation from unpaired data. Given a source image of an object that provides appearance information and a desired pose image as reference, in the absence of paired examples, we produce a depiction of the object in the specified pose while retaining the appearance of both the object and the background. Specifically, to preserve the source information, we propose an adversarial network with a \({\textbf {S}}\)patial-\({\textbf {S}}\)tructural (SS) block and a \({\textbf {T}}\)exture-\({\textbf {S}}\)tyle-\({\textbf {C}}\)olor (TSC) block after a correlation matching module, which encourages the output to be semantically consistent with the target pose image while contextually related to the source image. In addition, our network can be extended to multi-object and cross-category pose transformation. Extensive experiments demonstrate the effectiveness of our method, which creates more realistic images than recent approaches in terms of image quality. Moreover, we show the practicality of our method in several applications.
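The abstract does not detail the correlation matching module, but modules of this kind are commonly implemented as soft attention over pairwise feature correlations: each position in the desired-pose feature map gathers appearance from the most correlated source positions. The sketch below is a generic illustration of that idea, not the authors' implementation; the function name, argument shapes, and the temperature `tau` are assumptions.

```python
import numpy as np

def correlation_warp(src_feat, tgt_feat, src_val, tau=0.01):
    """Warp source values to the target layout via feature correlation.

    src_feat, tgt_feat: (C, H*W) L2-normalized feature maps of the
    source image and the desired-pose image; src_val: (C', H*W) values
    (e.g. appearance features) carried from the source. This is a
    hypothetical sketch of a correlation matching module, not the
    paper's architecture.
    """
    # Pairwise correlation between every target and source position.
    corr = tgt_feat.T @ src_feat                  # (H*W, H*W)
    # Soft attention over source positions for each target position.
    attn = np.exp(corr / tau)
    attn /= attn.sum(axis=1, keepdims=True)
    # Each target position gathers appearance from correlated sources.
    return src_val @ attn.T                       # (C', H*W)
```

With a small temperature, the attention approaches a hard nearest-neighbor match in feature space, so appearance is transported to wherever the target pose places the semantically corresponding part.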
Year
2022
DOI
10.1007/978-3-031-20068-7_17
Venue
European Conference on Computer Vision
Keywords
Pose transformation, Adversarial network, Semantically, Contextually
DocType
Conference
Citations
0
PageRank
0.34
References
0
Authors
4
Name          Order  Citations  PageRank
Yukun Su      1      0          0.34
Guosheng Lin  2      688        33.91
Ruizhou Sun   3      0          1.01
Wu Qingyao    4      259        33.46