Title
Cross Attention Based Style Distribution for Controllable Person Image Synthesis.
Abstract
The controllable person image synthesis task enables a wide range of applications through explicit control over body pose and appearance. In this paper, we propose a cross attention based style distribution module that computes cross attention between the source semantic styles and the target pose for pose transfer. The module intentionally selects the style represented by each semantic and distributes it according to the target pose. The attention matrix in cross attention expresses the dynamic similarities between the target pose and the source styles for all semantics. It can therefore be utilized to route the color and texture from the source image, and it is further constrained by the target parsing map to achieve a clearer objective. At the same time, to encode the source appearance accurately, self attention among the different semantic styles is also added. The effectiveness of our model is validated quantitatively and qualitatively on pose transfer and virtual try-on tasks. Code is available at https://github.com/xyzhouo/CASD.
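The abstract describes the module at a high level rather than with equations, so the snippet below is a minimal, illustrative PyTorch sketch of how such a cross attention based style distribution could look. It is not the authors' implementation (the official code is in the CASD repository linked above); the class name CrossAttentionStyleDistribution, the tensor shapes, the head count, and the use of nn.MultiheadAttention for the self-attention step are all assumptions made for illustration, and the parsing-map constraint on the attention matrix is omitted.

```python
# Minimal sketch of the idea in the abstract (not the authors' code):
# per-semantic style vectors from the source image are first mixed by
# self attention, then distributed over the target-pose feature map via
# cross attention. All names and dimensions here are illustrative.
import torch
import torch.nn as nn


class CrossAttentionStyleDistribution(nn.Module):
    def __init__(self, dim=256, num_heads=4):
        super().__init__()
        # Self attention among the K per-semantic style vectors of the source.
        self.self_attn = nn.MultiheadAttention(dim, num_heads=num_heads, batch_first=True)
        # Cross attention: target-pose features are the queries,
        # the (self-attended) semantic styles are the keys/values.
        self.to_q = nn.Linear(dim, dim)
        self.to_k = nn.Linear(dim, dim)
        self.to_v = nn.Linear(dim, dim)
        self.scale = dim ** -0.5

    def forward(self, pose_feat, styles):
        # pose_feat: (B, C, H, W) target-pose feature map
        # styles:    (B, K, C)  one style vector per semantic region of the source
        B, C, H, W = pose_feat.shape
        styles, _ = self.self_attn(styles, styles, styles)        # encode source appearance

        q = self.to_q(pose_feat.flatten(2).transpose(1, 2))       # (B, HW, C)
        k = self.to_k(styles)                                      # (B, K, C)
        v = self.to_v(styles)                                      # (B, K, C)

        # Attention matrix: similarity of every target-pose location
        # to each source semantic style.
        attn = torch.softmax((q @ k.transpose(1, 2)) * self.scale, dim=-1)  # (B, HW, K)
        out = attn @ v                                             # (B, HW, C)
        return out.transpose(1, 2).reshape(B, C, H, W), attn


# Usage with dummy tensors (shapes are assumptions):
module = CrossAttentionStyleDistribution(dim=256)
pose_feat = torch.randn(2, 256, 32, 16)   # target-pose features
styles = torch.randn(2, 8, 256)           # 8 per-semantic source styles
out, attn = module(pose_feat, styles)     # out: (2, 256, 32, 16), attn: (2, 512, 8)
```

The returned attn tensor holds, for every target-pose location, a weight over the source semantics; this is the quantity that, per the abstract, is further constrained by the target parsing map in the full model.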
Year
2022
DOI
10.1007/978-3-031-19784-0_10
Venue
European Conference on Computer Vision
Keywords
Person image synthesis, Pose transfer, Virtual try-on
DocType
Conference
Citations
0
PageRank
0.34
References
0
Authors
6
Name            Order    Citations    PageRank
Xinyue Zhou     1        0            0.34
Mingyu Yin      2        0            0.68
Xinyuan Chen    3        0            0.34
Li Sun          4        7            7.20
Changxin Gao    5        188          38.01
Qingli Li       6        8            6.68