Abstract |
---|
In this paper, we present the texture reformer, a fast and universal neural-based framework for interactive texture transfer with user-specified guidance. The challenges lie in three aspects: 1) the diversity of tasks, 2) the simplicity of guidance maps, and 3) the execution efficiency. To address these challenges, our key idea is to use a novel feed-forward multi-view and multi-stage synthesis procedure consisting of I) a global view structure alignment stage, II) a local view texture refinement stage, and III) a holistic effect enhancement stage to synthesize high-quality results with coherent structures and fine texture details in a coarse-to-fine fashion. In addition, we introduce a novel learning-free view-specific texture reformation (VSTR) operation with a new semantic map guidance strategy to achieve more accurate semantic-guided and structure-preserved texture transfer. The experimental results on a variety of application scenarios demonstrate the effectiveness and superiority of our framework. Compared with state-of-the-art interactive texture transfer algorithms, it not only achieves higher-quality results but, more remarkably, is also 2-5 orders of magnitude faster. |
Year | Venue | Keywords
---|---|---
2022 | AAAI Conference on Artificial Intelligence | Computer Vision (CV)

DocType | Citations | PageRank
---|---|---
Conference | 0 | 0.34

References | Authors
---|---
0 | 7
Name | Order | Citations | PageRank
---|---|---|---
Zhizhong Wang | 1 | 3 | 4.12 |
Lei Zhao | 2 | 6 | 3.82 |
Haibo Chen | 3 | 0 | 1.69 |
Ailin Li | 4 | 0 | 4.39 |
Zhiwen Zuo | 5 | 3 | 3.11 |
Wei Xing | 6 | 64 | 16.54 |
Dongming Lu | 7 | 7 | 5.55 |