Abstract
---

Chinese character inpainting is a challenging task in which large missing regions must be filled with visually and semantically realistic content. Existing methods generally produce pseudo or ambiguous characters due to a lack of semantic information. Given the key observation that Chinese characters carry both visual glyph representations and intrinsic contextual semantics, we tackle the challenge of visually similar Chinese characters by modeling the underlying regularities between glyph and semantic information. We propose a semantics-enhanced generative framework for Chinese character inpainting, in which a global semantic supervising module (GSSM) is introduced to constrain contextual semantics. In particular, a sentence embedding is used to guide the encoding of continuous contextual characters. The method not only generates realistic Chinese characters but also explicitly uses context as a reference during network training to eliminate ambiguity. The proposed method is evaluated on both handwritten and printed Chinese characters with various masks. Experiments show that the method successfully predicts missing character information without any mask input, and achieves significant sentence-level results in a wide variety of scenes, benefiting from global semantic supervision.
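The abstract's central idea, supervising the inpainted character with the sentence embedding of its surrounding context, can be sketched as a simple alignment loss. This is a minimal illustration only: the function name, the cosine formulation, and both embedding vectors are assumptions for the example, not the paper's actual GSSM architecture.

```python
import numpy as np

def semantic_supervising_loss(pred_emb, ctx_emb):
    """Hypothetical global semantic supervising loss: penalize the
    angular distance between the embedding of the inpainted character
    (pred_emb) and the sentence embedding of its continuous context
    (ctx_emb).  Returns a value in [0, 2]; 0 means the generated
    character is semantically consistent with its context."""
    cos = np.dot(pred_emb, ctx_emb) / (
        np.linalg.norm(pred_emb) * np.linalg.norm(ctx_emb) + 1e-8)
    return 1.0 - cos

# A semantically consistent prediction (embeddings aligned) incurs
# near-zero loss; an unrelated one is penalized.
e = np.array([0.2, 0.5, 0.1])
print(semantic_supervising_loss(e, e))  # near 0
```

In a full framework such a term would be added to the generator's reconstruction and adversarial losses, so that contextually ambiguous but visually plausible characters are discouraged during training.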
Year | DOI | Venue |
---|---|---|
2021 | 10.1145/3474085.3475333 | International Multimedia Conference |
DocType | Citations | PageRank
---|---|---|
Conference | 0 | 0.34
References | Authors
---|---|
0 | 4
Name | Order | Citations | PageRank |
---|---|---|---|
Fei Chen | 1 | 9 | 5.85
Gang Pan | 2 | 0 | 1.35 |
Di Sun | 3 | 7 | 5.86 |
Zhang Jiawan | 4 | 369 | 46.66 |