| Abstract |
|---|
| Image captioning, which aims to generate natural sentences describing image content, has received significant attention and seen remarkable recent progress. The problem is nevertheless non-trivial for cross-modal training due to two challenges: 1) image detectors often attend only to salient regions of an image and seldom exploit the rich background context; 2) the language model is highly vulnerable to small but intentional perturbation attacks. To alleviate these issues, we propose the Noise Augmented Double-stream Graph Convolutional Network (NADGCN), which exploits the additional background context and enhances the generalization of the language model. Technically, NADGCN capitalizes on a grid-stream GCN as a supplement to the region stream, following the recipe that a rescaled grid graph can encode relationships across grid areas over the full image rather than over salient areas only. Moreover, we devise a noise module and integrate it into the double-stream GCN to augment the capability of the basic generator. This noise module introduces adaptive noise into the Recurrent Neural Network (RNN) and is learnt by treating the module as an agent with a stochastic Gaussian policy in Reinforcement Learning (RL). Extensive experiments on MSCOCO validate the design of the grid-stream GCN and the noise agent, and our generator clearly outperforms competitive baselines. |
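The abstract's noise module, an agent that injects adaptive Gaussian noise into the RNN state and is trained with a stochastic Gaussian policy, can be sketched as follows. The linear parameterization of the noise mean and std (`mu_w`, `sigma_w`), the softplus used to keep the std positive, and the REINFORCE-style log-probability are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

def gaussian_policy_noise(h, mu_w, sigma_w):
    """Perturb an RNN hidden state h with noise from a Gaussian policy.

    A state-dependent mean and std are produced by hypothetical linear
    maps; the sampled noise is added to h, and the log-probability of
    the sample is returned for a policy-gradient (RL) update.
    """
    mu = h @ mu_w                           # state-dependent noise mean
    sigma = np.log1p(np.exp(h @ sigma_w))   # softplus keeps the std positive
    noise = mu + sigma * rng.standard_normal(h.shape)
    # Gaussian log-density of the sampled noise, summed over dimensions
    log_prob = (-0.5 * ((noise - mu) / sigma) ** 2
                - np.log(sigma) - 0.5 * np.log(2 * np.pi)).sum()
    return h + noise, log_prob

d = 4                                       # toy hidden-state size
h = rng.standard_normal(d)
h_noisy, logp = gaussian_policy_noise(h, 0.01 * np.eye(d), 0.01 * np.eye(d))
```

In an RL view, `log_prob` would be scaled by a caption-level reward (e.g. CIDEr) to form the policy-gradient signal for the noise parameters.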
| Year | DOI | Venue |
|---|---|---|
| 2021 | 10.1109/TCSVT.2020.3036860 | IEEE Transactions on Circuits and Systems for Video Technology |

| Keywords | DocType | Volume |
|---|---|---|
| Captioning, graph convolutional networks, adaptive noise | Journal | 31 |

| Issue | ISSN | Citations |
|---|---|---|
| 8 | 1051-8215 | 7 |

| PageRank | References | Authors |
|---|---|---|
| 0.46 | 11 | 5 |