Abstract |
---|
We propose a novel extension of the encoder-decoder framework, called a review network. The review network is generic and can enhance any existing encoder-decoder model: in this paper, we consider RNN decoders with both CNN and RNN encoders. The review network performs a number of review steps with attention mechanism on the encoder hidden states, and outputs a thought vector after each review step; the thought vectors are used as the input of the attention mechanism in the decoder. We show that conventional encoder-decoders are a special case of our framework. Empirically, we show that our framework improves over state-of-the-art encoder-decoder systems on the tasks of image captioning and source code captioning. |
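The reviewer described in the abstract can be sketched as a loop of attentive reads over the encoder hidden states, each producing a thought vector. The following is a minimal NumPy sketch, not the paper's implementation: the weight matrix, the zero-initialized thought vector, and the simple bilinear scoring are illustrative placeholders, and the learned recurrence of the actual review network is omitted.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D score vector.
    e = np.exp(x - x.max())
    return e / e.sum()

def review_steps(H, num_steps, rng=None):
    """Illustrative reviewer: at each step, attend over the encoder hidden
    states H (shape n x d) using the previous thought vector as the query,
    and emit a new thought vector. W is a random placeholder, not a
    parameter learned as in the paper."""
    rng = np.random.default_rng(0) if rng is None else rng
    n, d = H.shape
    W = 0.1 * rng.standard_normal((d, d))  # placeholder attention weights
    f = np.zeros(d)                        # initial thought vector (assumption)
    thoughts = []
    for _ in range(num_steps):
        scores = H @ (W @ f)               # one score per encoder state
        alpha = softmax(scores)            # attention distribution over H
        f = alpha @ H                      # attentive read -> new thought vector
        thoughts.append(f)
    # The stacked thought vectors are what the decoder's attention
    # mechanism would attend over, in place of the raw encoder states.
    return np.stack(thoughts)              # shape: (num_steps, d)
```

In the paper's framing, setting the number of review steps to zero and letting the decoder attend directly to `H` recovers the conventional attentive encoder-decoder as a special case.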
Year | Venue | DocType |
---|---|---|
2016 | NIPS | Conference |
Citations | PageRank | References |
0 | 0.34 | 0 |
Authors |
---|
6 |
Name | Order | Citations | PageRank |
---|---|---|---|
Zhilin Yang | 1 | 453 | 22.28 |
Ye Yuan | 2 | 7 | 2.84 |
Yuexin Wu | 3 | 99 | 5.78 |
Cohen, William W. | 4 | 0 | 0.34 |
Ruslan Salakhutdinov | 5 | 12190 | 764.15 |
Salakhutdinov, Russ R. | 6 | 0 | 0.34 |