Title
Technical Report: Image Captioning with Semantically Similar Images
Abstract
This report presents our submission to the MS COCO Captioning Challenge 2015. The method uses Convolutional Neural Network activations as an embedding to find semantically similar images, and from these images the most typical caption is selected based on unigram frequencies. Although the method received low scores on automated evaluation metrics and in human-assessed average correctness, it is competitive in the proportion of captions that pass the Turing test and that are judged equal to or better than human captions.
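As a rough illustration of the retrieval-and-selection idea described in the abstract, the sketch below assumes precomputed CNN activations for a reference image set, cosine similarity for nearest-neighbour retrieval, and a mean-unigram-frequency score as the notion of the "most typical" caption; the neighbourhood size, the scoring rule, and all function and variable names are illustrative assumptions rather than the authors' actual implementation.

    import numpy as np
    from collections import Counter

    def most_typical_caption(query_feat, db_feats, db_captions, k=5):
        """Illustrative sketch (not the paper's code): retrieve the k images whose
        CNN activations are closest to the query, pool their reference captions,
        and return the caption whose words are most typical of that pool."""
        # Cosine similarity between the query activation and every reference image.
        q = query_feat / np.linalg.norm(query_feat)
        d = db_feats / np.linalg.norm(db_feats, axis=1, keepdims=True)
        neighbours = np.argsort(-(d @ q))[:k]

        # Candidate captions are all captions attached to the retrieved images.
        candidates = [c for i in neighbours for c in db_captions[i]]

        # Unigram frequencies computed over the pooled candidate captions.
        unigrams = Counter(w for c in candidates for w in c.lower().split())

        # A caption's typicality: mean frequency of its unigrams in the pool
        # (assumed scoring rule; the abstract only says "unigram frequencies").
        def typicality(caption):
            words = caption.lower().split()
            return sum(unigrams[w] for w in words) / max(len(words), 1)

        return max(candidates, key=typicality)

    # Example usage: db_feats is an (N, D) array of CNN activations and
    # db_captions a list of N caption lists for the reference images.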
Year
2015
Venue
CoRR
Field
Closed captioning, Embedding, Convolutional neural network, Turing test, Computer science, Correctness, Speech recognition, Technical report
DocType
Journal
Volume
abs/1506.03995
Citations
1
PageRank
0.41
References
4
Authors
3
Name            Order   Citations   PageRank
Martin Kolář    1       1           0.41
Michal Hradis   2       132         14.19
Pavel Zemcik    3       66          7.58