Title
Evaluating the Representational Hub of Language and Vision Models.
Abstract
The multimodal models used in the emerging field at the intersection of computational linguistics and computer vision implement the bottom-up processing of the 'Hub and Spoke' architecture proposed in cognitive science to represent how the brain processes and combines multi-sensory inputs. In particular, the Hub is implemented as a neural network encoder. We investigate the effect on this encoder of various vision-and-language tasks proposed in the literature: visual question answering, visual reference resolution, and visually grounded dialogue. To measure the quality of the representations learned by the encoder, we use two kinds of analyses. First, we evaluate the encoder pre-trained on the different vision-and-language tasks on an existing diagnostic task designed to assess multimodal semantic understanding. Second, we carry out a battery of analyses aimed at studying how the encoder merges and exploits the two modalities.
Year: 2019
Venue: arXiv: Computation and Language
Field: Cognitive science, Computer science
DocType: Journal
Volume: abs/1904.06038
Citations: 0
PageRank: 0.34
References: 0
Authors: 4
Name                  Order  Citations  PageRank
Ravi Shekhar          1      34         3.82
Ece Takmaz            2      0          0.68
Raquel Fernández      3      147        27.76
Raffaella Bernardi    4      380        38.05