ACT-Thor: A Controlled Benchmark for Embodied Action Understanding in Simulated Environments. | 0 | 0.34 | 2022 |
Looking for Confirmations - An Effective and Human-Like Visual Dialogue Strategy. | 0 | 0.34 | 2021 |
Linguistic Issues Behind Visual Question Answering. | 0 | 0.34 | 2021 |
Artificial Intelligence Models Do Not Ground Negation, Humans Do. GuessWhat?! Dialogues as a Case Study. | 0 | 0.34 | 2021 |
The Interplay of Task Success and Dialogue Quality: An in-depth Evaluation in Task-Oriented Visual Dialogues. | 0 | 0.34 | 2021 |
"I've Seen Things You People Wouldn't Believe" - Hallucinating Entities in GuessWhat?! | 0 | 0.34 | 2021 |
Grounded and Ungrounded Referring Expressions in Human Dialogues - Language Mirrors Different Grounding Conditions. | 0 | 0.34 | 2020 |
Overprotective Training Environments Fall Short at Testing Time - Let Models Contribute to Their Own Training. | 0 | 0.34 | 2020 |
Be Different to Be Better! A Benchmark to Leverage the Complementarity of Language and Vision. | 0 | 0.34 | 2020 |
Grounding Dialogue History - Strengths and Weaknesses of Pre-trained Transformers. | 0 | 0.34 | 2020 |
Which Turn do Neural Models Exploit the Most to Solve GuessWhat? Diving into the Dialogue History Encoding in Transformers and LSTMs. | 0 | 0.34 | 2020 |
Jointly Learning to See, Ask, Decide when to Stop, and then GuessWhat. | 0 | 0.34 | 2019 |
Representation of sentence meaning (A JNLE Special Issue). | 0 | 0.34 | 2019 |
Psycholinguistics meets Continual Learning: Measuring Catastrophic Forgetting in Visual Question Answering. | 0 | 0.34 | 2019 |
Beyond task success: A closer look at jointly learning to see, ask, and GuessWhat. | 0 | 0.34 | 2019 |
Evaluating the Representational Hub of Language and Vision Models. | 0 | 0.34 | 2019 |
Measuring Catastrophic Forgetting in Visual Question Answering. | 0 | 0.34 | 2019 |
A Distributional Study of Negated Adjectives and Antonyms. | 0 | 0.34 | 2018 |
Grounded Textual Entailment. | 0 | 0.34 | 2018 |
Some of Them Can Be Guessed! Exploring the Effect of Linguistic Context in Predicting Quantifiers. | 0 | 0.34 | 2018 |
Comparatives, Quantifiers, Proportions: a Multi-Task Model for the Learning of Quantities from Vision. | 1 | 0.36 | 2018 |
Ask No More: Deciding when to guess in referential visual dialogue. | 0 | 0.34 | 2018 |
Jointly Learning to See, Ask, and GuessWhat. | 0 | 0.34 | 2018 |
Can You See the (Linguistic) Difference? Exploring Mass/Count Distinction in Vision. | 0 | 0.34 | 2017 |
Be Precise or Fuzzy: Learning the Meaning of Cardinals and Quantifiers from Vision. | 0 | 0.34 | 2017 |
FOIL it! Find One Mismatch between Image and Language Caption. | 1 | 0.40 | 2017 |
Automatic Description Generation from Images: A Survey of Models, Datasets, and Evaluation Measures (Extended Abstract). | 1 | 0.38 | 2017 |
Vision and Language Integration: Moving beyond Objects. | 0 | 0.34 | 2017 |
Pay Attention to Those Sets! Learning Quantification from Images. | 0 | 0.34 | 2017 |
The LAMBADA Dataset: Word Prediction Requiring a Broad Discourse Context. | 19 | 0.99 | 2016 |
There Is No Logical Negation Here, But There Are Alternatives: Modeling Conversational Negation with Distributional Semantics. | 0 | 0.34 | 2016 |
Building a Bagpipe with a Bag and a Pipe: Exploring Conceptual Combination in Vision. | 2 | 0.36 | 2016 |
Imparare a Quantificare Guardando (Learning to Quantify by Watching). | 0 | 0.34 | 2016 |
SICK through the SemEval glasses. Lesson learned from the evaluation of compositional distributional semantic models on full sentences through semantic relatedness and textual entailment. | 6 | 0.51 | 2016 |
"Look, some Green Circles!": Learning to Quantify from Images. | 6 | 0.56 | 2016 |
Automatic Description Generation from Images: A Survey of Models, Datasets, and Evaluation Measures. | 59 | 1.70 | 2016 |
Unveiling the Dreams of Word Embeddings: Towards Language-Driven Image Generation. | 1 | 0.40 | 2015 |
Distributional Semantics in Use | 0 | 0.34 | 2015 |
A SICK cure for the evaluation of compositional distributional semantic models. | 90 | 3.72 | 2014 |
Distributional Semantics: A Montagovian View. | 0 | 0.34 | 2014 |
SemEval-2014 Task 1: Evaluation of Compositional Distributional Semantic Models on Full Sentences through Semantic Relatedness and Textual Entailment. | 86 | 3.92 | 2014 |
Coloring Objects: Adjective-Noun Visual Semantic Compositionality. | 3 | 0.42 | 2014 |
TUHOI: Trento Universal Human Object Interaction Dataset. | 0 | 0.34 | 2014 |
Exploiting Language Models to Recognize Unseen Actions. | 10 | 0.51 | 2013 |
CCG Categories for Distributional Semantic Models. | 0 | 0.34 | 2013 |
Exploiting Language Models for Visual Recognition. | 4 | 0.48 | 2013 |
A relatedness benchmark to test the role of determiners in compositional distributional semantics. | 8 | 0.61 | 2013 |
Entailment above the Word Level in Distributional Semantics. | 17 | 0.98 | 2012 |
Continuation Semantics for the Lambek–Grishin Calculus. | 8 | 0.68 | 2010 |