Name: YVETTE GRAHAM
Affiliation: Dublin City Univ, Sch Comp, NCLT, Dublin 9, Ireland
Papers: 45
Collaborators: 116
Citations: 197
PageRank: 21.96
Referers: 399
Referees: 665
References: 401
Title | Citations | PageRank | Year
Achieving Reliable Human Assessment of Open-Domain Dialogue Systems | 0 | 0.34 | 2022
Memento: A Prototype Lifelog Search Engine for LSC'21 | 0 | 0.34 | 2021
Improving Unsupervised Question Answering via Summarization-Informed Question Generation. | 0 | 0.34 | 2021
Contrasting Human Opinion of Non-factoid Question Answering with Automatic Evaluation | 0 | 0.34 | 2020
Assessing Human-Parity in Machine Translation on the Segment Level. | 0 | 0.34 | 2020
Findings of the 2020 Conference on Machine Translation (WMT20). | 0 | 0.34 | 2020
Improving Document-Level Sentiment Analysis with User and Product Context | 0 | 0.34 | 2020
TRECVID 2020: A comprehensive campaign for evaluating video retrieval tasks across multiple application domains | 0 | 0.34 | 2020
Incorporating Context and Knowledge for Better Sentiment Analysis of Narrative Text. | 0 | 0.34 | 2020
Exploring the Impact of Training Data Bias on Automatic Generation of Video Captions. | 0 | 0.34 | 2019
Results of the WMT19 Metrics Shared Task: Segment-Level and Strong MT Systems Pose Big Challenges. | 0 | 0.34 | 2019
The Second Multilingual Surface Realisation Shared Task (SR'19) - Overview and Evaluation Results. | 1 | 0.37 | 2019
Proceedings of the 2nd Workshop on Multilingual Surface Realisation, MSR@EMNLP-IJCNLP 2019, Hong Kong, China, November 3, 2019. | 0 | 0.34 | 2019
Translating Pro-Drop Languages with Reconstruction Models. | 3 | 0.38 | 2018
Results of the WMT18 Metrics Shared Task: Both characters and embeddings achieve good performance. | 0 | 0.34 | 2018
TRECVID 2018 - Benchmarking Video Activity Detection, Video Captioning and Matching, Video Storytelling Linking and Video Search. | 0 | 0.34 | 2018
Evaluation Of Automatic Video Captioning Using Direct Assessment | 3 | 0.38 | 2017
Can machine translation systems be evaluated by the crowd alone | 17 | 0.90 | 2017
Results of the WMT16 Metrics Shared Task. | 13 | 0.81 | 2017
Blend: a Novel Combined MT Metric Based on Direct Assessment - CASICT-DCU submission to WMT17 Metrics Task. | 0 | 0.34 | 2017
Further Investigation into Reference Bias in Monolingual Evaluation of Machine Translation. | 0 | 0.34 | 2017
Improving Evaluation of Document-level Machine Translation Quality Estimation. | 0 | 0.34 | 2017
Achieving Accurate Conclusions in Evaluation of Automatic Machine Translation Metrics. | 5 | 0.44 | 2016
Is all that Glitters in Machine Translation Quality Estimation really Gold? | 0 | 0.34 | 2016
MaxSD: A Neural Machine Translation Evaluation Metric Optimized by Maximizing Similarity Distance | 0 | 0.34 | 2016
Findings of the 2016 Conference on Machine Translation. | 55 | 2.03 | 2016
Accurate Evaluation of Segment-level Machine Translation Metrics. | 24 | 1.22 | 2015
Re-evaluating Automatic Summarization with BLEU and 192 Shades of ROUGE | 1 | 0.36 | 2015
Improving Evaluation Of Machine Translation Quality Estimation | 10 | 0.69 | 2015
Testing for Significance of Increased Correlation with Human Judgment. | 7 | 0.65 | 2014
Is Machine Translation Getting Better over Time? | 12 | 0.75 | 2014
Randomized Significance Tests in Machine Translation | 5 | 0.64 | 2014
Crowd-Sourcing of Human Judgments of Machine Translation Fluency | 0 | 0.34 | 2013
Umelb: Cross-lingual Textual Entailment with Word Alignment and String Similarity Features | 1 | 0.36 | 2013
Continuous Measurement Scales in Human Evaluation of Machine Translation | 17 | 1.02 | 2013
A Dependency-Constrained Hierarchical Model with Moses | 0 | 0.34 | 2013
Continuous Measurement Scales in Human Evaluation of Machine Translation. | 0 | 0.34 | 2013
Measurement of Progress in Machine Translation | 4 | 0.50 | 2012
Deep syntax language models and statistical machine translation | 3 | 0.42 | 2010
Sulis: An Open Source Transfer Decoder for Deep Syntactic Statistical Machine Translation | 0 | 0.34 | 2010
Gap Between Theory and Practice: Noise Sensitive Word Alignment in Machine Translation | 8 | 0.76 | 2010
Factor templates for factored machine translation models. | 2 | 0.37 | 2010
An Open Source Rule Induction Tool for Transfer-Based SMT | 3 | 0.43 | 2009
Guessing the grammatical function of a non-root f-structure in LFG | 0 | 0.34 | 2009
Packed Rules for Automatic Transfer-Rule Induction | 3 | 0.39 | 2008