Author Info
Name:          MEHDI REZAGHOLIZADEH
Affiliation:   Montreal Res Ctr, Huawei Noah's Ark Lab, Montreal, PQ, Canada
Papers:        26
Collaborators: 46
Citations:     3
PageRank:      8.82
Referrers:     10
Referees:      96
References:    25
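The PageRank figure above is a citation-graph centrality score. As an illustration only (not this site's exact implementation, whose graph and scaling are unknown), a minimal power-iteration sketch over a toy citation graph, using the conventional damping factor 0.85:

```python
def pagerank(links, damping=0.85, iters=100):
    """Power-iteration PageRank over a citation graph.

    links: dict mapping each node to the list of nodes it cites.
    Returns a dict of node -> score; scores sum to 1.0.
    """
    # Collect every node, including ones that only appear as targets.
    nodes = set(links)
    for targets in links.values():
        nodes.update(targets)
    n = len(nodes)

    rank = {node: 1.0 / n for node in nodes}
    for _ in range(iters):
        # Teleportation term shared by every node.
        new = {node: (1.0 - damping) / n for node in nodes}
        for node in nodes:
            targets = links.get(node, [])
            if targets:
                # Split this node's rank evenly among the papers it cites.
                share = damping * rank[node] / len(targets)
                for t in targets:
                    new[t] += share
            else:
                # Dangling node: spread its rank uniformly.
                for t in nodes:
                    new[t] += damping * rank[node] / n
        rank = new
    return rank
```

For example, `pagerank({"A": ["C"], "B": ["C"], "C": []})` gives "C" the highest score, since both other papers cite it.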
Publications (26 rows)
Title | Citations | PageRank | Year
Kronecker Decomposition for GPT Compression | 0 | 0.34 | 2022
RAIL-KD: RAndom Intermediate Layer Mapping for Knowledge Distillation | 0 | 0.34 | 2022
From Fully Trained to Fully Random Embeddings: Improving Neural Machine Translation with Compact Word Embedding Tables | 0 | 0.34 | 2022
When Chosen Wisely, More Data Is What You Need: A Universal Sample-Efficient Strategy For Data Augmentation | 0 | 0.34 | 2022
CILDA: Contrastive Data Augmentation Using Intermediate Layer Knowledge Distillation | 0 | 0.34 | 2022
Learning functions on multiple sets using multi-set transformers | 0 | 0.34 | 2022
Context-aware Adversarial Training for Name Regularity Bias in Named Entity Recognition | 1 | 0.35 | 2021
NATURE: Natural Auxiliary Text Utterances for Realistic Spoken Language Evaluation | 0 | 0.34 | 2021
End-to-End Self-Debiasing Framework for Robust NLU Training | 0 | 0.34 | 2021
ALP-KD: Attention-Based Layer Projection for Knowledge Distillation | 0 | 0.34 | 2021
Fine-Tuning of Pre-Trained End-to-End Speech Recognition with Generative Adversarial Networks | 0 | 0.34 | 2021
Annealing Knowledge Distillation | 0 | 0.34 | 2021
How to Select One Among All? An Empirical Study Towards the Robustness of Knowledge Distillation in Natural Language Understanding | 0 | 0.34 | 2021
Towards Zero-Shot Knowledge Distillation for Natural Language Processing | 0 | 0.34 | 2021
Universal-KD: Attention-based Output-Grounded Intermediate Layer Knowledge Distillation | 0 | 0.34 | 2021
Knowledge Distillation with Noisy Labels for Natural Language Understanding | 0 | 0.34 | 2021
Not Far Away, Not So Close: Sample Efficient Nearest Neighbour Data Augmentation via MiniMax | 0 | 0.34 | 2021
Transformer-Based ASR Incorporating Time-Reduction Layer and Fine-Tuning with Self-Knowledge Distillation | 0 | 0.34 | 2021
RW-KD: Sample-wise Loss Terms Re-Weighting for Knowledge Distillation | 0 | 0.34 | 2021
Improving Word Embedding Factorization for Compression using Distilled Nonlinear Neural Decomposition | 0 | 0.34 | 2020
Fully Quantized Transformer for Machine Translation | 0 | 0.34 | 2020
Latent Code and Text-based Generative Adversarial Networks for Soft-text Generation | 0 | 0.34 | 2019
TextKD-GAN: Text Generation Using Knowledge Distillation and Generative Adversarial Networks | 2 | 0.35 | 2019
Bilingual-GAN: A Step Towards Parallel Text Generation | 0 | 0.34 | 2019
TextKD-GAN: Text Generation using Knowledge Distillation and Generative Adversarial Networks | 0 | 0.34 | 2019
SALSA-TEXT: Self Attentive Latent Space Based Adversarial Text Generation | 0 | 0.34 | 2018