Enabling Multimodal Generation on CLIP via Vision-Language Knowledge Distillation | 0 | 0.34 | 2022 |
Compression of Generative Pre-trained Language Models via Quantization | 0 | 0.34 | 2022 |
FILIP: Fine-grained Interactive Language-Image Pre-Training | 0 | 0.34 | 2022 |
Improved OOD Generalization via Adversarial Training and Pre-training | 0 | 0.34 | 2021 |
An Intelligent Transaction Migration Scheme for RAFT-Based Private Blockchain in Internet of Things Applications | 3 | 0.37 | 2021 |
Reweighting Augmented Samples by Minimizing the Maximal Expected Loss | 0 | 0.34 | 2021 |
Design and Prototype Implementation of a Blockchain-Enabled LoRa System With Edge Computing | 2 | 0.38 | 2021 |
TernaryBERT: Distillation-aware Ultra-low Bit BERT | 1 | 0.43 | 2020 |
DynaBERT: Dynamic BERT with Adaptive Width and Depth | 0 | 0.34 | 2020 |
Normalization Helps Training of Quantized LSTM | 0 | 0.34 | 2019 |
Analysis of Quantized Models | 2 | 0.36 | 2019 |
Power Law in Sparsified Deep Neural Networks | 0 | 0.34 | 2018 |
Loss-aware Weight Quantization of Deep Networks | 9 | 0.44 | 2018 |
Efficient Learning of Timeseries Shapelets | 3 | 0.39 | 2016 |
Loss-aware Binarization of Deep Networks | 7 | 0.42 | 2016 |
Soft-Defined Heterogeneous Vehicular Network: Architecture and Challenges | 35 | 1.31 | 2015 |