Invariant Information Bottleneck for Domain Generalization | 0 | 0.34 | 2022 |
PreTraM: Self-Supervised Pre-training via Connecting Trajectory and Map | 0 | 0.34 | 2022 |
Learning Invariant Representations and Risks for Semi-Supervised Domain Adaptation | 0 | 0.34 | 2021 |
HAWQ-V3: Dyadic Neural Network Quantization | 0 | 0.34 | 2021 |
You Only Group Once: Efficient Point-Cloud Processing with Token Representation and Relation Inference Module | 0 | 0.34 | 2021 |
CoDeNet: Efficient Deployment of Input-Adaptive Object Detection on Embedded FPGAs | 5 | 0.45 | 2021 |
SelfAugment: Automatic Augmentation Policies for Self-Supervised Learning | 0 | 0.34 | 2021 |
HAWQ-V2: Hessian Aware trace-Weighted Quantization of Neural Networks | 0 | 0.34 | 2020 |
Q-BERT: Hessian Based Ultra Low Precision Quantization of BERT | 1 | 0.34 | 2020 |
Large Batch Optimization for Deep Learning: Training BERT in 76 Minutes | 1 | 0.35 | 2020 |
ANODEV2: A Coupled Neural ODE Framework | 0 | 0.34 | 2019 |
Large-Batch Training for LSTM and Beyond | 10 | 0.63 | 2019 |
Fast Deep Neural Network Training on Distributed Systems and Cloud TPUs | 1 | 0.39 | 2019 |