YUNHE WANG

Papers: 61
Collaborators: 100
Citations: 113
PageRank: 22.76
Referrers: 325
Referees: 890
References: 370
| Title | Citations | PageRank | Year |
|---|---|---|---|
| A Survey on Vision Transformer | 4 | 0.61 | 2023 |
| Source-Free Domain Adaptation via Distribution Estimation | 0 | 0.34 | 2022 |
| Multimodal Token Fusion for Vision Transformers | 0 | 0.34 | 2022 |
| Brain-inspired Multilayer Perceptron with Spiking Neurons | 1 | 0.38 | 2022 |
| AdaBin: Improving Binary Neural Networks with Adaptive Binary Sets | 0 | 0.34 | 2022 |
| Neural Architecture Tuning with Policy Adaptation | 0 | 0.34 | 2022 |
| Searching for Energy-Efficient Hybrid Adder-Convolution Neural Networks | 0 | 0.34 | 2022 |
| Network Amplification with Efficient MACs Allocation | 0 | 0.34 | 2022 |
| MTP: Multi-Task Pruning for Efficient Semantic Segmentation Networks | 0 | 0.34 | 2022 |
| MaskGroup: Hierarchical Point Grouping and Masking for 3D Instance Segmentation | 0 | 0.34 | 2022 |
| Spatial-Channel Token Distillation for Vision MLPs | 0 | 0.34 | 2022 |
| Federated Learning with Positive and Unlabeled Data | 0 | 0.34 | 2022 |
| Instance-Aware Dynamic Neural Network Quantization | 0 | 0.34 | 2022 |
| Learning Versatile Convolution Filters for Efficient Visual Recognition | 1 | 0.35 | 2022 |
| GhostNets on Heterogeneous Devices via Cheap Operations | 0 | 0.34 | 2022 |
| Improved Artificial Bee Colony Algorithm for Multimodal Optimization Based on Crowding Method | 0 | 0.34 | 2022 |
| Towards Stable and Robust AdderNets | 0 | 0.34 | 2021 |
| An Empirical Study of Adder Neural Networks for Object Detection | 0 | 0.34 | 2021 |
| Winograd Algorithm for AdderNet | 0 | 0.34 | 2021 |
| Multi-bit Adaptive Distillation for Binary Neural Networks | 0 | 0.34 | 2021 |
| ReNAS: Relativistic Evaluation of Neural Architecture Search | 0 | 0.34 | 2021 |
| Handling Long-tailed Feature Distribution in AdderNets | 0 | 0.34 | 2021 |
| One-Shot Graph Neural Architecture Search with Dynamic Search Space | 0 | 0.34 | 2021 |
| Adversarial Robustness Through Disentangled Representations | 0 | 0.34 | 2021 |
| Learning Student Networks in the Wild | 0 | 0.34 | 2021 |
| Frequency Domain Compact 3D Convolutional Neural Networks | 1 | 0.34 | 2020 |
| Neural Architecture Search in a Proxy Validation Loss Landscape | 0 | 0.34 | 2020 |
| CARS: Continuous Evolution for Efficient Neural Architecture Search | 8 | 0.50 | 2020 |
| GhostNet: More Features from Cheap Operations | 5 | 0.43 | 2020 |
| Residual Distillation: Towards Portable Deep Neural Networks without Shortcuts | 1 | 0.34 | 2020 |
| Searching for Low-Bit Weights in Quantized Neural Networks | 0 | 0.34 | 2020 |
| AdderNet: Do We Really Need Multiplications in Deep Learning? | 3 | 0.38 | 2020 |
| On Positive-Unlabeled Classification in GAN | 0 | 0.34 | 2020 |
| SCOP: Scientific Control for Reliable Neural Network Pruning | 0 | 0.34 | 2020 |
| Efficient Residual Dense Block Search for Image Super-Resolution | 1 | 0.36 | 2020 |
| Distilling Portable Generative Adversarial Networks for Image Translation | 0 | 0.34 | 2020 |
| Beyond Dropout: Feature Map Distortion to Regularize Deep Neural Networks | 0 | 0.34 | 2020 |
| Kernel Based Progressive Distillation for Adder Neural Networks | 0 | 0.34 | 2020 |
| A Semi-Supervised Assessor of Neural Architectures | 1 | 0.37 | 2020 |
| Hit-Detector: Hierarchical Trinity Architecture Search for Object Detection | 4 | 0.40 | 2020 |
| DropNAS: Grouped Operation Dropout for Differentiable Architecture Search | 2 | 0.43 | 2020 |
| Adapting Neural Architectures Between Domains | 0 | 0.34 | 2020 |
| Co-Evolutionary Compression for Unpaired Image Translation | 3 | 0.43 | 2019 |
| LegoNet: Efficient Convolutional Neural Networks with Lego Filters | 4 | 0.39 | 2019 |
| Packing Convolutional Neural Networks in the Frequency Domain | 5 | 0.42 | 2019 |
| Positive-Unlabeled Compression on the Cloud | 2 | 0.36 | 2019 |
| Data-Free Learning of Student Networks | 5 | 0.64 | 2019 |
| Crafting Efficient Neural Graph of Large Entropy | 0 | 0.34 | 2019 |
| Low-Resolution Visual Recognition via Deep Feature Distillation | 1 | 0.34 | 2019 |
| Learning Versatile Filters for Efficient Convolutional Neural Networks | 3 | 0.38 | 2018 |