Name: QUOC V. LE
Papers: 152
Collaborators: 398
Citations: 8501
PageRank: 366.59
Referers: 18024
Referees: 2392
References: 1528
Title | Citations | PageRank | Year
BigSSL: Exploring the Frontier of Large-Scale Semi-Supervised Learning for Automatic Speech Recognition | 2 | 0.37 | 2022
DeepFusion: Lidar-Camera Deep Fusion for Multi-Modal 3D Object Detection | 0 | 0.34 | 2022
STraTA: Self-Training with Task Augmentation for Better Few-shot Learning | 0 | 0.34 | 2021
CoAtNet: Marrying Convolution and Attention for All Data Sizes | 0 | 0.34 | 2021
Searching for Fast Model Families on Datacenter Accelerators | 0 | 0.34 | 2021
Meta Pseudo Labels | 0 | 0.34 | 2021
Evolving Reinforcement Learning Algorithms | 0 | 0.34 | 2021
EfficientNetV2: Smaller Models and Faster Training | 0 | 0.34 | 2021
Pay Attention to MLPs | 0 | 0.34 | 2021
Towards Domain-Agnostic Contrastive Learning | 0 | 0.34 | 2021
Adversarial Examples Improve Image Recognition | 5 | 0.48 | 2020
AutoML-Zero: Evolving Machine Learning Algorithms From Scratch | 0 | 0.34 | 2020
PyGlove: Symbolic Programming for Automated Machine Learning | 0 | 0.34 | 2020
SpineNet: Learning Scale-Permuted Backbone for Recognition and Localization | 5 | 0.42 | 2020
Can Weight Sharing Outperform Random Architecture Search? An Investigation With TuNAS | 4 | 0.40 | 2020
Go Wide, Then Narrow: Efficient Training of Deep Thin Networks | 0 | 0.34 | 2020
Randaugment: Practical automated data augmentation with a reduced search space | 14 | 0.68 | 2020
Pre-Training Transformers as Energy-Based Cloze Models | 0 | 0.34 | 2020
Unsupervised data augmentation for consistency training | 1 | 0.35 | 2020
Rethinking Pre-training and Self-training | 0 | 0.34 | 2020
MnasFPN: Learning Latency-aware Pyramid Architecture for Object Detection on Mobile Devices | 0 | 0.34 | 2020
Evolving Normalization-Activation Layers | 0 | 0.34 | 2020
Self-Training With Noisy Student Improves ImageNet Classification | 22 | 0.74 | 2020
Learning Data Augmentation Strategies for Object Detection | 6 | 0.53 | 2020
Neural Input Search for Large Scale Recommendation Models | 8 | 0.47 | 2020
EfficientDet: Scalable and Efficient Object Detection | 5 | 0.42 | 2020
XLNet: Generalized Autoregressive Pretraining for Language Understanding | 28 | 0.72 | 2019
Mixtape: Breaking the Softmax Bottleneck Efficiently | 0 | 0.34 | 2019
High Fidelity Video Prediction with Large Stochastic Recurrent Neural Networks | 0 | 0.34 | 2019
MixConv: Mixed Depthwise Convolutional Kernels | 2 | 0.37 | 2019
Saccader: Improving Accuracy of Hard Attention Models for Vision | 0 | 0.34 | 2019
The Evolved Transformer | 0 | 0.34 | 2019
Searching for MobileNetV3 | 42 | 1.21 | 2019
EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks | 72 | 1.30 | 2019
Unsupervised Data Augmentation | 0 | 0.34 | 2019
GPipe: Efficient Training of Giant Neural Networks using Pipeline Parallelism | 7 | 0.42 | 2019
BAM! Born-Again Multi-Task Networks for Natural Language Understanding | 2 | 0.36 | 2019
Selfie: Self-supervised Pretraining for Image Embedding | 0 | 0.34 | 2019
The Effect of Network Width on Stochastic Gradient Descent and Generalization: an Empirical Study | 0 | 0.34 | 2019
SpecAugment: A Simple Data Augmentation Method for Automatic Speech Recognition | 9 | 0.49 | 2019
Natural Questions: a Benchmark for Question Answering Research | 0 | 0.34 | 2019
CondConv: Conditionally Parameterized Convolutions for Efficient Inference | 1 | 0.35 | 2019
Neural Program Synthesis with Priority Queue Training | 7 | 0.43 | 2018
Can Deep Reinforcement Learning Solve Erdos-Selfridge-Spencer Games? | 4 | 0.48 | 2018
Learning Longer-term Dependencies in RNNs with Auxiliary Losses | 12 | 0.51 | 2018
Faster Discovery of Neural Architectures by Searching for Paths in a Large Model | 6 | 0.50 | 2018
AirDialogue: An Environment for Goal-Oriented Dialogue Research | 3 | 0.37 | 2018
QANet: Combining Local Convolution with Global Self-Attention for Reading Comprehension | 53 | 1.12 | 2018
A Hierarchical Model for Device Placement | 8 | 0.45 | 2018