Name: TING YAO
Affiliation: City University of Hong Kong, Hong Kong, China
Papers: 109
Collaborators: 134
Citations: 842
PageRank: 52.62
Referers: 2013
Referees: 1835
References: 1250
Title | Citations | PageRank | Year
Uni-EDEN: Universal Encoder-Decoder Network by Multi-Granular Vision-Language Pre-training | 0 | 0.34 | 2022
Auto-captions on GIF: A Large-scale Video-sentence Dataset for Vision-language Pre-training | 0 | 0.34 | 2022
Unpaired Image Captioning With Semantic-Constrained Self-Learning | 0 | 0.34 | 2022
Responsive Listening Head Generation: A Benchmark Dataset and Baseline | 0 | 0.34 | 2022
Exploring Structure-aware Transformer over Interaction Proposals for Human-Object Interaction Detection | 0 | 0.34 | 2022
3D Cascade RCNN: High Quality Object Detection in Point Clouds | 0 | 0.34 | 2022
Wave-ViT: Unifying Wavelet and Transformers for Visual Representation Learning | 0 | 0.34 | 2022
Dynamic Temporal Filtering in Video Models | 0 | 0.34 | 2022
Building GC-free Key-value Store on HM-SMR Drives with ZoneFS | 0 | 0.34 | 2022
MLP-3D: A MLP-like 3D Architecture with Grouped Time Mixing | 0 | 0.34 | 2022
SPE-Net: Boosting Point Cloud Analysis via Rotation Robustness Enhancement | 0 | 0.34 | 2022
Stand-Alone Inter-Frame Attention in Video Models | 0 | 0.34 | 2022
X-modaler: A Versatile and High-performance Codebase for Cross-modal Analytics | 0 | 0.34 | 2021
Boosting Video Representation Learning with Multi-Faceted Integration | 0 | 0.34 | 2021
ComQA: Compositional Question Answering via Hierarchical Graph Neural Networks | 0 | 0.34 | 2021
A Style and Semantic Memory Mechanism for Domain Generalization | 0 | 0.34 | 2021
CoCo-BERT: Improving Video-Language Pre-training with Contrastive Cross-modal Matching and Denoising | 1 | 0.35 | 2021
Motion-Focused Contrastive Learning of Video Representations | 0 | 0.34 | 2021
Optimization Planning for 3D ConvNets | 0 | 0.34 | 2021
Condensing a Sequence to One Informative Frame for Video Recognition | 0 | 0.34 | 2021
Transferrable Contrastive Learning for Visual Domain Adaptation | 0 | 0.34 | 2021
Scheduled Sampling In Vision-Language Pretraining With Decoupled Encoder-Decoder Network | 0 | 0.34 | 2021
Smart Director: An Event-Driven Directing System for Live Broadcasting | 1 | 0.39 | 2021
Multi-Lingual Question Generation with Language Agnostic Language Model | 0 | 0.34 | 2021
Single Shot Video Object Detector | 3 | 0.37 | 2021
Noise Augmented Double-Stream Graph Convolutional Networks for Image Captioning | 7 | 0.46 | 2021
Representing Videos as Discriminative Sub-graphs for Action Recognition | 0 | 0.34 | 2021
SeCo: Exploring Sequence Supervision For Unsupervised Representation Learning | 0 | 0.34 | 2021
Joint Contrastive Learning with Infinite Possibilities | 0 | 0.34 | 2020
Learning a Unified Sample Weighting Network for Object Detection | 0 | 0.34 | 2020
Exploring Category-Agnostic Clusters for Open-Set Domain Adaptation | 0 | 0.34 | 2020
iDirector: An Intelligent Directing System for Live Broadcast | 1 | 0.35 | 2020
Exploring Depth Information for Spatial Relation Recognition | 0 | 0.34 | 2020
Deep Metric Learning With Density Adaptivity | 1 | 0.35 | 2020
Coarse-to-Fine Localization of Temporal Action Proposals | 1 | 0.35 | 2020
MatrixKV: Reducing Write Stalls and Write Amplification in LSM-tree Based KV Stores with Matrix Container in NVM | 0 | 0.34 | 2020
Neural Question Generation with Answer Pivot | 0 | 0.34 | 2020
Transferring and Regularizing Prediction for Semantic Segmentation | 0 | 0.34 | 2020
X-Linear Attention Networks for Image Captioning | 12 | 0.54 | 2020
Convolutional Auto-encoding of Sentence Topics for Image Paragraph Generation | 2 | 0.40 | 2019
Hierarchy Parsing For Image Captioning | 15 | 0.57 | 2019
SEALDB: An Efficient LSM-tree based KV Store on SMR Drives with Sets and Dynamic Bands | 2 | 0.37 | 2019
Learning Spatio-Temporal Representation With Local And Global Diffusion | 8 | 0.44 | 2019
Pointing Novel Objects In Image Captioning | 4 | 0.41 | 2019
Editorial to Special Issue on Deep Learning for Intelligent Multimedia Analytics | 0 | 0.34 | 2019
Deep Learning–Based Multimedia Analytics: A Review | 1 | 0.39 | 2019
vireoJD-MM at Activity Detection in Extended Videos | 0 | 0.34 | 2019
daBNN: A Super Fast Inference Framework for Binary Neural Networks on ARM devices | 7 | 0.44 | 2019
Transferrable Prototypical Networks For Unsupervised Domain Adaptation | 20 | 0.53 | 2019
Exploring Object Relation In Mean Teacher For Cross-Domain Detection | 13 | 0.49 | 2019