Name: ROLAND MEMISEVIC
Affiliation: Department of Computer Science | University of Toronto
Papers: 50
Collaborators: 77
Citations: 1116
PageRank: 65.87
Referers: 2602
Referees: 629
References: 544
Title | Citations | PageRank | Year
Fine-grained Video Classification and Captioning. | 1 | 0.35 | 2018
Evaluating visual "common sense" using fine-grained classification and captioning tasks. | 0 | 0.34 | 2018
Generating images with recurrent adversarial networks. | 36 | 3.49 | 2016
Regularizing RNNs by Stabilizing Activations | 0 | 0.34 | 2016
Incorporating long-range consistency in CNN-based texture generation. | 1 | 0.35 | 2016
Architectural Complexity Measures of Recurrent Neural Networks. | 21 | 0.99 | 2016
Conservativeness of Untied Auto-Encoders. | 1 | 0.35 | 2016
Deep Learning Vector Quantization. | 0 | 0.34 | 2016
EmoNets: Multimodal deep learning approaches for emotion recognition in video. | 65 | 1.78 | 2016
Neural Networks with Few Multiplications | 0 | 0.34 | 2015
On Using Very Large Target Vocabulary For Neural Machine Translation | 223 | 10.44 | 2015
The Potential Energy of an Autoencoder | 14 | 0.66 | 2015
Recurrent Neural Networks for Emotion Recognition in Video. | 61 | 1.80 | 2015
RATM: Recurrent Attentive Tracking Model | 15 | 0.59 | 2015
Zero-bias autoencoders and the benefits of co-adapting features | 0 | 0.34 | 2015
Montreal Neural Machine Translation Systems for WMT'15. | 0 | 0.34 | 2015
Deep learning: Architectures, algorithms, applications | 0 | 0.34 | 2015
Real-time activity recognition via deep learning of motion features. | 0 | 0.34 | 2015
Montreal Neural Machine Translation Systems for WMT'15. | 26 | 1.19 | 2015
Regularizing RNNs by Stabilizing Activations | 1 | 0.35 | 2015
Learning Visual Odometry with a Convolutional Network. | 21 | 0.79 | 2015
Dropout as data augmentation. | 12 | 0.71 | 2015
How far can we go without convolution: Improving fully-connected networks. | 3 | 0.46 | 2015
Denoising Criterion for Variational Auto-Encoding Framework | 9 | 0.78 | 2015
Modeling sequential data using higher-order relational features and predictive training. | 5 | 0.46 | 2014
A unified approach to learning depth and motion features | 0 | 0.34 | 2014
The role of spatio-temporal synchrony in the encoding of motion | 0 | 0.34 | 2014
Zero-bias autoencoders and the benefits of co-adapting features. | 0 | 0.34 | 2014
Modeling Deep Temporal Dependencies with Recurrent "Grammar Cells" | 33 | 2.06 | 2014
Combining modality specific deep neural networks for emotion recognition in video | 103 | 3.02 | 2013
The role of spatio-temporal synchrony in the encoding of motion. | 0 | 0.34 | 2013
Feature grouping from spatially constrained multiplicative interaction | 0 | 0.34 | 2013
Learning to relate images. | 44 | 2.28 | 2013
Feature grouping from spatially constrained multiplicative interaction | 1 | 0.35 | 2013
Learning invariant features by harnessing the aperture problem. | 3 | 0.50 | 2013
On autoencoder scoring. | 4 | 0.42 | 2013
Unsupervised learning of depth and motion. | 13 | 1.40 | 2013
Shared Kernel Information Embedding for Discriminative Inference | 18 | 0.83 | 2012
On multi-view feature learning | 8 | 0.62 | 2012
Learning to relate images: Mapping units, complex cells and simultaneous eigenspaces | 2 | 0.43 | 2011
Gradient-based learning of higher-order image features | 31 | 1.37 | 2011
Gated Softmax Classification. | 21 | 0.98 | 2010
Learning to represent spatial transformations with factored higher-order Boltzmann machines. | 110 | 7.13 | 2010
Shared Kernel Information Embedding For Discriminative Inference | 24 | 1.13 | 2009
Unsupervised Learning of Image Transformations | 85 | 6.77 | 2007
Learning to solve QBF | 27 | 1.21 | 2007
Kernel information embeddings | 11 | 1.20 | 2006
Principal surfaces from unsupervised kernel regression. | 41 | 2.39 | 2005
Improving dimensionality reduction with spectral gradient descent. | 5 | 0.57 | 2005
Multiple Relational Embedding | 17 | 1.30 | 2004