GACT: Activation Compressed Training for Generic Network Architectures | 0 | 0.34 | 2022 |
Fast Lossless Neural Compression with Integer-Only Discrete Flows | 0 | 0.34 | 2022 |
Maximum Likelihood Training for Score-based Diffusion ODEs by High Order Denoising Score Matching | 0 | 0.34 | 2022 |
Implicit Normalizing Flows | 0 | 0.34 | 2021 |
VFlow: More Expressive Generative Flows with Variational Data Augmentation | 0 | 0.34 | 2020 |
A Statistical Framework for Low-bitwidth Training of Deep Neural Networks | 0 | 0.34 | 2020 |
SaberLDA: Sparsity-Aware Learning of Topic Models on GPUs | 0 | 0.34 | 2020 |
Towards Training Probabilistic Topic Models on Neuromorphic Multi-chip Systems | 0 | 0.34 | 2018 |
Stochastic Training of Graph Convolutional Networks with Variance Reduction | 22 | 0.75 | 2018 |
Stochastic Expectation Maximization with Variance Reduction | 2 | 0.37 | 2018 |
Scalable Training of Hierarchical Topic Models | 3 | 0.38 | 2018 |
ZhuSuan: A Library for Bayesian Deep Learning | 5 | 0.42 | 2017 |
Population Matching Discrepancy and Applications in Deep Learning | 2 | 0.37 | 2017 |
Scalable Inference for Nested Chinese Restaurant Process Topic Models | 0 | 0.34 | 2017 |
Streaming Gibbs Sampling for LDA Model | 5 | 0.44 | 2016 |
TopicPanorama: A Full Picture of Relevant Topics | 8 | 0.39 | 2016 |
Scaling up Dynamic Topic Models | 8 | 0.67 | 2016 |
Distributing the Stochastic Gradient Sampler for Large-Scale LDA | 6 | 0.41 | 2016 |
WarpLDA: A Cache Efficient O(1) Algorithm for Latent Dirichlet Allocation | 19 | 0.74 | 2016 |
Dropout Training for SVMs with Data Augmentation | 1 | 0.48 | 2015 |
Bayesian Max-margin Multi-Task Learning with Data Augmentation | 9 | 0.52 | 2014 |
TopicPanorama: A Full Picture of Relevant Topics | 32 | 0.87 | 2014 |
Big Learning with Bayesian Methods | 10 | 0.58 | 2014 |
Scalable Inference for Logistic-Normal Topic Models | 16 | 0.66 | 2013 |