| Title | Count | Score | Year |
|---|---|---|---|
| Tracking Most Significant Arm Switches in Bandits | 0 | 0.34 | 2022 |
| Conference on Learning Theory, COLT 2021, 15-19 August 2021, Boulder, Colorado, USA | 0 | 0.34 | 2021 |
| Quickshift++: Provably Good Initializations for Sample-Based Mean Shift | 1 | 0.36 | 2018 |
| An Adaptive Strategy for Active Learning with Smooth Decision Boundary | 0 | 0.34 | 2018 |
| Marginal Singularity and the Benefits of Labels in Covariate-Shift | 0 | 0.34 | 2018 |
| PAC-Bayes Tree: weighted subtrees with guarantees | 0 | 0.34 | 2018 |
| Adaptivity to Noise Parameters in Nonparametric Active Learning | 0 | 0.34 | 2017 |
| Time-accuracy tradeoffs in kernel prediction: controlling prediction quality | 0 | 0.34 | 2017 |
| Gradients Weights improve Regression and Classification | 0 | 0.34 | 2016 |
| Hierarchical Label Queries with Data-Dependent Partitions | 2 | 0.38 | 2015 |
| A Consistent Estimator of the Expected Gradient Outerproduct | 2 | 0.39 | 2014 |
| Consistent Procedures for Cluster Tree Estimation and Pruning | 11 | 0.82 | 2014 |
| Optimal rates for k-NN density and mode estimation | 4 | 0.48 | 2014 |
| Regression-tree Tuning in a Streaming Setting | 6 | 0.44 | 2013 |
| Consistency of Causal Inference under the Additive Noise Model | 3 | 0.41 | 2013 |
| Adaptivity to Local Smoothness and Dimension in Kernel Regression | 7 | 0.49 | 2013 |
| Which spatial partition trees are adaptive to intrinsic dimension? | 21 | 0.89 | 2012 |
| Gradient Weights help Nonparametric Regressors | 2 | 0.46 | 2012 |
| A tree-based regressor that adapts to intrinsic dimension | 12 | 0.78 | 2012 |
| Pruning nearest neighbor cluster trees | 5 | 0.52 | 2011 |
| k-NN Regression Adapts to Local Intrinsic Dimension | 6 | 0.87 | 2011 |
| Fast, smooth and adaptive regression in metric spaces | 4 | 0.67 | 2009 |
| Escaping the Curse of Dimensionality with a Tree-based Regressor | 6 | 0.89 | 2009 |