Abstract
---
We present Deep Neural Decision Forests - a novel approach that unifies classification trees with the representation learning functionality known from deep convolutional networks, by training them in an end-to-end manner. To combine these two worlds, we introduce a stochastic and differentiable decision tree model, which steers the representation learning usually conducted in the initial layers of a (deep) convolutional network. Our model differs from conventional deep networks because a decision forest provides the final predictions, and it differs from conventional decision forests since we propose a principled, joint and global optimization of split and leaf node parameters. We show experimental results on benchmark machine learning datasets like MNIST and ImageNet and find on-par or superior results when compared to state-of-the-art deep models. Most remarkably, we obtain Top-5 errors of only 7.84%/6.38% on ImageNet validation data when integrating our forests in a single-crop, single/seven-model GoogLeNet architecture, respectively. Thus, even without any form of training dataset augmentation, we improve on the 6.67% error obtained by the best GoogLeNet architecture (7 models, 144 crops).
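The core idea - probabilistic routing through a binary tree whose split decisions are sigmoid outputs, so that the leaf predictions become differentiable in the split parameters - can be sketched in a few lines. The NumPy snippet below is a minimal illustration only, with hypothetical names (`SoftDecisionTree`, `leaf_probs`); in the paper's actual model the split functions are driven by the output units of a deep CNN and trained jointly with it, and the leaf distributions are fitted with a dedicated global optimization step rather than fixed at random as here.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class SoftDecisionTree:
    """Minimal sketch of a stochastic, differentiable decision tree.

    Each internal node n routes a sample left with probability
    d_n(x) = sigmoid(w_n . x); each leaf holds a class distribution pi.
    The prediction is the pi-average weighted by the probability of
    reaching each leaf, which is smooth in the split weights W.
    """

    def __init__(self, depth, num_features, num_classes, seed=0):
        rng = np.random.default_rng(seed)
        self.depth = depth
        self.num_inner = 2 ** depth - 1       # internal (split) nodes
        self.num_leaves = 2 ** depth
        self.W = rng.normal(size=(self.num_inner, num_features))
        # Random leaf class distributions (rows sum to 1); the paper
        # instead learns these via a global convex update - not shown.
        logits = rng.normal(size=(self.num_leaves, num_classes))
        self.pi = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)

    def leaf_probs(self, X):
        """Probability mu of each sample reaching each leaf: (batch, leaves)."""
        d = sigmoid(X @ self.W.T)             # (batch, num_inner), P(go left)
        mu, idx = np.ones((X.shape[0], 1)), 0
        for level in range(self.depth):
            n = 2 ** level                    # nodes on this level
            d_lvl = d[:, idx:idx + n]
            # Children of node i are (left, right): interleave the products.
            mu = np.stack([mu * d_lvl, mu * (1.0 - d_lvl)], axis=2)
            mu = mu.reshape(X.shape[0], 2 * n)
            idx += n
        return mu

    def predict(self, X):
        """Class posterior P(y|x) = sum over leaves of mu_leaf(x) * pi_leaf."""
        return self.leaf_probs(X) @ self.pi

# Usage: routing probabilities and class posteriors both sum to one.
rng = np.random.default_rng(42)
X = rng.normal(size=(5, 16))
tree = SoftDecisionTree(depth=3, num_features=16, num_classes=10)
print(tree.predict(X).shape)                  # (5, 10)
print(tree.predict(X).sum(axis=1))            # each row sums to ~1.0
```

Because every routing decision is a sigmoid rather than a hard threshold, the loss is differentiable with respect to the split parameters, which is what allows them to be optimized end-to-end together with the CNN layers feeding them.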
Field | Value
---|---
Year | 2015
DOI | 10.1109/ICCV.2015.172
Venue | ICCV
DocType | Conference
Volume | 2015
Issue | 1
ISSN | 1550-5499
Citations | 49
PageRank | 1.59
References | 28
Authors | 4
Name | Order | Citations | PageRank
---|---|---|---
Peter Kontschieder | 1 | 376 | 21.10
Madalina Fiterau | 2 | 54 | 4.87
Antonio Criminisi | 3 | 6801 | 394.29
Samuel Rota Bulò | 4 | 564 | 33.69