Abstract
---
We propose a learning framework named Feature Fusion Learning (FFL) that efficiently trains a powerful classifier through a fusion module, which combines the feature maps generated by parallel neural networks into a more meaningful feature map. Specifically, we train a number of parallel neural networks as sub-networks, then combine the feature maps from each sub-network using a fusion module to create a more meaningful feature map. The fused feature map is passed into a fused classifier for overall classification. Unlike existing feature fusion methods, in our framework an ensemble of the sub-network classifiers transfers its knowledge to the fused classifier, and the fused classifier delivers its knowledge back to each sub-network, so that they mutually teach one another in an online knowledge distillation manner. This mutual teaching not only improves the performance of the fused classifier but also yields a performance gain in each sub-network. Moreover, our model is more flexible than alternative methods because a different type of network can be used for each sub-network. We have performed a variety of experiments on multiple datasets such as CIFAR-10, CIFAR-100 and ImageNet and shown that our method is more effective than alternative methods, both in the performance of the sub-networks and the fused classifier and in generating meaningful feature maps. The code is available at this link(1).
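
The abstract describes the FFL pipeline at a high level; below is a minimal PyTorch sketch of that idea, assuming two small sub-networks, a simple 1x1-convolution fusion module, and a temperature-scaled KL distillation loss. All module definitions, sizes and hyperparameters here are illustrative assumptions for clarity, not the authors' actual implementation.

```python
# Minimal sketch of the FFL idea: parallel sub-networks, a fusion module over their
# feature maps, a fused classifier, and mutual (two-way) online distillation between
# the sub-network ensemble and the fused classifier. Sizes and the fusion design are
# assumptions made for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SubNetwork(nn.Module):
    """Tiny CNN standing in for one parallel sub-network."""
    def __init__(self, num_classes=10, channels=32):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),
        )
        self.classifier = nn.Linear(channels * 16, num_classes)

    def forward(self, x):
        fmap = self.features(x)                    # feature map handed to the fusion module
        logits = self.classifier(fmap.flatten(1))  # this sub-network's own prediction
        return fmap, logits


class FusionModule(nn.Module):
    """Combine the sub-network feature maps and classify the fused map."""
    def __init__(self, channels=32, num_subnets=2, num_classes=10):
        super().__init__()
        self.fuse = nn.Conv2d(channels * num_subnets, channels, 1)  # assumed 1x1 fusion
        self.classifier = nn.Linear(channels * 16, num_classes)

    def forward(self, fmaps):
        fused = F.relu(self.fuse(torch.cat(fmaps, dim=1)))
        return self.classifier(fused.flatten(1))


def kd_loss(student_logits, teacher_logits, T=3.0):
    """Soft-label KL distillation loss; the teacher is detached."""
    p_t = F.softmax(teacher_logits.detach() / T, dim=1)
    log_p_s = F.log_softmax(student_logits / T, dim=1)
    return F.kl_div(log_p_s, p_t, reduction="batchmean") * (T * T)


def ffl_step(subnets, fusion, images, labels):
    fmaps, sub_logits = zip(*[net(images) for net in subnets])
    fused_logits = fusion(list(fmaps))
    ensemble_logits = torch.stack(sub_logits).mean(0)

    loss = F.cross_entropy(fused_logits, labels)
    loss += sum(F.cross_entropy(l, labels) for l in sub_logits)
    # the ensemble of sub-networks teaches the fused classifier ...
    loss += kd_loss(fused_logits, ensemble_logits)
    # ... and the fused classifier teaches each sub-network back
    loss += sum(kd_loss(l, fused_logits) for l in sub_logits)
    return loss


if __name__ == "__main__":
    subnets = [SubNetwork(), SubNetwork()]
    fusion = FusionModule()
    x, y = torch.randn(8, 3, 32, 32), torch.randint(0, 10, (8,))
    print(ffl_step(subnets, fusion, x, y).item())
```

In this sketch the averaged sub-network logits act as the teacher for the fused classifier, while the fused classifier in turn acts as the teacher for each sub-network, mirroring the mutual online distillation described in the abstract.
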
Year | DOI | Venue
---|---|---
2019 | 10.1109/ICPR48806.2021.9412615 | 2020 25th International Conference on Pattern Recognition (ICPR)

DocType | Volume | ISSN
---|---|---
Journal | abs/1904.09058 | 1051-4651

Citations | PageRank | References
---|---|---
1 | 0.34 | 0
Authors |
---|
4 |
Name | Order | Citations | PageRank |
---|---|---|---|
Jangho Kim | 1 | 4 | 3.41 |
Minsung Hyun | 2 | 1 | 0.34 |
Inseop Chung | 3 | 1 | 1.36 |
Nojun Kwak | 4 | 862 | 63.79 |