Title
---
Knowledge Distillation Inspired Fine-Tuning of Tucker Decomposed CNNs and Adversarial Robustness Analysis
Abstract
---
Recent works on tensor decomposition of convolutional neural networks have paid little attention to fine-tuning the decomposed models effectively. We propose to improve both the accuracy and the adversarial robustness of decomposed networks over existing non-iterative methods by distilling knowledge from the computationally intensive undecomposed (teacher) model to the decomposed (student) model. Through a series of experiments, we demonstrate the effectiveness of knowledge distillation with different loss functions and compare it to the existing fine-tuning strategy of minimizing the cross-entropy loss with ground-truth labels. We conclude that the student networks obtained by the proposed approach are superior not only in accuracy but also in adversarial robustness, which is often compromised by existing methods.
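The abstract does not specify which distillation objectives the paper evaluates; as a point of reference, below is a minimal PyTorch sketch assuming the standard soft-target formulation of Hinton et al. (temperature-scaled KL divergence blended with hard-label cross-entropy). The function names and the `temperature` and `alpha` values are illustrative assumptions, not taken from the paper.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=4.0, alpha=0.9):
    """Blend of temperature-scaled KL divergence against the teacher's
    soft targets and cross-entropy against the ground-truth labels.
    Hyperparameters here are illustrative, not from the paper."""
    soft_teacher = F.softmax(teacher_logits / temperature, dim=1)
    log_student = F.log_softmax(student_logits / temperature, dim=1)
    # T^2 scaling keeps the soft-target gradient magnitudes comparable
    # across temperatures (Hinton et al., 2015)
    kd = F.kl_div(log_student, soft_teacher,
                  reduction="batchmean") * temperature ** 2
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1.0 - alpha) * ce

def finetune_step(teacher, student, optimizer, images, labels):
    """One fine-tuning step: the undecomposed teacher stays frozen;
    only the Tucker-decomposed student's parameters are updated."""
    teacher.eval()
    with torch.no_grad():
        teacher_logits = teacher(images)
    student_logits = student(images)
    loss = distillation_loss(student_logits, teacher_logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In this sketch, `teacher` would be the original undecomposed CNN and `student` its Tucker-decomposed counterpart; the paper itself compares knowledge distillation with different loss functions against plain cross-entropy fine-tuning.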
Year | DOI | Venue
---|---|---
2020 | 10.1109/ICIP40778.2020.9190672 | 2020 IEEE International Conference on Image Processing (ICIP)
Keywords | DocType | ISSN
---|---|---
Tucker Decomposition, Accelerating CNNs, Network Decomposition, Knowledge Distillation, Adversarial Robustness | Conference | 1522-4880
Citations | PageRank | References
---|---|---
0 | 0.34 | 0
Authors
---
4
Name | Order | Citations | PageRank |
---|---|---|---
Ranajoy Sadhukhan | 1 | 0 | 0.34 |
Avinab Saha | 2 | 0 | 0.34 |
Jayanta Mukherjee | 3 | 378 | 56.06 |
Amit Patra | 4 | 97 | 25.22 |