Title
Multi-Teacher Knowledge Distillation For Compressed Video Action Recognition On Deep Neural Networks
Abstract
Convolutional neural networks (CNNs) have recently made great progress in image classification. Action recognition differs from still-image classification: video data contains temporal information that plays an important role in video understanding. Most current CNN-based approaches to action recognition are computationally expensive, with large numbers of parameters and long computation times. The most efficient existing method trains a deep network directly on compressed video, which already encodes motion information; however, that model still has a large number of parameters. We propose a multi-teacher knowledge distillation framework for compressed video action recognition to compress this model. In this framework, the model is compressed by transferring knowledge from multiple teachers to a single small student model, and the student learns better from multiple teachers than from a single teacher. Experiments show that we achieve a 2.4x reduction in the number of parameters and a 1.2x reduction in computation, with a 1.79% accuracy loss on the UCF-101 dataset and a 0.35% accuracy loss on the HMDB51 dataset.
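The snippet below is a minimal PyTorch-style sketch of the kind of multi-teacher distillation objective the abstract describes: the student is trained on ground-truth action labels plus soft targets averaged over several teachers. The function name, the uniform averaging over teachers, and the temperature and alpha values are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def multi_teacher_kd_loss(student_logits, teacher_logits_list, labels,
                          temperature=4.0, alpha=0.5):
    """Hard-label cross-entropy combined with soft targets averaged over
    several teachers. Hyperparameters here are illustrative, not taken
    from the paper."""
    # Standard cross-entropy against the ground-truth action labels.
    hard_loss = F.cross_entropy(student_logits, labels)

    # Soften student and teacher distributions with the same temperature,
    # then average the KL divergence over all teachers.
    log_p_student = F.log_softmax(student_logits / temperature, dim=1)
    soft_loss = 0.0
    for teacher_logits in teacher_logits_list:
        p_teacher = F.softmax(teacher_logits / temperature, dim=1)
        soft_loss += F.kl_div(log_p_student, p_teacher, reduction='batchmean')
    # Scale by T^2 to keep gradient magnitudes comparable across temperatures.
    soft_loss = soft_loss / len(teacher_logits_list) * (temperature ** 2)

    return alpha * hard_loss + (1.0 - alpha) * soft_loss
```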
Year
2019
DOI
10.1109/icassp.2019.8682450
Venue
2019 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP)
Keywords
Deep Convolutional Model Compression, Action Recognition, Knowledge Distillation, Transfer Learning
Field
Data compression ratio, Pattern recognition, Convolutional neural network, Computer science, Distillation, Linear programming, Artificial intelligence, Knowledge engineering, Artificial neural network, Contextual image classification, Machine learning, Computation
DocType
Conference
ISSN
1520-6149
Citations
2
PageRank
0.36
References
0
Authors
3
Name            Order  Citations  PageRank
Meng-Chieh Wu   1      4          0.72
Ching-Te Chiu   2      304        38.60
Kun-Hsuan Wu    3      2          0.36