Abstract |
---|
This paper proposes new features for first-person activity recognition, extracted from images derived from optical flow. Features from convolutional neural networks (CNNs), which are designed for 2D images, have attracted attention from computer vision researchers due to their powerful discrimination capability, and recently a convolutional neural network for videos, called C3D (Convolutional 3D), was proposed. Generally, CNN / C3D features are extracted directly from original images/videos with a pre-trained network, since the network was trained on images/videos. In this paper, by contrast, we propose using images derived from optical flow (which we call "optical flow images") as inputs to the pre-trained network, for the following reasons: (i) optical flow images capture dynamic information, which is useful for activity recognition, whereas original images provide only static information, and (ii) the pre-trained network is likely to extract features with reasonable discrimination capability, since it was trained on a huge number of images spanning many categories. We carry out experiments on the "DogCentric Activity Dataset" and show the effectiveness of the extracted features. |
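The abstract's core idea is to render a dense optical flow field as an ordinary image so it can be fed to a network pre-trained on 2D images. The paper does not specify the exact encoding here; the sketch below assumes a common scheme (direction and magnitude mapped to channels of a 3-channel uint8 image) and uses only NumPy — in practice the flow itself would come from an estimator such as OpenCV's `calcOpticalFlowFarneback`.

```python
import numpy as np

def flow_to_image(flow):
    """Encode a dense optical-flow field (H, W, 2) as a 3-channel uint8
    "optical flow image": flow direction and magnitude become pixel
    intensities. A minimal sketch of the idea, not the paper's exact
    encoding (which is not given in the abstract)."""
    fx, fy = flow[..., 0], flow[..., 1]
    mag = np.sqrt(fx ** 2 + fy ** 2)          # motion magnitude per pixel
    ang = np.arctan2(fy, fx)                  # motion direction in [-pi, pi]
    # Normalize direction to [0, 255]; small epsilon avoids divide-by-zero
    # when the frame contains no motion at all.
    ang_norm = 255.0 * (ang + np.pi) / (2.0 * np.pi)
    mag_norm = 255.0 * mag / (mag.max() + 1e-8)
    # Stack into an image-shaped array a pre-trained 2D CNN can consume.
    img = np.stack([ang_norm, mag_norm, mag_norm], axis=-1)
    return img.astype(np.uint8)

# Toy flow field: uniform rightward motion of one pixel per frame.
flow = np.zeros((4, 4, 2), dtype=np.float32)
flow[..., 0] = 1.0
img = flow_to_image(flow)
print(img.shape)  # (4, 4, 3)
```

The resulting array has the same shape and dtype as an RGB frame, which is what lets an off-the-shelf CNN / C3D feature extractor process it without architectural changes.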
Year | Venue | Field |
---|---|---|
2015 | 2015 IEEE/SICE International Symposium on System Integration (SII) | Computer vision, Activity recognition, Pattern recognition, Convolutional neural network, Computer science, Time delay neural network, Artificial intelligence, Artificial neural network, Optical flow
DocType | Citations | PageRank
---|---|---|
Conference | 0 | 0.34
References | Authors
---|---|
0 | 3
Name | Order | Citations | PageRank |
---|---|---|---|
Asamichi Takamine | 1 | 0 | 0.34 |
Yumi Iwashita | 2 | 212 | 23.59 |
Ryo Kurazume | 3 | 622 | 74.18 |