Abstract |
---|
Remote sensing image scene classification is one of the hottest topics in high spatial resolution remote sensing image understanding. The complexity of the spatial distribution and structure patterns of objects in high-resolution remote sensing images makes the problem challenging. Deep learning methods, represented by convolutional neural networks, have powerful feature learning capabilities and perform well in remote sensing image scene classification tasks. Inspired by the spatial transformer network (STN), this paper proposes an effective remote sensing image scene classification network, the Spatial Transformer Fusion Network (STFN), which applies the spatial transformer network to the remote sensing image scene classification task. STFN uses a spatial transformer network to crop remote sensing images and extract attention areas. It then extracts features from both the original and the cropped images and finally fuses them. Experiments and evaluations are performed on two public remote sensing image datasets: the UC Merced Land-Use dataset with 21 scene categories and the NWPU-RESISC45 dataset with 45 scene categories. The experimental results show that the proposed method has a relatively simple network structure and produces competitive classification performance. |

Year | DOI | Venue
---|---|---|
2020 | 10.1109/IGARSS39084.2020.9324139 | IGARSS 2020 - 2020 IEEE INTERNATIONAL GEOSCIENCE AND REMOTE SENSING SYMPOSIUM

Keywords | DocType | Citations
---|---|---|
remote sensing, scene classification, convolutional neural network, spatial transformer network, spatial transformer fusion network | Conference | 0

PageRank | References | Authors
---|---|---|
0.34 | 0 | 6
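
The architecture described in the abstract can be illustrated with a minimal PyTorch sketch. This is not the authors' implementation: the class name `STFNSketch`, all layer sizes, and the concatenation-based fusion are assumptions; only the overall flow (an STN localization network predicts an affine transform that crops an attention region, then features of the original and cropped images are extracted and fused for classification) follows the abstract.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class STFNSketch(nn.Module):
    """Hypothetical sketch of the STFN idea: STN-based attention crop,
    two feature branches (original + crop), concatenation fusion."""
    def __init__(self, num_classes=21):
        super().__init__()
        # Localization network: predicts the 6 affine parameters of the crop.
        self.loc = nn.Sequential(
            nn.Conv2d(3, 8, 7), nn.MaxPool2d(2), nn.ReLU(),
            nn.Conv2d(8, 10, 5), nn.MaxPool2d(2), nn.ReLU(),
        )
        self.loc_fc = nn.Sequential(
            nn.Flatten(), nn.LazyLinear(32), nn.ReLU(), nn.Linear(32, 6)
        )
        # Initialize to the identity transform, as in the original STN paper.
        self.loc_fc[-1].weight.data.zero_()
        self.loc_fc[-1].bias.data.copy_(
            torch.tensor([1, 0, 0, 0, 1, 0], dtype=torch.float))
        # Toy feature extractor shared by both branches (a real model would
        # use a CNN backbone such as a pretrained ResNet).
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.classifier = nn.Linear(16 * 2, num_classes)

    def forward(self, x):
        # Predict the affine transform and sample the attention crop.
        theta = self.loc_fc(self.loc(x)).view(-1, 2, 3)
        grid = F.affine_grid(theta, x.size(), align_corners=False)
        cropped = F.grid_sample(x, grid, align_corners=False)
        # Fuse features of the original image and the attended crop.
        fused = torch.cat([self.features(x), self.features(cropped)], dim=1)
        return self.classifier(fused)

# 21 classes, matching the UC Merced Land-Use dataset mentioned above.
model = STFNSketch(num_classes=21)
logits = model(torch.randn(2, 3, 64, 64))
print(logits.shape)  # torch.Size([2, 21])
```

Initializing the localization layer to the identity transform means the network starts by "cropping" the whole image and learns to focus on discriminative regions during training.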