Title
Hyperspectral Image Classification via Spectral Pooling and Hybrid Transformer
Abstract
Hyperspectral images (HSIs) contain spatially structured information and pixel-level sequential spectral attributes. The continuous spectral features span hundreds of wavelength bands, and the differences between spectra are essential for fine-grained classification. Because of the limited receptive field of backbone networks, convolutional neural network (CNN)-based HSI classification methods, with their fixed kernel sizes and limited numbers of layers, struggle to model spectral-wise long-range dependencies. Recently, the self-attention mechanism of the transformer framework has been introduced to compensate for these limitations of CNNs and to mine the long-term dependencies of spectral signatures. Accordingly, many joint CNN-transformer architectures for HSI classification have been proposed to combine the merits of both networks. However, such architectures make it difficult to capture spatial-spectral correlation, and their CNNs distort the continuous nature of the spectral signature because they over-focus on spatial information, so the transformer easily encounters bottlenecks in modeling spectral-wise similarity and long-range dependencies. To address this problem, we propose a neighborhood enhancement hybrid transformer (NEHT) network. In particular, a simple 2D convolution module is adopted to achieve dimensionality reduction while minimizing the distortion of the original spectral distribution that stacked CNNs would introduce. Then, we extract group-wise spatial-spectral features in a parallel design to enhance the representation capability of each token. Furthermore, a feature fusion strategy is introduced to amplify the subtle discrepancies between spectra. Finally, the self-attention of the transformer is employed to mine the long-term dependencies among the enhanced feature sequences. Extensive experiments are performed on three well-known datasets, and the proposed NEHT network shows superiority over state-of-the-art (SOTA) methods.
Specifically, our proposed method outperforms the SOTA method by 0.46%, 1.05% and 0.75% on average in the overall accuracy, average accuracy and kappa coefficient metrics.
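The abstract's final stage applies transformer self-attention to the enhanced spectral feature sequences. The sketch below illustrates the standard scaled dot-product self-attention that step relies on, applied to a sequence of per-band feature tokens; the random projection weights and dimensions are illustrative placeholders, not the paper's learned parameters or exact architecture.

```python
import numpy as np

def spectral_self_attention(tokens, d_k, seed=0):
    """Scaled dot-product self-attention over spectral tokens.

    tokens: (n_bands, d) array, one feature vector per spectral group/band.
    d_k: projection dimension for queries/keys/values.
    Weights are random stand-ins for the learned projections in a real network.
    """
    n, d = tokens.shape
    rng = np.random.default_rng(seed)
    # Hypothetical query/key/value projection matrices (learned in practice).
    Wq, Wk, Wv = (rng.standard_normal((d, d_k)) / np.sqrt(d) for _ in range(3))
    Q, K, V = tokens @ Wq, tokens @ Wk, tokens @ Wv
    # Similarity of every band token to every other: (n_bands, n_bands).
    scores = Q @ K.T / np.sqrt(d_k)
    # Numerically stable row-wise softmax gives the attention weights.
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)
    # Each output token is a weighted mix of all value vectors, so distant
    # bands can contribute directly -- the long-range modeling CNNs lack.
    return weights @ V
```

Because every token attends to every other in one step, spectral dependencies between widely separated wavelength bands are captured without stacking layers, which is the motivation the abstract gives for the transformer branch.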
Year: 2022
DOI: 10.3390/rs14194732
Venue: REMOTE SENSING
Keywords: hyperspectral image (HSI) classification, convolutional neural networks (CNNs), self-attention mechanism, subtle discrepancy, feature fusion strategy
DocType: Journal
Volume: 14
Issue: 19
ISSN: 2072-4292
Citations: 0
PageRank: 0.34
References: 0
Authors: 5
Name            Order  Citations  PageRank
Chen Ma         1      0          0.34
Junjun Jiang    2      1138       74.49
Huayi Li        3      0          0.68
Xiaoguang Mei   4      103        15.35
Chengchao Bai   5      0          2.03