Title
Feature Pyramid Transformer
Abstract
Feature interactions across space and scales underpin modern visual recognition systems because they introduce beneficial visual contexts. Conventionally, spatial contexts are passively hidden in the CNN’s increasing receptive fields or actively encoded by non-local convolution. Yet, the non-local spatial interactions are not across scales, and thus they fail to capture the non-local contexts of objects (or parts) residing in different scales. To this end, we propose a fully active feature interaction across both space and scales, called Feature Pyramid Transformer (FPT). It transforms any feature pyramid into another feature pyramid of the same size but with richer contexts, by using three specially designed transformers in self-level, top-down, and bottom-up interaction fashion. FPT serves as a generic visual backbone with fair computational overhead. We conduct extensive experiments in both instance-level (i.e., object detection and instance segmentation) and pixel-level segmentation tasks, using various backbones and head networks, and observe consistent improvement over all the baselines and the state-of-the-art methods (Code is open-sourced at https://github.com/ZHANGDONG-NJUST).
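The three-way interaction described in the abstract (a self-level transformer plus top-down and bottom-up cross-scale transformers, whose outputs are fused back into a pyramid of unchanged sizes) can be illustrated with a short sketch. The PyTorch code below is a minimal illustration under simplifying assumptions, not the authors' released implementation: plain single-head dot-product attention stands in for the paper's Self-, Grounding-, and Rendering-Transformers, all levels are assumed to share one channel width (as in FPN), and the names `CrossScaleAttention` and `FPTSketch` are hypothetical.

```python
# Minimal sketch of an FPT-style pyramid interaction layer (illustrative only).
import torch
import torch.nn as nn


class CrossScaleAttention(nn.Module):
    """Single-head dot-product attention from a query map to a key/value map."""

    def __init__(self, dim):
        super().__init__()
        self.q = nn.Conv2d(dim, dim, 1)
        self.k = nn.Conv2d(dim, dim, 1)
        self.v = nn.Conv2d(dim, dim, 1)

    def forward(self, x_q, x_kv):
        B, C, Hq, Wq = x_q.shape
        q = self.q(x_q).flatten(2).transpose(1, 2)   # B, Nq, C
        k = self.k(x_kv).flatten(2)                   # B, C, Nkv
        v = self.v(x_kv).flatten(2).transpose(1, 2)   # B, Nkv, C
        attn = torch.softmax(q @ k / C ** 0.5, dim=-1)
        # Output keeps the query's spatial size, whatever the key/value scale.
        return (attn @ v).transpose(1, 2).reshape(B, C, Hq, Wq)


class FPTSketch(nn.Module):
    """Enrich every pyramid level with self-level, top-down, and bottom-up
    context, then fuse, so the output pyramid has the same sizes as the input."""

    def __init__(self, dim, num_levels):
        super().__init__()
        mk = lambda: nn.ModuleList(CrossScaleAttention(dim) for _ in range(num_levels))
        self.self_t, self.td_t, self.bu_t = mk(), mk(), mk()
        # Concatenate [input, self, top-down, bottom-up] and project back to dim.
        self.fuse = nn.ModuleList(nn.Conv2d(4 * dim, dim, 1) for _ in range(num_levels))

    def forward(self, feats):
        # feats: list of (B, C, H, W) maps, ordered fine (index 0) -> coarse.
        out = []
        for i, x in enumerate(feats):
            coarser = feats[i + 1] if i + 1 < len(feats) else x  # coarsest level: fall back to self
            finer = feats[i - 1] if i > 0 else x                  # finest level: fall back to self
            ctx = [
                self.self_t[i](x, x),        # non-local self-level context
                self.td_t[i](x, coarser),    # top-down: query the semantically richer map
                self.bu_t[i](x, finer),      # bottom-up: query the detail-rich map
            ]
            out.append(self.fuse[i](torch.cat([x] + ctx, dim=1)))
        return out


if __name__ == "__main__":
    # Three-level pyramid, fine to coarse; output sizes match input sizes.
    feats = [torch.randn(2, 64, 32 >> i, 32 >> i) for i in range(3)]
    outs = FPTSketch(dim=64, num_levels=3)(feats)
    print([tuple(o.shape) for o in outs])
```

As in the paper's description, the transformation is size-preserving, so such a module could sit between any backbone's feature pyramid and an unchanged head network.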
Year
2020
DOI
10.1007/978-3-030-58604-1_20
Venue
European Conference on Computer Vision
Keywords
Feature pyramid, Visual context, Transformer, Object detection, Instance segmentation, Semantic segmentation
DocType
Conference
Citations
4
PageRank
0.40
References
3
Authors
6
Name            Order  Citations  PageRank
Dong Zhang      1      5          1.13
Hanwang Zhang   2      1965       78.34
Jinhui Tang     3      5180       212.18
Meng Wang       4      161        12.17
Xian-Sheng Hua  5      6566       328.17
Sun Qianru      6      227        19.41