Abstract |
---|
Recognition algorithms based on convolutional networks (CNNs) typically use the output of the last layer as a feature representation. However, the information in this layer may be too coarse spatially to allow precise localization. Conversely, earlier layers may be precise in localization but will not capture semantics. To get the best of both worlds, we define the hypercolumn at a pixel as the vector of activations of all CNN units above that pixel. Using hypercolumns as pixel descriptors, we show results on three fine-grained localization tasks: simultaneous detection and segmentation, where we improve the state of the art from 49.7 mean $\mathrm{AP}^r$ to 62.4; keypoint localization, where we get a 3.3 point boost over a strong regression baseline using CNN features; and part labeling, where we show a 6.6 point gain over a strong baseline. |
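The abstract defines the hypercolumn at a pixel as the concatenated vector of activations from every CNN layer above that pixel, which requires upsampling each coarser feature map back to image resolution before stacking. A minimal numpy sketch of that idea (the function names and toy feature maps are illustrative; nearest-neighbor upsampling is used here for brevity, whereas the paper interpolates bilinearly):

```python
import numpy as np

def upsample_nearest(fmap, out_h, out_w):
    """Nearest-neighbor upsampling of a (C, h, w) feature map to (C, out_h, out_w)."""
    c, h, w = fmap.shape
    rows = np.arange(out_h) * h // out_h
    cols = np.arange(out_w) * w // out_w
    return fmap[:, rows[:, None], cols[None, :]]

def hypercolumn(feature_maps, out_h, out_w):
    """Stack activations from every layer above each pixel into one descriptor.

    feature_maps: list of (C_k, h_k, w_k) arrays from successive CNN layers.
    Returns a (sum(C_k), out_h, out_w) array; the vector at [:, i, j] is the
    hypercolumn descriptor for pixel (i, j).
    """
    ups = [upsample_nearest(f, out_h, out_w) for f in feature_maps]
    return np.concatenate(ups, axis=0)

# Toy example: three "layers" with decreasing spatial resolution.
rng = np.random.default_rng(0)
fmaps = [rng.standard_normal((8, 32, 32)),
         rng.standard_normal((16, 16, 16)),
         rng.standard_normal((32, 8, 8))]
hc = hypercolumn(fmaps, 32, 32)
print(hc.shape)  # (56, 32, 32): each pixel gets an 8 + 16 + 32 = 56-dim descriptor
```

The descriptor thus combines coarse, semantically rich activations from late layers with spatially precise ones from early layers, which is the trade-off the abstract describes.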
Year | DOI | Venue |
---|---|---|
2017 | 10.1109/TPAMI.2016.2578328 | IEEE Trans. Pattern Anal. Mach. Intell. |
Keywords | Field | DocType
---|---|---|
Image segmentation, Semantics, Object detection, Proposals, Labeling, Nonlinear optics, Optical imaging | Computer vision, Object detection, Scale-space segmentation, Pattern recognition, Computer science, Segmentation, Segmentation-based object categorization, Image segmentation, Artificial intelligence, Pixel, Connected-component labeling, Semantics | Journal
Volume | Issue | ISSN
---|---|---|
39 | 4 | 0162-8828
Citations | PageRank | References
---|---|---|
14 | 0.80 | 42
Authors |
---|
4 |
Name | Order | Citations | PageRank |
---|---|---|---|
Bharath Hariharan | 1 | 1052 | 65.90 |
Pablo Arbelaez | 2 | 3626 | 173.00 |
Ross B. Girshick | 3 | 21921 | 927.22 |
Jitendra Malik | 4 | 39445 | 3782.10 |