Title
AMDCNet: An attentional multi-directional convolutional network for stereo matching
Abstract
Stereo matching refers to finding the correspondences of real-world points between two different representations of a scene (e.g., intensity images, depth images, three-dimensional point sets). Existing stereo matching methods in the literature exhibit two shortcomings. Firstly, during feature-region extraction, these methods must measure the distance between regions, but measuring the texture distribution of a region is difficult and may cause matching to fail. Secondly, the templates used in these methods are rectangles of fixed size, whereas most natural images exhibit rich information and are better served by flexible templates. In this paper, we propose an attentional multi-directional convolutional network (AMDCNet) to circumvent these issues. Our AMDCNet approach consists of three stages: extracting the visual sensitivity factor, constructing the multi-directional aggregation template, and optimizing via left–right consistency detection. We evaluate our approach on standard images from the Middlebury test dataset, Scene Flow, and KITTI 2015. Experimental results show that AMDCNet reduces the mismatch rate and achieves a significant improvement in accuracy over several classical methods; in some scenarios, it surpasses advanced deep-learning-based methods. The model code, dataset, and experimental results are available at: https://github.com/WangHewei16/Attentional-Multi-Directional-Convolution-Network.
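The left–right consistency detection mentioned in the abstract is a standard post-processing check in stereo matching: a disparity estimated from the left view is kept only if the right view's disparity map, sampled at the matched location, agrees with it. The sketch below is a generic, minimal version of that check (not the paper's AMDCNet implementation); the function name and the NaN-invalidation convention are illustrative assumptions.

```python
import numpy as np

def left_right_consistency(disp_left, disp_right, threshold=1.0):
    """Invalidate disparities that fail the left-right consistency check.

    A pixel (y, x) in the left disparity map is consistent when the right
    map, sampled at the matched column x - d_L(y, x), predicts roughly the
    same disparity. Inconsistent pixels (typically occlusions or
    mismatches) are set to NaN. This is a generic sketch, not the
    paper's exact procedure.
    """
    h, w = disp_left.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Column in the right image that each left pixel maps to.
    xr = np.round(xs - disp_left).astype(int)
    in_bounds = (xr >= 0) & (xr < w)
    xr_clipped = np.clip(xr, 0, w - 1)
    # |d_L(x) - d_R(x - d_L(x))| must stay below the threshold.
    diff = np.abs(disp_left - disp_right[ys, xr_clipped])
    valid = in_bounds & (diff <= threshold)
    out = disp_left.astype(float).copy()
    out[~valid] = np.nan  # invalidated pixels
    return out, valid
```

For a constant disparity of 2, the leftmost two columns map outside the right image and are invalidated; any pixel whose right-view counterpart disagrees by more than the threshold is likewise dropped.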
Year
2022
DOI
10.1016/j.displa.2022.102243
Venue
Displays
Keywords
Stereo matching, Convolution aggregation network, Visual sensitive, Cost aggregation
DocType
Journal
Volume
74
ISSN
0141-9382
Citations
0
PageRank
0.34
References
9
Authors
6
Name                   | Order | Citations | PageRank
Hewei Wang             | 1     | 0         | 0.34
Yijie Li               | 2     | 0         | 0.34
Shijia Xi              | 3     | 0         | 0.34
Shaofan Wang           | 4     | 11        | 10.04
Muhammad Salman Pathan | 5     | 0         | 0.34
Soumyabrata Dev        | 6     | 62        | 13.94