Two-view correspondence learning via complex information extraction
Authors: Jun Chen, Yue Gu, Linbo Luo, Wenping Gong, Yong Wang
Affiliation:
1. School of Automation, China University of Geosciences, Wuhan 430074, China
2. Hubei Key Laboratory of Advanced Control and Intelligent Automation for Complex Systems, Wuhan 430074, China
3. Engineering Research Center of Intelligent Technology for Geo-Exploration, Ministry of Education, Wuhan 430074, China
4. School of Mechanical Engineering and Electronic Information, China University of Geosciences, Wuhan 430074, China
5. Faculty of Engineering, China University of Geosciences, Wuhan 430074, China
Abstract:

Establishing reliable correspondences plays a vital role in many feature-matching-based computer vision tasks. Given putative correspondences of feature points in two images, in this paper we propose a novel network that infers the probability of each correspondence being an inlier or an outlier and regresses the relative pose encoded by the essential matrix. Previous research proposed an end-to-end permutation-equivariant classification network based on multi-layer perceptrons and context normalization. However, context normalization treats every correspondence equally and ignores channel information, which can weaken the representation of potential inliers. To address this problem, we apply attention mechanisms in our network to capture complex information from the feature maps. Specifically, we introduce two types of attention blocks: a spatial attention block that captures complex spatial contextual information, and a channel attention block that extracts rich channel information. To obtain richer contextual information and feature maps with stronger representative capacity, we combine these attention blocks with the PointCN block to form a new network with strong representative ability. Experimental results on several benchmark datasets show that our method significantly improves outlier removal and camera pose estimation over state-of-the-art methods.
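To make the described architecture concrete, the following is a minimal PyTorch sketch of a PointCN-style residual block augmented with channel and spatial attention over putative correspondences. The SE/CBAM-style attention formulations, the layer sizes, and the AttentivePointCNBlock composition are assumptions for illustration only, not the authors' exact architecture.

# Minimal sketch of the building blocks described in the abstract.
# The attention formulations and hyperparameters below are illustrative assumptions.
import torch
import torch.nn as nn


def context_norm(x, eps=1e-5):
    # Normalize each channel across the N putative correspondences (dim=2).
    # x: (B, C, N, 1)
    mean = x.mean(dim=2, keepdim=True)
    var = x.var(dim=2, keepdim=True)
    return (x - mean) / torch.sqrt(var + eps)


class ChannelAttention(nn.Module):
    # SE-style channel attention: pool over correspondences, reweight channels.
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x):                          # x: (B, C, N, 1)
        w = self.fc(x.mean(dim=2, keepdim=True))   # (B, C, 1, 1) channel weights
        return x * w


class SpatialAttention(nn.Module):
    # CBAM-style attention over the correspondence dimension.
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size=1)

    def forward(self, x):                          # x: (B, C, N, 1)
        avg = x.mean(dim=1, keepdim=True)          # (B, 1, N, 1)
        mx, _ = x.max(dim=1, keepdim=True)         # (B, 1, N, 1)
        w = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return x * w


class AttentivePointCNBlock(nn.Module):
    # PointCN residual block combined with the two attention blocks.
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=1)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=1)
        self.bn1 = nn.BatchNorm2d(channels)
        self.bn2 = nn.BatchNorm2d(channels)
        self.ca = ChannelAttention(channels)
        self.sa = SpatialAttention()

    def forward(self, x):                          # x: (B, C, N, 1)
        out = torch.relu(self.bn1(context_norm(self.conv1(x))))
        out = torch.relu(self.bn2(context_norm(self.conv2(out))))
        out = self.sa(self.ca(out))                # enrich channel, then spatial context
        return x + out                             # residual connection


if __name__ == "__main__":
    x = torch.randn(2, 128, 2000, 1)               # 2000 putative correspondences, 128-dim features
    print(AttentivePointCNBlock(128)(x).shape)     # torch.Size([2, 128, 2000, 1])

In such a design, stacking several of these blocks and reading out per-correspondence logits would yield the inlier/outlier probabilities, from which the essential matrix can be estimated with a weighted eight-point algorithm, as is common in this line of work.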

Keywords: